Fixing Cryptography is not Always Easy

The latest version of the Zloader banking malware is (also) exploiting a Microsoft Signature Verification bug (CVE-2013-3900) for which a fix has existed since 2013 (see for example here for more details). In this case the security issue is not due to users failing to install mandatory security patches, but to the fact that the patch is optional and must be enabled manually.

The problem is that the stricter signature verification implemented by the Microsoft Authenticode patch which fixes this bug carries an extremely high risk of false positives in many situations: for example, some legitimate installers can be identified as having an invalid signature. So Microsoft decided to let each user decide whether the patch would create more problems than it solves.
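To give an idea of what “optional” means here: the stricter verification is an opt-in setting controlled by a registry value, as described in Microsoft’s advisory for CVE-2013-3900. Below is a minimal sketch of how an administrator could enable it programmatically; the registry paths and the EnableCertPaddingCheck value are taken from the public advisory as I recall it, so verify them against the current advisory before use.

```python
# Minimal sketch: opt in to the stricter Authenticode verification for
# CVE-2013-3900 (assumptions: registry paths and value as per Microsoft's
# advisory; must run as Administrator on Windows).
import winreg

PATHS = [
    r"Software\Microsoft\Cryptography\Wintrust\Config",
    # 32-bit view on 64-bit Windows
    r"Software\Wow6432Node\Microsoft\Cryptography\Wintrust\Config",
]

for path in PATHS:
    key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, path)
    # The advisory specifies a REG_SZ value of "1" (not a DWORD).
    winreg.SetValueEx(key, "EnableCertPaddingCheck", 0, winreg.REG_SZ, "1")
    winreg.CloseKey(key)
```

Deleting the value rolls the system back to the legacy, more permissive behaviour, which is exactly the trade-off Microsoft left in the users’ hands.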

The Zloader malware uses this “bug” to be able to run modified (and therefore no longer validly signed) libraries. But this requires that the malware is already on the system, so applying the patch does not prevent a system from being infected by this malware.

The issue that this event points out, once again, is how difficult it is to balance strict security, in particular where cryptography is involved, against the usability and availability of systems and services.

CISA Catalogue of Known and Exploited Vulnerabilities

The Cybersecurity & Infrastructure Security Agency (CISA) has recently published the “Binding Operational Directive 22-01”, whose purpose is to identify known, actively exploited vulnerabilities and to mandate their remediation, so as to reduce the associated risks.

In other words, CISA has identified the riskiest and most exploited vulnerabilities, creating a catalogue (here) which anybody can use to identify the vulnerabilities that must be patched first. Indeed, running a vulnerability scanner (or performing a penetration test) too often produces an extremely long list of vulnerabilities, classified by severity, typically according to the CVSS v3 standard: but which ones are really important, risky or even scary? A catalogue of vulnerabilities actually exploited by attackers can help select the ones that really matter and should be patched as soon as possible.
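As a practical illustration, the catalogue is also published as a machine-readable JSON feed, so scanner output can be cross-checked against it automatically. The sketch below assumes the feed URL and the “vulnerabilities” / “cveID” field names as published at the time of writing; check them against CISA’s site before relying on them.

```python
# Minimal sketch: prioritise scanner findings using the CISA KEV feed.
import json
import urllib.request

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def load_kev_cves() -> set[str]:
    """Return the set of CVE IDs currently in the KEV catalogue."""
    with urllib.request.urlopen(KEV_URL) as resp:
        catalogue = json.load(resp)
    return {item["cveID"] for item in catalogue["vulnerabilities"]}

# CVE list as it could come out of a vulnerability scan (placeholders).
scan_findings = ["CVE-2013-3900", "CVE-2021-44228", "CVE-2019-19781"]

known_exploited = load_kev_cves()
patch_first = [cve for cve in scan_findings if cve in known_exploited]
print("Patch these first:", patch_first)
```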

Again Social Engineering and Fraud

Interesting article by Brian Krebs (here) about a social engineering fraud which obviously exploits the “human as the weakest link” but also, in a way, uses security to defeat security itself.

In a few words: the scammer phones the victim and asks her/him to prove to be the rightful owner of her/his bank account by providing the username and a code that she/he will receive as a second-factor authentication (2FA) code. What the scammer actually does with the username and the 2FA code is to reset the password of the victim’s bank account and then transfer money out of it.

What goes wrong here is, first, that the victim should authenticate the caller, not vice versa, and second, that the victim should never divulge a 2FA code to another person. Thus, by abusing the human weakest link and a “secure” password reset process, the scammer manages to perform the fraud.

On the technical side, one should be very careful in evaluating the security risks associated with a self-service password reset process, including social engineering attacks like this one.

Risks of Cryptography

Cryptography is too often considered the “final” solution for IT security. In reality cryptography is often useful, rarely easy to use, and brings its own risks, including an “all-in or all-out” scenario.

Indeed, suppose that your long-lived master key is exposed: then all the security provided by cryptography is immediately lost, which seems to be what happened in this case.
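To make the “all-in or all-out” point concrete, consider a typical envelope-encryption key hierarchy, where a long-lived master key wraps per-record data keys. The following is a purely illustrative sketch (using the third-party Python cryptography package, not any specific product’s API): whoever obtains the master key can unwrap every data key and therefore decrypt everything at once.

```python
# Illustrative sketch of envelope encryption and why the master key is
# an all-in or all-out asset (pip install cryptography).
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()   # long-lived master key
master = Fernet(master_key)

# Each record gets its own data key, stored only in wrapped form.
data_key = Fernet.generate_key()
wrapped_key = master.encrypt(data_key)
ciphertext = Fernet(data_key).encrypt(b"account data")

# An attacker who obtains only the master key recovers everything:
recovered_key = Fernet(master_key).decrypt(wrapped_key)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
assert plaintext == b"account data"
```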

Solar Superstorms and IT BC/DR

Very interesting research paper with a scary title: “Solar Superstorms: Planning for an Internet Apocalypse”. It is about a Black Swan event which has actually already happened, in 1859: a major solar Coronal Mass Ejection (CME), which has some chance of happening again in the near future. Without entering into any detail (the research paper is quite readable), the main point is that if a CME of the 1859 magnitude hit Earth today, the consequences would be catastrophic.

Apart from the impacts on the electric grid, and in particular on long-distance power distribution (but power operators should be aware of this threat), the paper points out that there would be severe damage to satellites, in particular low-orbit ones, with possible total failure of satellite communication, including GPS, television broadcasting and data (internet) transmission. But equally at risk are long-distance communication cables, most notably submarine optical-fibre cables. Actually, the optical fibres per se would not be affected, but the optical repeaters placed along the fibres every 50 – 150 km at the bottom of the oceans would burn out and stop almost all communication between continents.

I remember discussing a similar scenario with some physicist friends years ago and wondering whether it could be a real threat. It seems that it can be, but is the cost of mitigating this threat worth it? Should we act today?

On Evil Maid Attacks and Remote Working

I have just been reading this post by Dolos Group researchers (see here also for another report) on a physical attack against an unattended or stolen laptop, of the kind that goes under the name of “Evil Maid” attack. Besides the technical expertise required to devise the attack, balanced by its simplicity and speed of execution, the consequences can be dire: the attacker can fully access the company’s network from her/his own device, impersonating the laptop’s owner.

We know very well that IT remote working brings quite a few extra risks from the security point of view, and those related to a physical attack are usually not the first or the most worrisome. Managing the IT security of a standalone PC in a company’s office is quite different from managing the IT security of a remote working laptop. How can we provide and monitor the IT security of remote working devices? Which is the best approach, at least as of today? Is it better to strengthen the IT security of portable devices from their hardware up, or to give up on that completely and adopt virtual/cloud desktop (and/or VDI) solutions, with either approach coupled with zero trust, multi-factor authentication, security monitoring, etc.?

In any case, it seems clear that the IT security risks associated with remote working are not yet fully appreciated, in particular given the explosion of remote working due to the pandemic.

From DNS Attacks to Input Validation and (Zero) Trust

I have been reading this article on DNS attacks, which reminds us that if an attacker controls a DNS server or is able to poison a DNS cache, malware or at least unsafe data can be delivered to a target application. DNS data is, in any case, input to an application, and as such it should be validated prior to any processing. Instead, it can happen that it is treated as trusted data, in particular if DNSSEC is used.
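As a minimal sketch of what validating DNS data can mean in practice, suppose an application expects a TXT record to carry a short alphanumeric token. The record name, the token format and the use of the third-party dnspython package are all assumptions made for illustration:

```python
# Minimal sketch: treat DNS answers as untrusted input and validate
# them before any processing (pip install dnspython).
import re
import dns.resolver

TOKEN_RE = re.compile(r"[A-Za-z0-9_-]{1,64}")

def fetch_token(name: str) -> str:
    """Resolve a TXT record and validate it before any further use."""
    answers = dns.resolver.resolve(name, "TXT")
    # Join the character strings of the first TXT answer.
    raw = b"".join(answers[0].strings).decode("ascii")
    # Accept only the exact format we expect and reject everything else:
    # a poisoned cache or rogue server could return anything here.
    if not TOKEN_RE.fullmatch(raw):
        raise ValueError(f"unexpected TXT data for {name!r}")
    return raw
```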

Unfortunately the situation is a little complex: with very few exceptions (such as “DNS over HTTPS – DoH“), applications do not retrieve DNS data directly but make the Operating System do it for them. And any application trusts the Operating System on which it is running, which should imply that data received from the Operating System can be trusted; or can it? Which System Calls should an application trust, and which should it not? What kind of validation should an application expect the Operating System to perform on data passed through System Calls? The Devil, and unfortunately the Security, is in the details…

Which brings me to consider the concept of zero trust in this respect: first of all, in IT we always need to trust the security of something, at least of the Hardware (e.g. the CPU), and unless we look at Confidential Computing (see here for example), we also need to trust quite a lot of Software.

So we are back to Input Validation, which can make applications bigger, slower and in some cases even more difficult to maintain; but accepting as input only data whose format and content we expect surely makes life much safer.

Apple Private Relay: Is TOR going mainstream?

Apple has announced (see here) a new iCloud premium feature called iCloud Private Relay:

when browsing with Safari, Private Relay ensures all traffic leaving a user’s device is encrypted, so no one between the user and the website they are visiting can access and read it, not even Apple or the user’s network provider. All the user’s requests are then sent through two separate internet relays. The first assigns the user an anonymous IP address that maps to their region but not their actual location. The second decrypts the web address they want to visit and forwards them to their destination. This separation of information protects the user’s privacy because no single entity can identify both who a user is and which sites they visit.

More information can be found for example here and here.

This looks very much like a form of Onion Routing, which is the theory and practice behind TOR (The Onion Router, indeed). It will be very interesting to see how it works out, because it has the potential to become a disruptive technology improving the privacy and security of all of us when browsing the Internet.
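To see why no single relay can link user and destination, here is a toy sketch of two-hop onion layering (purely illustrative, not Apple’s actual protocol: it uses the Python cryptography package and pre-shared symmetric keys for simplicity, whereas real systems negotiate keys per hop):

```python
# Toy sketch of two-hop onion layering (pip install cryptography).
from cryptography.fernet import Fernet

relay1_key, relay2_key = Fernet.generate_key(), Fernet.generate_key()

# Client: the innermost layer holds the destination and is readable
# only by relay2; relay1 just sees an opaque blob plus the client's IP.
inner = Fernet(relay2_key).encrypt(b"GET https://example.org/")
outer = Fernet(relay1_key).encrypt(inner)

# Relay1: knows who the client is, peels one layer, learns nothing more.
to_relay2 = Fernet(relay1_key).decrypt(outer)

# Relay2: peels the last layer and forwards, never seeing the client.
request = Fernet(relay2_key).decrypt(to_relay2)
assert request == b"GET https://example.org/"
```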