On Evil Maid Attacks and Remote Working

I have just been reading this post by Dolos Group researchers (see here also for another report) on a physical attack on an unattended or stolen laptop, of the kind that goes under the name of an “Evil Maid” attack. The technical expertise required to perform the attack is balanced by its simplicity and speed, and the consequences can be dire: the attacker can fully access the company’s network from her/his own device, impersonating the owner of the laptop.

We know very well that IT remote working brings quite a few extra security risks, and those related to a physical attack are usually not the first or most worrisome. Managing the IT security of a standalone PC in a company’s office is quite different from managing the IT security of a remote working laptop. How can we provide and monitor the IT security of remote working devices? What is the best approach, at least as of today? Is it better to strengthen the IT security of portable devices starting from their hardware up, or to give up on it completely and adopt virtual/cloud (and/or VDI) desktop solutions, both approaches coupled with zero trust, multi-factor authentication, security monitoring and so on?

In any case, it seems clear that the IT security risks associated with remote working have not yet been fully appreciated, in particular after the explosion of remote working due to the pandemic.

Hardware Vulnerability of Some FIDO U2F Keys

Hardware security keys, like the Google Titan Key or Yubico YubiKey, implementing the FIDO U2F protocol, provide what is considered possibly the most secure second-factor authentication (2FA) measure. Indeed the private key is protected in hardware and should be impossible to copy, so that only physical possession of the hardware token provides the authentication.

But recent research (see here for the research paper, and here and here for some comments) shows that a class of chips (the NXP A700X) is vulnerable (CVE-2021-3011) to a physical hardware attack which allows an attacker to extract the private key from the chip itself and then clone the security key. To fully succeed, the attack requires knowing the credentials of the service(s) for which the security key works as 2FA, plus physical possession of the key itself for at least 10 hours. The security key is then disassembled and the secret key is obtained by measuring the electromagnetic radiation emitted by the chip during ECDSA signatures. Finally a new security key can be cloned from the stolen secret key.

From a theoretical point of view, this vulnerability violates the fundamental requirement of a hardware security key, namely that the private key cannot be extracted from the hardware in any way. But it should also be noted that the FIDO U2F protocol has some countermeasures which can help mitigate this vulnerability, like a counter of the authentications performed between a security key and a server: the server can check whether the security key is sending the correct next sequence number, which would differ from the one provided by a cloned security key.
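
As an illustration, here is a minimal sketch of the server-side counter check (hypothetical names and storage, not code from any FIDO library; a real server of course also verifies the signature itself):

```python
# Minimal sketch of the U2F/FIDO signature-counter check on the server side.
# Hypothetical storage and names, for illustration only.

stored_counters = {}  # key_handle -> last counter value seen by the server

def check_counter(key_handle: str, reported_counter: int) -> bool:
    """Accept an authentication only if the key's counter strictly increased.

    A cloned key authenticating in parallel with the original will sooner or
    later report a counter lower than or equal to the stored one, which the
    server can treat as a signal that the key may have been cloned.
    """
    last = stored_counters.get(key_handle, -1)
    if reported_counter <= last:
        return False  # possible clone: counter did not increase
    stored_counters[key_handle] = reported_counter
    return True
```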

In practical terms, it all depends on the risks associated with the use of the security key and on the possibility that someone could borrow your security key for at least 10 hours without anybody noticing. If this constitutes a real risk, then check whether your security key is affected by this vulnerability and, if it is, replace it. Otherwise, if this attack scenario is not a major threat, it should be reasonably safe to keep using even a vulnerable security key for a little while, while staying up to date with possible new developments or information from the security key manufacturers. Even in this case, vulnerable security keys should anyway be replaced as soon as convenient.


More on Security Patching and Updates

These days I keep coming back to the “security patching and updates” issue. So I am going to add another couple of comments.

The first is about Ripple20 (here the official link, but the news is already widespread), which carries an impressive number of “CVSS v3 base score 10.0” vulnerabilities. The question is again:

how can we secure all of these millions or billions of vulnerable devices, since it seems very likely that security patching is not an option for most of them?

The second one is very hypothetical, that is, in the “food for thought” class.

Assume, as some say, that in 2030 Quantum Computers will be powerful enough to break RSA and other asymmetric cryptographic algorithms, and that at the same time (or just before) Post Quantum Cryptography will deliver new secure algorithms to substitute RSA and friends. At first sight all looks fine: we will just have to do a lot of security patching/updating of servers, clients, applications, CA certificates, credit cards (hardware), telephone SIMs (hardware), security keys (hardware), Hardware Security Modules (HSM) and so on and on… But what about all those micro/embedded/IoT devices into which the current cryptographic algorithms are baked? And all of those large devices (like aircraft, but also cars) which have been designed with cryptographic algorithms baked into them (no change possible)? We will probably have to choose between living dangerously and buying a new one. Or we could be forced to buy a new one, if the device is not able to work anymore because its old algorithm is no longer accepted by the rest of the world.
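
One lesson we can already draw is “crypto agility”: software should treat cryptographic algorithms as replaceable configuration rather than hard-wired constants. A minimal sketch of the idea (the registry and names are hypothetical, and the HMAC stand-in is just a placeholder, not a real signature scheme):

```python
# Sketch of "crypto agility": the algorithm is looked up in a registry at run
# time instead of being hard-coded, so an updatable device could switch, say,
# from RSA to a post-quantum scheme. Registry and names are hypothetical.
import hashlib
import hmac
from typing import Callable, Dict, Tuple

# algorithm name -> (sign, verify) implementations
SIGNATURE_REGISTRY: Dict[str, Tuple[Callable, Callable]] = {}

def register(name: str, signer: Callable, verifier: Callable) -> None:
    SIGNATURE_REGISTRY[name] = (signer, verifier)

def sign(algorithm: str, key: bytes, message: bytes) -> bytes:
    signer, _ = SIGNATURE_REGISTRY[algorithm]  # chosen by configuration
    return signer(key, message)

# Dummy stand-in: in real code this would be, e.g., RSA today and a
# post-quantum scheme tomorrow, switched by changing the configured name.
register(
    "hmac-sha256",
    lambda k, m: hmac.new(k, m, hashlib.sha256).digest(),
    lambda k, m, s: hmac.compare_digest(hmac.new(k, m, hashlib.sha256).digest(), s),
)

ALGORITHM = "hmac-sha256"  # configuration, not a constant baked into the code
print(sign(ALGORITHM, b"key", b"message").hex())
```

A device with the algorithm “baked in” is the degenerate case where this lookup is hard-wired to a single implementation: no registry, no update path, and hence no migration when the algorithm is broken.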

PS. Concerning Quantum Computers, as far as I know nobody claims that a full Quantum Computer will be functioning by 2030; this is only the earliest estimated arrival, and it could take much, much longer, or even never happen!

PS. I deliberately do not want to consider the scenario in which full Quantum Computers are available and Post Quantum Cryptography is not.

Security and Cryptography do not always go Hand in Hand with Availability

I have been reading this article by Ars Technica on smartphone 2FA authenticator apps, and put it together with some thoughts about hardware security keys (implementing the U2F [CTAP1] standard by the FIDO Alliance). In both scenarios, the security of the second authentication factor (2FA) is based on the confidentiality of a cryptographic private key, held securely in the smartphone in the first case and in the USB hardware key in the second. In other words, the private key cannot be read, copied or transferred to any other device, as is also the case for any smart card (or integrated circuit card, ICC) or any Hardware Security Module (HSM).

Good for Security, but what about Availability?

Consider this very simple (and secure) scenario: my smartphone with a 2FA authenticator app (or alternatively my U2F USB key) utterly breaks down, and I have no way, at least given usual economic and time constraints, to recover it.

Now what about access to all those services which I set up in a very secure way with 2FA?

There is no access, and no immediately available way to recover it! (But see below for more complete scenarios.)

The only possibility is to get new hardware and go through a usually manual and lengthy process to regain access to each account and to configure the new 2FA hardware.

So Availability is lost, and it could take quite some time and effort to regain it.

The Ars Technica article mentioned above describes how some 2FA authenticator apps provide a solution to this problem: the idea is to install the app on multiple devices and securely copy the secret private key to each one of them. But now the secret private key, even if encrypted, is no longer “hardware protected”, since it is possible to copy and transfer it. So Security/Confidentiality goes down but Availability goes up.
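
Most authenticator apps implement the TOTP standard (RFC 6238), and a minimal sketch using only the Python standard library shows why such a copy is possible: the “secret” is just a few bytes, and any device holding them produces the same codes (the Base32 secret below is an invented example):

```python
# Minimal RFC 6238 TOTP sketch: the secret behind a typical authenticator app
# is just a few bytes; every device holding a copy generates the same codes.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // period           # RFC 6238 time step
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The same secret on two phones yields the same code at the same instant:
# Availability improves, but Confidentiality now rests on how well every
# copy of the secret is protected.
print(totp("JBSWY3DPEHPK3PXP"))  # invented example secret, not a real one
```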

This process is obviously impossible as such with U2F hardware keys. In this case an alternative is to register multiple U2F hardware keys with all accounts and services, so that if one key breaks one can always use a backup key to access the accounts and services. (For more info, see for example this support page by Google.)

CacheOut: another speculative execution attack on CPUs

It was 2 years ago that we learned about Spectre and Meltdown, the first speculative attacks on CPUs exploiting hardware “bugs”, in particular the speculative and out-of-order execution features of modern CPUs. In the last 2 years a long list of attacks has followed these initial two, and CacheOut is the latest.

CacheOut (here its own website and here an article about it) builds on and improves over previous attacks, bypassing countermeasures like the microcode updates provided by CPU makers and operating system patches. On Intel CPUs, the CacheOut attack allows an attacker to read data from other processes, including secret encryption keys, data from other Virtual Machines and the contents of Intel’s secure SGX enclave.

Beyond the practical consequences of this attack and the availability and effectiveness of software countermeasures, it is important to remember that the only final solution to this class of attacks is the development and adoption of new, redesigned CPUs. This will probably still take a few years, and in the meantime we should adopt countermeasures based on risk evaluation, so as to isolate critical data and processes on dedicated CPUs or entire computers.
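
As a sketch of such risk-based isolation, on Linux a sensitive process can be pinned to a dedicated set of cores (the core numbers below are hypothetical, and the sibling hyperthreads of those cores must also be reserved, or SMT disabled, for this to help against attacks like CacheOut):

```python
# Coarse isolation sketch (Linux only): restrict the current, sensitive
# process to dedicated CPU cores so that untrusted workloads scheduled on
# other cores do not share its core-local buffers and caches.
import os

DEDICATED_CORES = {2, 3}  # hypothetical: cores set aside for sensitive work

os.sched_setaffinity(0, DEDICATED_CORES)  # 0 = the calling process
print("now restricted to cores:", os.sched_getaffinity(0))
```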

Privacy and VPN Routers for Personal Security

Though I do not have one, nor have I tried one, privacy and VPN routers like InvizBox, Anonabox, NordVPN, TorGuard VPN and many others from well known brands (see here for example for a review) are becoming more common, easy to use also when travelling, and loaded with features.

They typically make it easy to create private or commercial VPNs, establish Tor circuits and apply privacy filters to internet traffic. They are probably not as tight as Tails, but I expect them to be user friendly.

Though I have never felt the need for a commercial VPN service, I would consider using a security and privacy internet router which I could carry with me and easily activate even when travelling.

Recent Results on Information and Security

I recently read two articles which made me think that we still do not understand well enough what “information” is. Both articles consider ways of managing information through “side channels” or “covert channels”. In other words, whatever we do produces much more information than we believe.

The first article is “Attack of the week: searchable encryption and the ever-expanding leakage function” by cryptographer Matthew Green, in which he explains the results of this scientific article by P. Grubbs et al. The scenario is an encrypted database, that is, a database where the column data in a table is encrypted so that whoever accesses the DB has no direct access to the data (this is not the case where the database files are encrypted on the filesystem). The encryption algorithm is such that a remote client who knows the encryption key can run some simple kinds of encrypted searches (queries) on the (encrypted) data, extracting the (encrypted) results. Data can be decrypted only on the remote client. Now an attacker (even a DB admin), under some mild assumptions, with some generic knowledge of the type of data in the DB and the ability to monitor which encrypted rows are returned by each query (whose parameters she cannot read), applying some advanced statistical mathematics from learning theory, is nonetheless able to reconstruct the contents of the table with good precision. A simple example of this is a table containing the two columns employee_name and salary, both with encrypted values. In practice this means that this type of encryption leaks much more information than we believed.
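
The Grubbs et al. attack relies on much more sophisticated learning-theory machinery, but the flavour of the problem can be seen in the simplest possible leakage. Here is a toy sketch (all data invented, and not the actual attack): if equal plaintexts produce equal ciphertexts, the attacker can rank ciphertexts by frequency and match them against auxiliary knowledge of the plaintext distribution.

```python
# Toy illustration (not the Grubbs et al. attack): a deterministically
# encrypted column leaks which rows are equal, so an attacker who knows the
# approximate distribution of plaintext values can match frequency ranks.
from collections import Counter

# What the attacker sees: opaque ciphertexts in the encrypted salary column.
encrypted_column = ["c1", "c2", "c1", "c1", "c3", "c2", "c1"]

# Auxiliary knowledge: rough salary frequencies at similar companies,
# most common first (invented numbers).
auxiliary_ranking = [30000, 45000, 90000]

# Rank ciphertexts by frequency and align the two rankings.
ciphertext_ranking = [c for c, _ in Counter(encrypted_column).most_common()]
guess = dict(zip(ciphertext_ranking, auxiliary_ranking))

print(guess)  # e.g. {'c1': 30000, 'c2': 45000, 'c3': 90000}
```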

The second article is “ExSpectre: Hiding Malware in Speculative Execution” by J. Wampler et al. and, as the title suggests, it is an extension of the Spectre CPU vulnerability. The Spectre and Meltdown attacks also have to do with information management, but in those cases the information is managed internally by the CPU and was supposed not to be accessible from outside it. In this particular article the idea is actually to hide information: the authors have devised a way of splitting a malware into two components, a “trigger” and a “payload”, such that both components appear benign to standard anti-virus and reverse engineering techniques. So the malware is hidden from view. When both components are executed on the same CPU, the trigger alters the internal state of the CPU’s branch predictor in such a way as to make the payload execute malicious code as a Spectre speculative execution. This does not alter the correct architectural execution of the payload program, but through Spectre extra speculative instructions are executed and these, for example, can implement a reverse shell giving an attacker external access to the system. Since the extra instructions are squashed by the CPU at the end of the speculative execution (they are never architecturally retired), it appears as if they were never executed, and so they seem untraceable. Currently this attack is mostly theoretical, difficult to implement and very slow. Still, it is based on managing information in covert channels, just as both Spectre and Meltdown are CPU vulnerabilities which also exploit cache side channels.

Hardware Enclaves and Security

Hardware enclaves, such as Intel Software Guard Extensions (SGX), are hardware security features of recent CPUs which allow the isolated execution of critical code. The typical threat model of hardware enclaves assumes the totally isolated execution of trusted code in the enclave, considering all the rest of the code and data, operating system included, untrusted. Software running in a hardware enclave has limited access to data outside the enclave, whereas everything else has no access at all to what is inside the enclave, hypervisor, operating system and anti-virus included. Hardware enclaves can manage with very high security applications such as password and secret-key managers, crypto-currency wallets, DRM etc.

But what could happen if a malware, for example a ransomware, is loaded in a hardware enclave?

First of all, malware hidden in a hardware enclave cannot be detected, since neither the hypervisor, nor the operating system, nor any kind of anti-virus can access it. The software to be loaded in a hardware enclave must be signed by a trusted entity, for example, for SGX, by Intel itself or by a trusted developer. This makes it harder to distribute hardware enclave malware, but not impossible. Finally, applications running inside a hardware enclave have very constrained access to outside resources, and it was believed that malware could use a hardware enclave (that is, part of it could run in a hardware enclave) but that it was not possible for a malware to run fully inside an enclave without any component outside it.

M. Schwarz, S. Weiser and D. Gruss have instead recently shown in this paper that, at least theoretically, it is possible to create a super-malware running entirely from within a hardware enclave. This super-malware would be undetectable and could act on the rest of the system like normal malware. At the moment countermeasures are not available but, as in the case of Spectre and Meltdown, they could require hardware modifications and/or have an impact on the speed of the CPUs.

From Multics to Meltdown and Spectre

Lately I have devoted some time to this year’s hardware vulnerabilities, mainly Meltdown and Spectre in their many variants.

I have not updated this blog, but I have published three articles entitled “L’Hardware e la sicurezza IT” in the online magazine ICTSecurity. In these articles I start from the 1960s, in particular from Multics, when hardware security architectures and features were first designed, and work up to Row Hammer, cache attacks, Meltdown and Spectre.

I now certainly have somewhat clearer ideas about what these vulnerabilities mean today, even if it is much less clear to me what they may imply for the future.

Meltdown and Spectre bugs should help improve IT Security

Yes, I want to be positive and look at a bright future. Everybody is now talking about the Meltdown and Spectre bugs (here the official site). I think that in the end these hardware bugs will help improve the security of our IT systems. But we should not underestimate the pain they could cause, even if it is too early to say this for certain, since patches and countermeasures could be found for all systems and CPUs or, on the contrary, unexpected exploits could appear.

The central issue is that IT, and IT Security in particular, depends crucially on the correctness of the behaviour of the hardware, first of all of the CPUs. If the foundation of the IT pillar is weak, sooner or later something will break. Let’s then hope that the Meltdown and Spectre bugs will help design more secure IT hardware and, in the long run, improve IT Security as a whole.