Recent Results on Information and Security

I recently read two articles which made me think that we still do not understand well enough what “information” is. Both articles consider ways of managing information through “side channels” or “covert channels”. In other words, whatever we do produces much more information than we believe.

The first article is “Attack of the week: searchable encryption and the ever-expanding leakage function” by cryptographer Matthew Green, in which he explains the results of this scientific article by P. Grubbs et al. The scenario is an encrypted database, that is a database where column data in a table is encrypted so that whoever accesses the DB has no direct access to the data (this is not the case where the database files are encrypted on the filesystem). The encryption algorithm is such that a remote client who knows the encryption key can run some simple kinds of encrypted searches (queries) on the (encrypted) data, extracting the (encrypted) results. Data can be decrypted only on the remote client. Now an attacker (even a DB admin) who, under some mild assumptions, has some generic knowledge of the type of data in the DB and is able to monitor which encrypted rows are returned by each query (whose parameters she cannot read), can apply some advanced statistical mathematics from learning theory and reconstruct with good precision the contents of the table. A simple example of this is a table containing the two columns employee_name and salary, both with encrypted values. In practice this means that this type of encryption leaks much more information than we believed.
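
To give a (grossly over-simplified) intuition of why patterns survive encryption, here is a toy sketch in C, and emphatically not the attack of the paper: it assumes a column protected with deterministic encryption, so that equal plaintexts produce equal ciphertexts, and an attacker who only sees the opaque ciphertext tokens plus a rough idea of the plaintext distribution (all names and values below are made up). Matching token frequencies against the known distribution is enough to guess the plaintexts; the real attack applies much more sophisticated learning-theory techniques to range-query access patterns, but the underlying lesson is the same.

    /* Toy illustration (not the attack from the paper): frequency analysis
       against a deterministically encrypted column. Ciphertexts are modelled
       as opaque 64-bit tokens; the attacker only sees which tokens repeat. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    #define N 12
    #define K 3

    /* What the attacker sees: one opaque token per row (made-up values). */
    static const uint64_t ciphertexts[N] = {
        0x9af3, 0x11c0, 0x9af3, 0x77ab, 0x9af3, 0x11c0,
        0x9af3, 0x9af3, 0x11c0, 0x9af3, 0x77ab, 0x9af3
    };

    /* Auxiliary knowledge: plausible salaries, most to least common. */
    static const int known_salaries[K] = { 30000, 50000, 90000 };

    struct bucket { uint64_t token; int count; };

    static int by_count_desc(const void *a, const void *b) {
        return ((const struct bucket *)b)->count - ((const struct bucket *)a)->count;
    }

    int main(void) {
        struct bucket buckets[N];
        int nbuckets = 0;

        /* Count how often each ciphertext token occurs. */
        for (int i = 0; i < N; i++) {
            int j;
            for (j = 0; j < nbuckets; j++)
                if (buckets[j].token == ciphertexts[i]) { buckets[j].count++; break; }
            if (j == nbuckets)
                buckets[nbuckets++] = (struct bucket){ ciphertexts[i], 1 };
        }

        /* Match the most frequent token to the most frequent known value, and so on. */
        qsort(buckets, nbuckets, sizeof buckets[0], by_count_desc);
        for (int j = 0; j < nbuckets && j < K; j++)
            printf("token %#llx seen %d times -> probably salary %d\n",
                   (unsigned long long)buckets[j].token, buckets[j].count, known_salaries[j]);
        return 0;
    }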

The second article is “ExSpectre: Hiding Malware in Speculative Execution” by J. Wampler et al. and, as the title suggests, is an extension of the Spectre CPU vulnerability. The Spectre and Meltdown attacks also have to do with information management, but in their case the information is managed internally in the CPU and was supposed not to be accessible from outside it. In this particular article the idea is actually to hide information: the authors have devised a way of splitting a malware into two components, a “trigger” and a “payload”, such that both components appear benign to standard anti-virus and reverse engineering techniques. So the malware is hidden from view. When both components are executed on the same CPU, the trigger alters the internal state of the CPU’s branch predictor in such a way that the payload executes malicious code as Spectre-style speculative execution. This does not alter the correct (architectural) execution of the payload program by the CPU, but through Spectre extra speculative instructions are executed, and these, for example, can implement a reverse shell to give an attacker external access to the system. Since these speculative instructions are never retired by the CPU (they are discarded when the misprediction is resolved), it appears as if they have never been executed and thus they seem to be untraceable. Currently this attack is mostly theoretical, difficult to implement and very slow. Still, it is based on managing information in covert channels, as both Spectre and Meltdown are CPU vulnerabilities which also exploit cache side channels.
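
For readers who have not looked at the details of Spectre, the sketch below shows the textbook Spectre variant 1 (“bounds check bypass”) gadget on which speculative-execution attacks of this family build; it is not the ExSpectre trigger/payload construction itself, and the array names are the conventional ones from the original Spectre paper. After the branch has been trained with in-bounds values, an out-of-bounds call still executes the body speculatively, leaving a secret-dependent footprint in the cache that a later timing measurement can read.

    /* Classic Spectre v1 gadget (textbook example, not the ExSpectre malware). */
    #include <stdint.h>
    #include <stddef.h>

    uint8_t array1[16];
    uint8_t array2[256 * 4096];   /* probe array: one page per possible byte value */
    size_t  array1_size = 16;

    void victim_function(size_t x) {
        if (x < array1_size) {
            /* If the branch predictor has been trained with in-bounds values
               of x, this body is also executed speculatively for an
               out-of-bounds x: the secret byte array1[x] then selects which
               cache line of array2 gets loaded, and a Flush+Reload probe can
               recover its value after the misprediction has been discarded. */
            volatile uint8_t tmp = array2[array1[x] * 4096];
            (void)tmp;
        }
    }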

Hardware Enclaves and Security

Hardware enclaves, such as Intel Software Guard Extensions (SGX), are hardware security features of recent CPUs which allow the isolated execution of critical code. The typical threat model of hardware enclaves assumes the totally isolated execution of trusted code in the enclave, considering all the rest of the code and data, operating system included, un-trusted. Software running in a hardware enclave has limited access to data outside the enclave, whereas everything else has no access at all to what is inside the enclave, hypervisor, operating system and anti-virus included. Hardware enclaves can manage with very high security applications such as password and secret-key managers, crypto-currency wallets, DRM etc.
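
As a concrete idea of what “loading software into an enclave” looks like, here is a minimal sketch of the untrusted (host) side of an SGX application built with the Intel SGX SDK. The enclave file name and the ECALL ecall_store_secret() are hypothetical placeholders: the trusted code lives in a separately built and signed enclave binary, and its interface is declared in an EDL file processed by the SDK’s sgx_edger8r tool, which generates the stub header used below.

    /* Untrusted (host) side of a minimal SGX application (sketch).
       "enclave.signed.so" and ecall_store_secret() are hypothetical names. */
    #include <stdio.h>
    #include "sgx_urts.h"
    #include "Enclave_u.h"   /* stubs generated by sgx_edger8r from the EDL file */

    int main(void) {
        sgx_enclave_id_t eid = 0;
        sgx_launch_token_t token = {0};
        int token_updated = 0;

        /* Load and initialize the signed enclave image (debug mode). */
        sgx_status_t ret = sgx_create_enclave("enclave.signed.so", 1,
                                              &token, &token_updated, &eid, NULL);
        if (ret != SGX_SUCCESS) {
            fprintf(stderr, "sgx_create_enclave failed: 0x%x\n", (unsigned)ret);
            return 1;
        }

        /* ECALL: control transfers into the enclave; the secret never leaves it. */
        ecall_store_secret(eid, "my-password");

        sgx_destroy_enclave(eid);
        return 0;
    }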

But what could happen if malware, for example a piece of ransomware, is loaded into a hardware enclave?

First of all, malware hidden in a hardware enclave cannot be detected, since neither the hypervisor, the operating system nor any kind of anti-virus can access it. On the other hand, the software to be loaded in a hardware enclave must be signed by a trusted entity, for example, in the case of SGX, by Intel itself or by a trusted developer. This makes it more difficult to distribute hardware-enclave malware, but not completely impossible. Finally, applications running inside a hardware enclave have very constrained access to outside resources, and it was believed that malware could use a hardware enclave (that is, part of it could run in a hardware enclave) but that it was not possible for malware to run fully inside an enclave without any component outside it.

M. Schwarz, S. Weiser and D. Gruss have instead recently shown in this paper that, at least theoretically, it is possible to create a super-malware that runs entirely from within a hardware enclave. This super-malware would be undetectable and could act as normal malware on the rest of the system. At the moment countermeasures are not available, but, similarly to the case of Spectre and Meltdown, they could require hardware modifications and/or have an impact on the speed of the CPUs.

From Multics to Meltdown and Spectre

Lately I have spent some time on this year’s hardware vulnerabilities, mainly Meltdown and Spectre in their many variants.

I have not updated this blog, but I have published three articles titled “L’Hardware e la sicurezza IT” in the online magazine ICTSecurity. In these articles I started from the ’60s, in particular from Multics, when the security architecture and functions of hardware were initially designed, to arrive at Row Hammer, cache attacks, Meltdown and Spectre.

Now my ideas about what these vulnerabilities mean today are certainly somewhat clearer, even if what they may lead to in the future is much less clear to me.

Meltdown and Spectre bugs should help improve IT Security

Yes, I want to be positive and look at a bright future. Everybody is now talking about the Meltdown and Spectre bugs (here is the official site). I think that in the end these Hardware bugs will help improve the security of our IT systems. But we should not underestimate the pain that they could cause, even if it is too early to say this for certain, since patches and countermeasures could be found for all systems and CPUs or, on the contrary, unexpected exploits could appear.

The central issue is that IT, and IT Security in particular, depends crucially on the correct behaviour of the Hardware, first of all of the CPUs. If the foundation of the IT pillar is weak, sooner or later something will break. Let’s then hope that the Meltdown and Spectre bugs will help design more secure IT Hardware and, in the long run, improve IT Security as a whole.

Rowhammer, SGX and IT Security

I am following with interest the developments of the Rowhammer class of attacks and defenses; here is one of the latest articles. (As far as I know, these are still more research subjects than real-life attacks.)

Already at the time of the Orange Book (or more correctly the “Trusted Computer System Evaluation Criteria – TCSEC”) in the ’80s, it was clear how important the hardware is in building the chain of trust on which IT Security relies.

Rowhammer attacks follow from a hardware security weakness, even if this weakness is also a hardware strength: the increase in density and decrease in size of DRAM cells, which makes it possible to build memory banks with lower energy consumption and higher capacity. Unfortunately this also allows the bit-flipping in nearby memory rows that can give rise to a total compromise of the IT system, that is a Rowhammer attack. It is true that there exist memory banks with Error Correction Codes (ECC) which make Rowhammer attacks quite hard, but these memory banks are more expensive, a little slower and available only on high-end server computers. One can look at it as a hardware feature which carried within it an unexpected security weakness.
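
To show how simple the core of the technique is, here is a hedged sketch of the classic “hammering” loop: two addresses mapping to different rows of the same DRAM bank are read over and over, with clflush ensuring that every read really reaches DRAM instead of being served by the cache. Finding such an address pair, and turning the resulting bit flips in neighbouring rows into a privilege escalation, is where the real complexity of an actual exploit lies.

    /* Core of a Rowhammer "hammering" loop (illustration only).
       addr1 and addr2 must map to different rows of the same DRAM bank for
       flips to occur in the rows in between; finding such a pair requires
       knowledge of the DRAM physical address mapping and is not shown here. */
    #include <emmintrin.h>   /* _mm_clflush */
    #include <stdint.h>

    void hammer(volatile uint8_t *addr1, volatile uint8_t *addr2, long iterations) {
        for (long i = 0; i < iterations; i++) {
            (void)*addr1;                         /* read -> DRAM row activation */
            (void)*addr2;
            _mm_clflush((const void *)addr1);     /* evict from the cache so the */
            _mm_clflush((const void *)addr2);     /* next read hits DRAM again   */
        }
    }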

As it turns out, it seems very hard to find software measures which can detect, block or prevent Rowhammer attacks. Many different software defences have been proposed, but as of today none is really able to completely stop all types of Rowhammer attacks. A hardware weakness, it seems, can only be countered in hardware.

To make the situation even more intriguing, the hardware-based Intel SGX security enclaves can be mixed into this scenario. Intel SGX is a hardware x86 instruction-set extension which makes it possible to securely and confidentially execute programs in an isolated environment (called a “security enclave”). Nothing can directly look into an SGX security enclave, not even the Operating System, to the point that data can be processed in it even on systems controlled by an adversary (though SGX security enclaves are not immune from side-channel attacks). Rowhammer attacks cannot be performed from the outside against programs running in an SGX security enclave. Vice versa, under some conditions an SGX security enclave can run, without being detected, Rowhammer software to attack the hardware and the programs running on it. Overall it seems that Intel SGX security enclaves can provide extremely interesting IT security features but at the same time can also be abused to defeat IT security itself.

All of this becomes more worrisome when thinking of Virtual Machines and Cloud Services.


Is it truly impossible to separate VMs running on the same HW?

More and more results keep appearing, like this latest one, on weaknesses and vulnerabilities of Virtual Machines running on usual (commodity) hardware. The most troublesome results are not due to software vulnerabilities, but rely only on the hardware architecture which supports them. If cryptographic private keys can be stolen and covert channels can be established, evading the current isolation mechanisms provided by hardware and virtualization software (see also my previous posts on Clouds), how much can we trust IaaS Cloud Services in particular?

Cloud Security and HW Security Features

Security is hard, as we know quite well by now, but instead of getting easier it seems that, as time goes by, it is getting harder.

Consider the Public Cloud: in a Public Cloud environment, the threat scenario is much more complex than on dedicated, on-premises HW. Assuming that in both cases the initial HW and SW configuration is secure, the threat scenario for services running on dedicated, on-premises HW consists of external attacks, either coming directly from the network or mediated by the system users, who could (unintentionally) download malware into the service. Instead, the threat scenario in a Public Cloud environment must also include attacks from other services running on the same HW (other virtual machines, tenants, containers etc.) and attacks from the infrastructure itself running the services (the hypervisors on the host machines).

Protecting virtual machines and cloud services from other machines and services running on the same HW, and from the hypervisor itself, is hard. New hardware features are needed to effectively separate the guest services from each other and from the host. But even hardware features are not easy to design and implement for these purposes.

For example, Intel has introduced the SGX hardware extensions to create enclaves that manage very sensitive data, like cryptographic keys, in HW. In this paper it has been shown, as initially feared by Rutkowska, that these HW extensions provide security features not only to users but also to attackers, who can exploit them to create practically invisible and undetectable malware. The article actually shows, in a particular scenario, how it is possible to recover from one enclave secret RSA keys used for digital signatures and sealed in another enclave. Since not even the hypervisor can see what is inside an enclave, the malware is practically undetectable.

IT security is a delicate balance between many factors, HW, SW, functionalities, human behaviour etc., and the more complex this ecosystem is, the easier it is to find loopholes in it and ways to abuse it.

Hardware Vulnerabilities (Again), Cloud and Mobile Security

Have a very Happy New Year!

… and to start 2017 on a great note, I am writing again about Hardware Vulnerabilities, with some comments on Cloud and Mobile Security.

The opportunity for this blog entry has been provided by the talk “What could possibly go wrong with <insert x86 instruction here>? Side effects include side-channel attacks and bypassing kernel ASLR” by Clémentine Maurice and Moritz Lipp at the 2016 Chaos Computer Club congress, which I suggest watching (it lasts 50 minutes and is not really technical despite its title).

A super-short summary of the talk is that it is possible to mount very effective side-channel (in particular timing-channel) attacks on practically any modern Operating System, which allow an attacker to exfiltrate data, open communication channels and spy on activities like keyboard input. All of this using only legitimate commands and OS facilities, but in some innovative ways.
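
To give an idea of the kind of primitive involved, here is a hedged sketch of the timing measurement at the heart of a Flush+Reload-style cache attack: flush a shared memory line, let the victim run, then time a reload of that same line; a fast reload means the victim touched it in the meantime. The threshold below is machine-dependent and purely illustrative.

    /* Flush+Reload probe (sketch): was the cache line containing addr touched
       by someone else between the previous flush and this reload?             */
    #include <stdint.h>
    #include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

    #define HIT_THRESHOLD 120   /* cycles; machine-dependent, illustrative value */

    int probe(const void *addr) {
        unsigned int aux;
        uint64_t start, end;

        _mm_mfence();
        start = __rdtscp(&aux);                    /* timestamp before the reload */
        (void)*(volatile const uint8_t *)addr;     /* timed reload of the line    */
        end = __rdtscp(&aux);
        _mm_mfence();

        _mm_clflush(addr);                         /* flush again for the next round */
        return (end - start) < HIT_THRESHOLD;      /* fast -> it was cached (a hit)  */
    }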

The reason these attacks are possible is that the hardware does not prevent them; actually some hardware features, added to improve performance, make these attacks easier or even possible at all (see also my previous post on Hardware Vulnerabilities about this). So from the Security point of view these Hardware features should be considered as Vulnerabilities.

What is it possible to do with these techniques? Considering Cloud, it is possible to monitor the activities of another Virtual Machine running on the same hardware, extract secret cryptographic keys (though this depends on how the algorithms and protocols are implemented), establish hidden communication channels etc.

Similarly for Mobile, it is possible to have a totally legitimate App monitor keyboard activity, or two Apps establish a hidden communication channel so that one reads some data and the other sends it to a remote destination, all without violating any security rule (each one actually having very limited privileges and a restricted setup).

Moreover, it seems easy to embed this kind of attack in legitimate applications, and current anti-virus products seem to lack the capabilities needed to intercept them. Indeed, the activities performed to implement these attacks look almost identical to the ones performed by any normal program, and it seems that only specific performance monitoring could discover them.


On Denial of Service attacks and Hardware vulnerabilities

Denial of Service attacks are growing and getting the attention of the news: some of the latest incidents are KrebsOnSecurity, OVH and Dyn. The economics behind these attacks favour the attackers: today it costs little to mount a devastating DDoS attack able to block even a sizable part of the Internet, thanks to all the botnets of insecure machines, from PCs to routers and IoT devices. Defence can be much more expensive than attack, and in some cases even more expensive than the ransom.

How did we get into this mess? This trend is not good at all: these attacks could threaten the Internet itself, even if this would not be in the interest of the attackers (not considering State-sponsored ones).

Fixing the current situation will be extremely expensive: many devices cannot be “fixed” but simply need to be replaced. But before doing that, we need to build “secure” devices and design networks and protocols that support them and are somehow interoperable with the current ones. How? And when?

At the same time, a new trend is emerging: security vulnerabilities in Hardware.

The Rowhammer bug and its recent implementations against Virtual Machines and Android phones (Drammer), or the hardware-based ASLR-bypass vulnerabilities, can open new scenarios. Hardware must provide the foundation of the security of all IT processing: data should be protected, accesses should be controlled etc. But we are discovering that the Hardware we have been relying upon for the development of IT in the last 20 years could have reached its limits. New security features are needed (see for example this) and vulnerabilities are being discovered that must be managed, and it will not always be possible to fix them in software.

On Hardware Backdoors

Since at least the ’70s, the time of Multics (see e.g. this old document on the vulnerability analysis of Multics security), the Orange Book, Military IT security etc., the role of hardware in IT security has been discussed, evaluated and implemented.

In recent years the discussion has arisen again, in particular about the possibility of hardware backdoors and malicious hardware. For example, since the publication of the Snowden documents there have been rumors about possible hardware backdoors in Intel, AMD and Cisco products.

A few days ago this paper was presented at the 2016 IEEE Symposium on Security and Privacy (see e.g. here for a summary); it describes how to implement a hardware backdoor, called Analog Malicious Hardware, which as of today seems practically impossible to detect. The researchers were able to add a tiny circuit, composed of a capacitor and a few transistors wrapped up in a single gate (out of the millions or billions in a modern chip), which acts as a hardware Trojan horse.

How difficult could it be to add a single, almost undetectable gate to the blueprints of a chip at the chip factory? And how can it be verified that similar gates are not present on a chip?

PS. Ten years ago I gave a couple of seminars in Italian about some aspects of the history of IT security, in which I looked into how hardware must support the security features of Operating Systems; if interested, some slides and a paper (in Italian) can be found here and here.