Cloud Security and HW Security Features

Security is hard, as we know quite well by now, and instead of getting easier it seems to be getting harder as time goes by.

Consider the Public Cloud: in a Public Cloud environment, the threat scenario is much more complex than in a dedicated, on-premises HW one. Assuming that in both cases the initial HW and SW configuration is secure, the threat scenario for services running on dedicated, on-premises HW consists of external attacks, either coming directly from the network or mediated by the system users, who could (unintentionally) download malware into the service. The threat scenario in a Public Cloud environment must instead also include attacks from other services running on the same HW (other virtual machines, tenants, containers, etc.) and attacks from the infrastructure itself running the services (the hypervisors on the host machines).

Protecting virtual machines and cloud services from other machines and services running on the same HW, and from the hypervisor itself, is hard. New hardware features are needed to effectively separate the guest services from each other and from the host. But even hardware features for these purposes are not easy to design and implement.

For example, Intel has introduced the SGX hardware extensions to create enclaves that manage very sensitive data, such as cryptographic keys, in HW. In this paper it has been shown, as initially feared by Rutkowska, that these HW extensions provide security features not only to users but also to attackers, who can exploit them to create practically invisible and undetectable malware. The article shows, in a particular scenario, how a malicious enclave can recover secret RSA keys used for digital signatures and sealed inside another enclave. Since not even the hypervisor can see what is inside an enclave, the malware is practically undetectable.

IT security is a delicate balance between many factors (HW, SW, functionality, human behaviour, etc.), and the more complex this ecosystem becomes, the easier it is to find loopholes in it and ways to abuse it.

On SHA1, Software Development and Security

It has been known for a few years that the SHA1 cryptographic hash algorithm is weak, and since 2012 NIST has recommended replacing it with SHA256 or other secure hash algorithms. Just a few days ago the first practical demonstration of this weakness was announced: the first computed SHA1 “collision”.

Since many years have passed since the discovery of the SHA1 weaknesses, and substitutes without known weaknesses are available, one would expect that almost no software would be using SHA1 nowadays.

Unfortunately the reality is quite the opposite: many applications depend on SHA1 in critical ways, to the point of crashing badly if they encounter a SHA1 collision. The first to fall to this has been the WebKit browser engine source code repository, due to the reliance of Apache SVN on SHA1 (see e.g. here). But Git also depends on SHA1, and one of its most famous adopters is the Linux kernel repository (indeed Linus Torvalds created Git to manage the Linux kernel source code).
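
To see how deeply SHA1 is wired into Git, here is a minimal sketch in Python that reproduces the identifier Git assigns to a file: the object ID is simply the SHA1 digest of a short header followed by the file content, so every object reference in a repository ultimately depends on SHA1 (the sample content below is of course arbitrary).

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    """Compute the SHA1-based object ID that Git assigns to a file ("blob").

    Git hashes the header "blob <size>" plus a NUL byte, followed by the raw
    content; the resulting hex digest is the object's name in the repository.
    """
    header = b"blob " + str(len(content)).encode() + b"\0"
    return hashlib.sha1(header + content).hexdigest()

# Should match the output of `git hash-object` for a file with this content.
print(git_blob_id(b"hello world\n"))
```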

For some applications, substituting SHA1 with another hash algorithm requires extensive rewriting of large parts of the source code. This takes time, expertise and money (probably not in this order) and does not add any new feature to the application! So unless it is really necessary, or unless no way is found to keep using SHA1 while avoiding the “collisions”, nobody really considers doing the substitution. (By the way, it seems that there are easy ways of adding checks against the above-mentioned “collisions”, so “sticking plasters” are currently being applied to applications adopting SHA1.)

But if we think about this issue from a “secure software development” point of view, there should not be any problem in substituting SHA1 with another hash algorithm. Indeed, by designing software in a modular way, and keeping in mind that cryptographic algorithms have a limited life expectancy, one should plan from the beginning of the software development cycle how to substitute one cryptographic algorithm with another of the same class but “safer” (whatever that means in each case).
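
As a minimal sketch of what such “crypto agility” could look like in practice (the function names and the digest-prefix convention below are purely illustrative), an application can avoid hard-wiring SHA1 by storing each digest together with the name of the algorithm that produced it, so that moving to SHA256 becomes a configuration change rather than a rewrite:

```python
import hashlib

# The algorithm currently considered "safe"; changing this one line
# migrates all newly computed digests.
DEFAULT_ALGORITHM = "sha256"

def make_digest(data: bytes, algorithm: str = DEFAULT_ALGORITHM) -> str:
    """Return a self-describing digest such as 'sha256:9f86d0...'."""
    h = hashlib.new(algorithm)
    h.update(data)
    return f"{algorithm}:{h.hexdigest()}"

def verify_digest(data: bytes, stored: str) -> bool:
    """Verify data against a stored digest, whatever algorithm produced it."""
    algorithm, _, expected = stored.partition(":")
    h = hashlib.new(algorithm)
    h.update(data)
    return h.hexdigest() == expected

# Old records tagged 'sha1:...' keep verifying while new ones use SHA256.
tag = make_digest(b"some file content")
assert verify_digest(b"some file content", tag)
```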

Obviously this is not yet the case for many applications, which means that we still have quite a bit to learn about how to design and write “secure” software.

Hardware Vulnerabilities (Again), Cloud and Mobile Security

Have a very Happy New Year!

… and to start 2017 on a great note, I write again about Hardware Vulnerabilities with comments on Cloud and Mobile Security.

The opportunity for this blog entry has been provided to me by the talk “What could possibly go wrong with <insert x86 instruction here>? Side effects include side-channel attacks and bypassing kernel ASLR” by Clémentine Maurice and Moritz Lipp at the 2016 Chaos Communication Congress, which I suggest watching (it lasts 50 minutes and it is not really technical despite its title).

A super-short summary of the talk is that it is possible to mount very effective side-channel (in particular timing) attacks on practically any modern operating system, attacks which allow one to exfiltrate data, open communication channels and spy on activities such as keyboard input. All of this using only legitimate commands and OS facilities, but in some innovative ways.
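
The talk covers hardware-level cache attacks, which cannot be meaningfully reproduced in a few lines of code; as a much simpler illustration of what a timing side channel is, the hypothetical sketch below shows how a byte-by-byte comparison that stops at the first mismatch leaks, through execution time alone, how many leading bytes of a guess are correct (measurements are noisy, so they are averaged over many rounds):

```python
import time

SECRET = b"s3cr3t-token-value"

def naive_equal(a: bytes, b: bytes) -> bool:
    """Compare byte by byte and stop at the first mismatch (leaks timing)."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def average_time(guess: bytes, rounds: int = 200_000) -> float:
    """Average the comparison time over many rounds to reduce noise."""
    start = time.perf_counter()
    for _ in range(rounds):
        naive_equal(SECRET, guess)
    return (time.perf_counter() - start) / rounds

# A guess sharing a longer correct prefix takes slightly longer to reject;
# that difference is the information an attacker measures in a timing attack.
print("wrong first byte :", average_time(b"x" * len(SECRET)))
print("correct prefix   :", average_time(b"s3cr3t-" + b"x" * (len(SECRET) - 7)))
```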

The reason these attacks are possible is that the hardware does not prevent them; indeed, some hardware features, added to improve performance, make these attacks easier or even possible (see also my previous post on Hardware Vulnerabilities about this). So from the security point of view these hardware features should be considered vulnerabilities.

What is it possible to do with these techniques? Considering the Cloud, it is possible to monitor the activities of another virtual machine running on the same hardware, extract secret cryptographic keys (though this depends on how the algorithms and protocols are implemented), establish hidden communication channels, etc.

Similarly for Mobile, it is possible for a totally legitimate App to monitor the keyboard activity, or for two Apps to establish a hidden communication channel so that one reads some data and the other sends it to a remote destination, all without violating any security rule (indeed each one having very limited privileges and a restricted setup).

Moreover, it seems easy to embed this kind of attack in legitimate applications, and current anti-virus products seem to lack the capabilities needed to intercept them. Indeed, the activities performed to implement these attacks look almost identical to those performed by any normal program, and it seems that only dedicated performance monitoring could discover them.


On Denial of Service attacks and Hardware vulnerabilities

Denial of Service attacks are growing and getting the attention of the news: some of the latest incidents are KrebsOnSecurity, OVH and Dyn. The economics behind these attacks favour the attackers: today it costs little to mount a devastating DDoS attack able to block even a sizable part of the Internet, thanks to all the botnets of unsafe machines, from PCs to routers and IoT devices. Defence can be much more expensive than the attack, and in some cases even than the ransom.

How did we get into this mess? This trend is not good at all: these attacks could threaten the Internet itself, even if this would not be in the interest of the attackers (State-sponsored ones aside).

Fixing the current situation will be extremely expensive; many devices cannot be “fixed” but simply need to be replaced. But before doing that, we need to build “secure” devices and design networks and protocols that support them and are somehow interoperable with the current ones. How? And when?

At the same time, a new trend is emerging: security vulnerabilities in Hardware.

The Rowhammer bug and its recent implementations in virtual machines and Android phones (DRAMMER), or the ASLR vulnerability, can open new scenarios. Hardware must provide the foundation of the security of all IT processing: data should be protected, accesses should be controlled, etc. But we are discovering that the hardware we have been relying upon for the development of IT in the last 20 years could have reached its limits. New security features are needed (see for example this), and vulnerabilities are being discovered that must be managed, and it will not always be possible to fix them in software.

On the Security of Modern Cryptography

The security of modern cryptography is based on number-theoretic problems so hard that they are practically impossible for attackers to solve. In practice this means that approaches and algorithms to crack the cryptographic algorithms are known, but with the best current technologies it would take too many years to complete an attack.

But what if a shortcut is found at least in some particular cases?

This is exactly what some researchers [article, arstechnica] have just found for the Diffie-Hellman (DH) algorithm with 1024-bit keys, an algorithm which is one of the pillars of the security of Web transactions, among many other uses. The researchers have shown that for DH with 1024-bit keys there exist some parameters (the prime modulus) that make it possible, with current technologies, to compute the secret encryption keys in a short time. In other words, some parameters adopted in DH-1024 can contain invisible trapdoors. The only ways to securely use DH today seem to be:

  • to know how the parameters have been generated and to be sure that they do not allow for any “trapdoor”
  • or to use DH with 2048-bit or larger keys.
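
To make the role of those parameters concrete, here is a toy Diffie-Hellman exchange in Python, with a deliberately tiny prime chosen purely for illustration: all of the security rests on the difficulty of computing discrete logarithms modulo the chosen prime p, which is exactly why a too-small or maliciously generated p undermines the whole exchange.

```python
import secrets

# Toy parameters: a tiny prime p and base g, for illustration only.
# A real deployment must use a vetted prime of at least 2048 bits,
# since all the security rests on the choice of p.
p = 0xFFFFFFFB  # a small 32-bit prime, trivially breakable, NOT secure
g = 5

# Each party picks a random private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1   # Alice's private key
b = secrets.randbelow(p - 2) + 1   # Bob's private key
A = pow(g, a, p)                   # Alice's public value
B = pow(g, b, p)                   # Bob's public value

# Both sides arrive at the same shared secret without transmitting it.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
print(hex(shared_alice))
```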

What does this teach us about the security that cryptography provides to everyday IT?

How should we implement and manage cryptography within IT security?

Is cryptography joining the “zero days => vulnerabilities => patch management” life-cycle which has become one of the landmarks of current IT security?

Yahoo Breach and GDPR

The Yahoo breach (see here for example) is almost yesterday's news (today we are talking about DDoS: in 8 days the record went from 363 Gbps to 620 Gbps, and finally to almost 1 Tbps, scary!), but I am now trying to picture such an event in view of the forthcoming European GDPR. My ideas are not too clear about what the consequences of the new Regulation (not of the data breach) could be. I expect that in the next year, before the Regulation comes into force, we will get a better understanding of its consequences.

The EU’s Network and Information Security (NIS) Directive

A few days ago the European Parliament adopted the “Network and Information Security (NIS)” Directive (PE-CONS 26/16 Lex 1683). Together with the recently approved “General Data Protection Regulation”, it could provide the EU marketplace with strong incentives to dramatically enlarge and improve the approach to IT and/or Cyber Security.

For both regulations the timeframe is probably long, at least 2 years and most probably 4, so we should understand the effects of these new regulations by 2020. Still, the entire ecosystem of IT and/or Cyber Security can only benefit from this interest “from the top”.

New Developments in Cryptography

Wired reports in this article on a recent advance in deployed cryptography by Google.

Last summer the NSA published an advisory about the need to develop and implement new crypto algorithms resistant to quantum computers. Indeed, if and when quantum computers arrive, they will be able to easily crack some of the most fundamental crypto algorithms in use, such as RSA and Diffie-Hellman. The development of quantum computers is slow, but it continues, and it is reasonable to expect that sooner or later, some say in 20 years, they will become reality. The development of new crypto algorithms is also slow, so the quest for crypto algorithms resistant to quantum computers, also called post-quantum crypto, has already been going on for a few years.

Very recently Google has announced the first real test case of one of these new post-quantum algorithms. Google will deploy to some Chrome browsers an implementation of the Ring-LWE post-quantum algorithm. This algorithm will be used by the chosen test users to connect to some Google services. Ring-LWE will be used together with the crypto algorithms currently adopted by the browser. Composing the current algorithms with Ring-LWE guarantees a combined level of security which is at least that of the strongest algorithm used in the combination. It should be noted that Ring-LWE is a much more recent crypto algorithm than the standard ones, and its security has not yet been established with a comparable level of confidence.
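
As a minimal sketch of the idea behind such a composition (this is not Google's actual construction, just an illustration of the principle, with hypothetical inputs): the shared secrets produced by the classical and the post-quantum key exchanges are both fed into a key derivation function, so an attacker has to break both exchanges to recover the session key.

```python
import hashlib
import hmac

def combine_shared_secrets(classical: bytes, post_quantum: bytes,
                           transcript: bytes) -> bytes:
    """Derive a session key from two independently negotiated secrets.

    Breaking only one of the two key exchanges is not enough: the derived
    key depends on both secrets (and on the handshake transcript, to bind
    the key to this particular session).
    """
    ikm = classical + post_quantum
    # Simple HKDF-like extract-and-expand using HMAC-SHA256.
    prk = hmac.new(transcript, ikm, hashlib.sha256).digest()
    return hmac.new(prk, b"session key" + b"\x01", hashlib.sha256).digest()

# Hypothetical secrets from an ECDH exchange and a Ring-LWE exchange.
key = combine_shared_secrets(b"\x11" * 32, b"\x22" * 32, b"handshake-transcript")
print(key.hex())
```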

Even if the level of security does not decrease, and hopefully only increases, it remains to be seen how this will work in practice, in particular in terms of performance.

For modern cryptography, this two-year Google project could become a cornerstone for the development and deployment of post-quantum algorithms.

How Secure are the Products of the IT Security Industry?

In the last months quite a long list of critical vulnerabilities in security products has been made public, for example in products by FireEye, Kaspersky Lab, McAfee, Sophos, Symantec, Trend Micro, etc. Wired has just published this article with further information and some comments. These incidents make me wonder whether writing secure code is just too difficult for anyone, or whether there is something fundamentally wrong in how the IT industry in general, and the IT Security industry in particular, is set up.

Implementing Cryptography right is hard

The security researcher Gal Beniamini has just published here the results of his investigation into the security of Android’s Full Disk Encryption, finding a way to get around it on smartphones and tablets based on the Qualcomm Snapdragon chipset.

The cryptography itself is fine, but some seemingly minor implementation details give resourceful attackers (such as state or nation agencies, or well-funded organised crime groups) the possibility of extracting the secret keys which should be protected in hardware. Knowledge of these keys would allow decrypting the data in the file system, the very issue which was at the basis of the famous Apple vs. FBI case a few months ago.

Software patches have been released by Google and Qualcomm but, as usual with smartphones and tablets, it is not clear how many affected devices have received the update or will ever receive it.

In a few words, the problem lies in the interface between Qualcomm’s hardware module, called the KeyMaster module, which generates, manages and protects the secret keys, and the Android operating system, which needs to access the keys indirectly, in this case to encrypt and decrypt the file system. Some KeyMaster functions used by Android can be abused into revealing the secret keys.
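
As a simplified, hypothetical sketch of why this interface matters (the code below does not reproduce Android's actual key derivation, it only illustrates the principle): if every password guess has to be mixed with a key that never leaves the hardware module, brute force is throttled by the device itself; once that hardware-held key can be extracted or misused through the exposed functions, the same derivation can be replayed off the device at full speed.

```python
import hashlib
import hmac

def derive_disk_key(password: bytes, salt: bytes, hw_bound_key: bytes) -> bytes:
    """Illustrative only: derive a disk encryption key from a user password.

    The hw_bound_key stands for a secret that should exist only inside the
    hardware keystore (KeyMaster); mixing it into the derivation is what
    forces an attacker to run every guess on the device itself.
    """
    stretched = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
    return hmac.new(hw_bound_key, stretched, hashlib.sha256).digest()

# If hw_bound_key leaks (the issue described above), an attacker can compute
# this same function on commodity hardware and brute-force the password offline.
key = derive_disk_key(b"1234", b"\x00" * 16, b"\x42" * 32)
print(key.hex())
```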

This is another case which proves how difficult it is to implement cryptography right.