The security of modern cryptography rests on number-theoretic problems so hard that they are practically impossible for attackers to solve. In practice this means that approaches and algorithms for breaking the cryptographic schemes are known, but with the best current technologies an attack would take too many years to complete.
But what if a shortcut is found at least in some particular cases?
This is exactly what some researchers [article, arstechnica] have just found for the Diffie-Hellman (DH) algorithm with 1024-bit keys, an algorithm which is one of the pillars of the security of Web transactions, among many other uses. The researchers have shown that for DH with 1024-bit keys there exist parameters (the prime modulus) that allow, with current technologies, the secret encryption keys to be computed in a short time. In other words, some parameters adopted in DH-1024 can contain invisible trapdoors. The only ways to use DH securely today seem to be:
- to know how the parameters have been generated and to be sure that they do not allow for any “trapdoor”
- or to use DH with 2048-bit or larger keys.
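To make the role of the parameters concrete, here is a toy sketch of the DH exchange itself (illustrative only: the prime below is a small placeholder, not a real DH modulus). The point is that both parties compute the same shared secret from public values, and the hardness of recovering it rests entirely on the choice of the prime modulus p, which is exactly what a trapdoored parameter undermines.

```python
# Toy Diffie-Hellman sketch with insecure, placeholder-sized numbers.
# Real deployments need a vetted prime of at least 2048 bits.
import secrets

p = 0xFFFFFFFB  # a small prime (2^32 - 5), placeholder for the real modulus
g = 5           # generator

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)  # Alice sends A over the wire
B = pow(g, b, p)  # Bob sends B over the wire

# Each side combines its own secret with the other's public value:
# both arrive at the same shared secret g^(a*b) mod p.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

An eavesdropper sees only p, g, A and B; recovering the shared secret requires solving the discrete logarithm, which is exactly the computation the trapdoored primes make easy for whoever crafted them.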
What does this teach us about the security that cryptography provides to everyday IT?
How should we implement and manage cryptography within IT security?
Is cryptography joining the “zero days => vulnerabilities => patch management” life-cycle which has become one of the hallmarks of current IT security?
The Yahoo breach (see here for example) is almost yesterday's news (today we are talking about DDoS: in 8 days the record went from 363 Gbps to 620 Gbps, and finally to almost 1 Tbps, scary!), but I am now trying to picture such an event in view of the forthcoming European GDPR. My ideas are not too clear about what the consequences of the new Regulation (not of the data breach) could be. I expect that in the coming year, before the Regulation goes into effect, we will get a better understanding of its consequences.
A few days ago the European Parliament adopted the “Network and Information Security (NIS)” Directive (PE-CONS 26/16 Lex 1683). Together with the recently approved “General Data Protection Regulation”, it could provide the EU marketplace with strong incentives to dramatically enlarge and improve the approach to IT and/or Cyber Security.
For both regulations the timeframe is probably long, at least 2 years and most probably 4, so we should understand the effects of these new regulations by around 2020. Still, the entire ecosystem of IT and/or Cyber Security can only benefit from this interest “from the top”.
Wired reports in this article on a recent advance in deployed cryptography by Google.
Last summer the NSA published an advisory about the need to develop and implement new crypto algorithms resistant to quantum computers. Indeed, if and when quantum computers arrive, they will be able to easily crack some of the most fundamental crypto algorithms in use, like RSA and Diffie-Hellman. The development of quantum computers is slow, but it continues, and it is reasonable to expect that sooner or later, some say in 20 years, they will become reality. The development of new crypto algorithms is also slow, so the quest for crypto algorithms resistant to quantum computers, also called post-quantum crypto, has already been going on for a few years.
Very recently Google announced the first real test case of one of these new post-quantum algorithms. Google will deploy to some Chrome browsers an implementation of the Ring-LWE post-quantum algorithm, which the chosen test users will use to connect to some Google services. Ring-LWE will be used together with the crypto algorithms currently adopted by the browser. Combining the current algorithms with Ring-LWE guarantees a combined level of security: the security of the combination is at least that of the strongest algorithm in it. It should be noted that Ring-LWE is a much more recent crypto algorithm than the standard ones, and its security has not yet been established to a comparable level of confidence.
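The idea behind the combination can be sketched as follows (a hedged simplification, not Google's actual Chrome/BoringSSL code): each side runs both a classical key exchange and a post-quantum one, then hashes the two shared secrets together, so an attacker must break both to recover the session key.

```python
# Hybrid key-exchange combiner: a minimal sketch, assuming two independent
# shared secrets have already been negotiated (one classical, one Ring-LWE).
import hashlib

def combine_secrets(classical_secret: bytes, pq_secret: bytes) -> bytes:
    # Hash both secrets into one session key. Breaking only one exchange
    # still leaves the attacker missing half of the hash input.
    # (A real protocol would use a standardized KDF such as HKDF.)
    return hashlib.sha256(classical_secret + pq_secret).digest()

# Placeholder byte strings stand in for the two negotiated secrets.
session_key = combine_secrets(b"classical-shared-secret", b"ring-lwe-shared-secret")
```

This construction is why the experiment is low-risk for users: even if Ring-LWE turns out to be weaker than hoped, the session is still protected by the classical algorithm.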
Provided the level of security does not decrease, and hopefully increases, it remains to be seen how this will work in practice, in particular with regard to performance.
For modern cryptography this two-year Google project could become a cornerstone for the development and deployment of post-quantum algorithms.
In the last months quite a long list of critical vulnerabilities in security products has been made public, for example in products by FireEye, Kaspersky Lab, McAfee, Sophos, Symantec, Trend Micro etc. Wired just published this article with further information and some comments. These incidents make me wonder whether writing secure code is just too difficult for anyone, or whether there is something fundamentally wrong in how the IT industry in general, and the IT Security industry in particular, is set up.
The security researcher Gal Beniamini has just published here the results of his investigation of the security of Android’s Full Disk Encryption, and found a way to get around it on smartphones and tablets based on the Qualcomm Snapdragon chipset.
The cryptography is fine, but some seemingly minor implementation details give resourceful attackers (like state/nation agencies or well-funded organized crime groups) the possibility of extracting the secret keys, which should be protected in hardware. Knowledge of these keys would allow the data in the file system to be decrypted, the very issue at the basis of the famous Apple vs. FBI case a few months ago.
Software patches have been released by Google and Qualcomm but, as usual with smartphones and tablets, it is not clear how many affected devices have received the update or will ever receive it.
In a few words, the problem lies in the interface between Qualcomm’s hardware module, called the KeyMaster module, which generates, manages and protects the secret keys, and the Android Operating System, which needs to access the keys indirectly, in this case to encrypt and decrypt the file system. Some KeyMaster functions used by Android can be abused into revealing the secret keys.
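The principle at stake can be sketched like this (a deliberately simplified model, not the actual Qualcomm/Android code, which uses scrypt and a KeyMaster signature): the disk key is derived from the user's PIN or password mixed with a device-bound key that is supposed to never leave the hardware. Binding to hardware is what forces brute-force attempts to run slowly on the device itself; if the device-bound key leaks, the short PIN can be brute-forced off-device at full speed.

```python
# Simplified model of hardware-bound disk-key derivation. The names
# hw_bound_key and derive_disk_key are illustrative, not real API names.
import hashlib

def derive_disk_key(password: bytes, hw_bound_key: bytes, salt: bytes) -> bytes:
    # Mix the user secret with the (supposedly unextractable) hardware key.
    return hashlib.pbkdf2_hmac("sha256", password + hw_bound_key, salt, 100_000)

# On the device, the hardware key never leaves the KeyMaster module...
disk_key = derive_disk_key(b"1234", b"device-bound-secret", b"per-device-salt")

# ...but if an attacker extracts hw_bound_key by abusing KeyMaster functions,
# every candidate PIN can be tried on powerful external hardware.
```

The attack does not break the KDF itself; it removes the hardware binding, which is the only thing standing between a 4-digit PIN and a trivial brute-force search.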
This is another case which shows how difficult it is to implement cryptography right.
Since at least the ’70s, the time of Multics (see e.g. this old document on the vulnerability analysis of Multics security), the Orange Book, military IT security etc., the role of hardware in IT security has been discussed, evaluated and implemented.
In recent years the discussion has arisen again, in particular about the possibility of hardware backdoors and malicious hardware. For example, since the publication of the Snowden documents there have been rumors about possible hardware backdoors in Intel, AMD and Cisco products.
A few days ago at the 2016 IEEE Symposium on Security and Privacy, this paper was presented (see e.g. here for a summary) describing how to implement a hardware backdoor, called Analog Malicious Hardware, which as of today seems practically impossible to detect. The researchers were able to add a tiny circuit, composed of a capacitor and a few transistors wrapped up in a single gate, out of the millions or billions in a modern chip, which acts as the hardware Trojan horse.
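The trigger mechanism can be modelled in a few lines (a toy software simulation of the idea, with made-up constants, not the paper's actual circuit): the capacitor siphons a tiny charge each time a rarely-used wire toggles, and the charge leaks away between toggles, so only a deliberate, rapid toggle sequence pushes it past the threshold that fires the payload.

```python
# Toy model of a charge-accumulating analog trigger.
LEAK_PER_CYCLE = 0.1      # charge bleeding off every cycle (illustrative value)
CHARGE_PER_TOGGLE = 1.0   # charge gained when the victim wire toggles
THRESHOLD = 8.0           # charge level at which the trojan payload fires

def trojan_fires(toggle_pattern):
    """Simulate the capacitor over a sequence of cycles.

    toggle_pattern is an iterable of booleans: True where the
    attacker-controlled wire toggles on that cycle.
    """
    charge = 0.0
    for toggled in toggle_pattern:
        if toggled:
            charge += CHARGE_PER_TOGGLE
        charge = max(0.0, charge - LEAK_PER_CYCLE)
        if charge >= THRESHOLD:
            return True  # payload fires (e.g. flips a privilege bit)
    return False

# Normal workloads toggle the wire only occasionally: the charge leaks away.
assert trojan_fires([True] + [False] * 100) is False
# The attacker's unlikely trigger sequence toggles it many times in a row.
assert trojan_fires([True] * 10) is True
```

This is why the backdoor is so hard to find: functional testing with normal inputs never charges the capacitor, and visually the extra gate is indistinguishable from the millions of legitimate ones around it.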
How difficult could it be to add a single, almost undetectable gate to the blueprints of a chip at the chip factory? How can it be verified that similar gates are not present on a chip?
PS. 10 years ago I gave a couple of seminars in Italian about some aspects of the history of IT security, in which I looked into how hardware must support the security features of Operating Systems; if interested, some slides and a paper (in Italian) can be found here and here.
I just published a short article, which can be downloaded here, about IT Security in the advent of Agile and DevOps development processes.
I tried to give a high-level overview of the new opportunities, and of the new and returning risks, that Agile and DevOps bring to IT security management and governance. They require IT security practitioners to find new continuous and adaptive ways of providing the business with secure IT systems.
From the APWG press release: “The Anti-Phishing Working Group (APWG) observed more phishing attacks in the first quarter of 2016 than at any other time in history” (here is the full report).
This is hardly surprising, but it puts numbers to the latest news about online frauds, like the “CEO Fraud”, the “Business Email Compromise” (e.g. see this FBI announcement) etc.
It has just become public that a custom-built Linux kernel for embedded devices was shipped and installed in production with a root debug backdoor open to anyone; see here for the announcement and, for example, here for some more details.
Besides the gravity of this particular incident and the difficulty of remediating it (I expect that many devices shipped with this kernel will never be updated) a couple of considerations come to my mind:
- first of all, the need for IT Security Awareness and Education for everybody working in IT: anybody can make a mistake or even a blunder, but there should be safety nets proportional to the risks, and IT professionals should always be aware of the “security” consequences of what they do;
- the process of “bringing into production” IT products (aka Change Management) should be improved: as of today, most of the time the really important test of an IT product is the final User Acceptance Test, which means that all that matters is that the features requested by the final users work as expected. But this is not enough, and it is not like this in many other industries: think for example of televisions, refrigerators, cars etc., which all need to pass safety tests and be labelled accordingly, otherwise they cannot be sold on the market. Why is it not like this for IT products too? As of today it is difficult to imagine security standards, tests and labels common to all IT products, but it should be possible to agree on and adopt some common IT security baseline.