Another Example of how Implementing Cryptography is Tricky (and a Score 10 CVE)

The description of Zerologon, CVE-2020-1472, has recently been published (see here for a summary and here for the technical paper); do not worry, the bug was already patched by Microsoft in August (see here).

The bug allows anyone who can connect via TCP/RPC/Netlogon to an unpatched Active Directory domain controller to become a domain administrator; nothing else is needed. The cause of this bug is a minor glitch in the implementation of the cryptographic algorithm AES-CFB8: the Initialisation Vector is kept fixed at zero instead of being unique and randomly generated (more details are provided in the technical paper mentioned above).
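To see why a fixed all-zero IV is fatal here, consider the following sketch (a toy reconstruction of the property exploited by Zerologon, written with the Python cryptography package; it is not the actual exploit code): with the IV and the plaintext both all zeros, AES-CFB8 produces an all-zero ciphertext for roughly one random key out of 256, which lets an attacker pass the Netlogon authentication by simply retrying with a spoofed all-zero credential.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# If the first byte of AES(key, 0^16) is zero, the CFB8 shift register stays
# all zeros and every ciphertext byte of an all-zero plaintext is zero too.
ZERO_IV = bytes(16)
PLAINTEXT = bytes(8)  # the all-zero "client credential" an attacker sends

hits, trials = 0, 2000
for _ in range(trials):
    key = os.urandom(16)  # stands in for the session key the attacker does not know
    enc = Cipher(algorithms.AES(key), modes.CFB8(ZERO_IV)).encryptor()
    if enc.update(PLAINTEXT) + enc.finalize() == PLAINTEXT:
        hits += 1

print(f"all-zero ciphertext for {hits}/{trials} random keys (~1/256 expected)")
```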

Phishing in the Clouds

Again on Phishing, this time with a new twist.

We all know and by now use complex SaaS Cloud services, like Microsoft’s Office 365, Google’s G Suite, Amazon’s services and so on. They are all very modular, meaning that there are multiple data storage services and multiple application services to choose from and use. Often a user must authorise a Cloud App to access her/his own data, and the App can also be by an external provider (a “partner” of the service). The authorisation is usually implemented with OAuth which, in a few words, is a secure delegated-access protocol based on the exchange of authorisation tokens.
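To make the consent step concrete, here is a rough sketch of the kind of authorisation request behind an “authorise this App” button (the endpoint and parameter names follow the generic OAuth 2.0 authorization-code pattern and are illustrative, not any specific provider’s):

```python
from urllib.parse import urlencode

# Hypothetical OAuth 2.0 authorisation request: whoever clicks "Allow" on the
# resulting consent page grants the App a token with these scopes, and no
# password is asked if a session with the provider already exists.
params = {
    "client_id": "some-registered-app-id",
    "redirect_uri": "https://app.example.com/callback",
    "response_type": "code",
    "scope": "mail.read files.read.all offline_access",
    "state": "opaque-anti-csrf-value",
}
print("https://login.example.com/oauth2/authorize?" + urlencode(params))
```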

So what is the scam? Simple: you receive an email which looks like it comes from [name your Cloud provider here] and asks you to authorise an App (which looks authentic) to access your data. You do not need to enter any password, since you are already logged in to your Cloud service platform: just click on the button, and that’s it!

You have given a fraudster access to all your Cloud data and services, and the fraudster can read your data and act as you!

For more details read for example this article by ArsTechnica.

Security and Cryptography do not always go Hand in Hand with Availability

I have been reading this article by Ars Technica on smartphones’ 2FA Authenticator Apps and put it together with some thoughts about Hardware Security Keys (implementing the U2F [CTAP1] standard by the FIDO Alliance). In both scenarios, the security of the Two-Factor Authentication (2FA) is based on the confidentiality of a cryptographic secret key, held securely in the smartphone in the first case and in the USB hardware key in the second. In other words, the key cannot be read, copied or transferred to any other device, as is also the case for any smart card (or integrated circuit card, ICC) and any Hardware Security Module (HSM).
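For the Authenticator App case, the protected secret is typically the symmetric shared key of the TOTP scheme (RFC 6238) rather than a true private key; here is a minimal sketch of how the 6-digit codes are derived from it (standard-library Python, demo secret only):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238: HMAC the current 30-second counter with the shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # a well-known demo secret, not a real one
```

Whoever holds (or can copy) that secret can generate valid codes forever, which is exactly why keeping it locked inside a single device trades Availability for Security.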

Good for Security, but what about Availability?

Consider this very simple (and secure) scenario: my smartphone with a 2FA Authenticator App (or alternatively my U2F USB key) utterly breaks down, and I have no way, at least given the usual economic and time constraints, to recover it.

Now what about access to all those services which I set up in a very secure way with 2FA?

There is no access, and no immediately available way to recover it! (But see below for more complete scenarios.)

The only possibility is to get new hardware and go through a usually manual and lengthy process to regain access to each account and to configure the new 2FA hardware.

So Availability is lost, and it could take quite some time and effort to regain it.

The article by Ars Technica mentioned above describes how some 2FA Authenticator Apps provide a solution to this problem: the idea is to install the App on multiple devices and securely copy the secret key to each one of them. But now the secret key, even if itself encrypted, is no longer “hardware protected”, since it is possible to copy and transfer it. So Security/Confidentiality goes down but Availability goes up.

This process is obviously impossible as such with U2F hardware keys. In this case an alternative is to register multiple U2F hardware keys with all accounts and services, so that if one key breaks one can always use a backup key to access them. (For more info, see for example this support page by Google.)

Cryptography for a COVID-19 Contact Tracing App by Apple and Google

Apple and Google (in alphabetical order) have released a draft of a cryptographic protocol named Contact Tracing (the specification is here) for a new privacy-preserving Bluetooth protocol to support COVID-19 contact tracing. As far as I understand (please correct me if I have misunderstood something), it should work as follows:

  • Bluetooth LE on the devices is extended with this new protocol
  • A service provider distributes an App which makes use of the protocol and communicates with a server managed by the service provider or a third party
  • Users install the App on their devices and keep Bluetooth on
  • When two devices with the App installed are nearby, they exchange some locally generated cryptographic key material called Rolling Proximity Identifiers: these identifiers are privacy preserving, that is, from an identifier alone it is not possible to identify the device which originated it; all Rolling Proximity Identifiers are stored only locally on the devices themselves (both originator and receiver)
  • When a user tests positive for COVID-19, she or he enters this information in the App, which then generates a set of cryptographic key material called Diagnosis Keys corresponding to the days in which the user could already have been infected; the App then sends the Diagnosis Keys to the server, which distributes them to all other devices on which the App is running
  • When an App receives some Diagnosis Keys from the server, it is able to compute a set of Rolling Proximity Identifiers and to check whether at least one is present in the local storage; if there is a match, the information derived is that on a certain day, within a 10-minute time interval, the user of the App was in proximity to a person who later tested positive for COVID-19 (a toy sketch of these key derivations follows this list).
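
As far as I understand the draft, the key material is arranged so that publishing one Diagnosis Key reveals one device’s identifiers for one day and nothing more. The sketch below is a toy reconstruction of that structure (standard-library Python; the labels and derivations are simplified stand-ins, not the actual constructions in the specification):

```python
import hashlib, hmac, os, time

def daily_tracing_key(tracing_key: bytes, day: int) -> bytes:
    # One key per day, derived from the device's long-term secret; this is
    # what would be published as a Diagnosis Key after a positive test.
    return hmac.new(tracing_key, b"CT-DTK" + day.to_bytes(4, "little"),
                    hashlib.sha256).digest()[:16]

def rolling_proximity_id(dtk: bytes, interval: int) -> bytes:
    # A fresh 16-byte identifier per ~10-minute interval, broadcast over BLE;
    # unlinkable to the device without the daily key.
    return hmac.new(dtk, b"CT-RPI" + interval.to_bytes(4, "little"),
                    hashlib.sha256).digest()[:16]

tracing_key = os.urandom(32)
day = int(time.time()) // 86400
dtk = daily_tracing_key(tracing_key, day)

# Matching: from a published Diagnosis Key, any device can recompute all 144
# identifiers of that day and compare them with its locally stored ones.
candidates = {rolling_proximity_id(dtk, i) for i in range(144)}
print(len(candidates), "identifiers derivable for day", day)
```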

Obviously a Privacy pre-requisite to all this is that neither the server nor the App manages or stores any other information or metadata about the users and the devices on which the App runs.

Another Nail in the Coffin of SHA1

A few days ago a new attack was made public which makes it easier to forge hashes (or “message digests”) computed with SHA1 (see for example this article for more details).

This new collision attack makes it faster and less expensive to create two documents with the same digital signature using SHA1 or, given a digitally signed document whose digital signature uses SHA1 as the hash algorithm, to create a different second document which has the same digital signature.

It has been known for a few years that SHA1 is broken and that it should not be used for digital signatures or, in general, for security purposes (NIST actually suggested to stop using SHA1 already in 2012). But as usual, in the real world there are still many, too many, legacy applications which today continue to use SHA1, one example being legacy PGP keys and old PGP digital signatures.

This new result should be at least a useful reminder to all of us to check whether we are still using SHA1 in some applications and, if so, to finally update them.
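As a starting point for such a check, here is a small sketch (using the Python cryptography package; the file name is just a placeholder) that inspects which hash algorithm was used to sign an X.509 certificate:

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes

# Load a PEM certificate and warn if its signature still relies on SHA1.
with open("server.pem", "rb") as f:  # placeholder path
    cert = x509.load_pem_x509_certificate(f.read())

algo = cert.signature_hash_algorithm
if isinstance(algo, hashes.SHA1):
    print("certificate is signed with SHA1: time to replace it")
else:
    print(f"signature hash: {algo.name}")
```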

Another Nail in the Coffin of SHA1

This recent paper “From Collisions to Chosen-Prefix Collisions – Application to Full SHA-1” by G. Leurent and T. Peyrin puts another nail in the coffin of SHA1. The authors present a chosen-prefix collision attack on SHA1 which allows client impersonation in TLS 1.2 and peer impersonation in IKEv2, with an expected cost between 1.2 and 7 million US$. The authors expect that it will soon be possible to bring the cost down to 100,000 US$ per collision.

As for CA certificates, the attack allows, at the same cost, the creation of a rogue CA and fake certificates whose signatures use the SHA1 hash, but only if the true CA does not randomise the serial number field in its certificates.
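Randomising the serial number works as a defence because the serial sits inside the signed bytes before any attacker-controlled content, so a chosen prefix cannot be fixed in advance. As an illustration, the Python cryptography package provides a random serial generator for its certificate builder (a minimal self-signed sketch with example names):

```python
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.org")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                              # self-signed, for brevity
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())     # the crucial line
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=90))
    .sign(key, hashes.SHA256())                     # and of course not SHA1
)
print(hex(cert.serial_number))
```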

It has been known for almost 15 years that SHA1 is not secure: NIST deprecated it in 2011, and from 2013 it should no longer have been used, substituted by SHA2 or SHA3. By 2017 all browsers should have removed support for SHA1, but the problem is always with legacy applications that still use it: how many of them are still out there?

On Firefox “Send”

Mozilla has just released a new service, Firefox Send, to share files with a higher level of security. Firefox Send is quite easy to use: just access the web page and upload a file (up to 1GB; one needs to register to upload files up to 2.5GB). The service then returns a link to download the file, which the user can choose to keep valid for up to 1 week and up to 100 downloads. For an extra layer of security, the user can also add a Password which is then requested before the download.

Under the hood, the file is encrypted and decrypted in the user’s browser using the Web Crypto API and 128-bit AES-GCM. A short description of how encryption is implemented is provided on this page. The secret encryption key is generated by the browser and appended to the link returned by the server for the download, as in (this is not a valid URL)

h[…]s://send.firefox.com/download/b86[…]c92/#jRBroPSur7e8t_wI6-E67w

where the last part of the URL, the fragment after the #, is the secret key; browsers never send the fragment to the server.
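
A rough sketch of this client-side scheme (Python with the cryptography package standing in for the Web Crypto API; not Mozilla’s actual code):

```python
import base64, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# A random 128-bit AES-GCM key encrypts the file locally; only the ciphertext
# is uploaded, while the key travels in the URL fragment, which the browser
# never sends to the server.
key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"file contents here", None)

fragment = base64.urlsafe_b64encode(key).rstrip(b"=").decode()
print(f"https://send.example/download/<file-id>/#{fragment}")  # placeholder URL
```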

This is very nice and simple, but to achieve a higher level of security the user has to find a secure way to share the download link with the other parties, and sending it by email is not a good idea.

Obviously the use of a Password, which can be communicated by other means (e.g. by telephone), makes it more secure. Still, the Password is not used to encrypt the file: it is used to create a signing key (with HMAC SHA-256) which is uploaded to the server, and the server then checks that a user requesting the file knows the Password by making her sign a nonce. So the Password serves only for an authentication exchange.
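The shape of that exchange, in a hedged sketch (standard-library Python; Send’s actual key derivation from the password differs, this only shows the challenge-response logic):

```python
import hashlib, hmac, os

# Upload time: the client derives a signing key from the password and sends
# it to the server (a simplified stand-in for the real derivation).
auth_key = hashlib.sha256(b"correct horse battery staple").digest()

# Download time: the server issues a nonce, the client signs it, and the
# server verifies the signature; the file encryption key is never involved.
nonce = os.urandom(16)
client_sig = hmac.new(auth_key, nonce, hashlib.sha256).digest()
server_ok = hmac.compare_digest(
    client_sig, hmac.new(auth_key, nonce, hashlib.sha256).digest())
print("password proof accepted:", server_ok)
```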

I have not found a full security and threat scenario description for this service (some information can be found here and here), but it would be nice to know which use-cases Mozilla has considered for it. Moreover, from a very quick look at the available documentation, it is not very clear to me which information the server can access during the full life-cycle of an uploaded encrypted file.

In any case, Firefox Send seems to be a new and possibly very interesting competitor in the arena of online file-sharing services, alongside Dropbox, Google Drive etc.

Recent Results on Information and Security

I recently read two articles which made me think that we still do not understand well enough what “information” is. Both articles consider ways of managing information through “side channels” or “covert channels”. In other words, whatever we do produces much more information than we believe.

The first article is “Attack of the week: searchable encryption and the ever-expanding leakage function” by cryptographer Matthew Green, in which he explains the results of this scientific article by P. Grubbs et al. The scenario is an encrypted database, that is, a database where column data in a table is encrypted so that whoever accesses the DB has no direct access to the data (this is not the case where the database files are encrypted on the filesystem). The encryption algorithm is such that a remote client who knows the encryption key can run some simple kinds of encrypted searches (queries) on the (encrypted) data, extracting the (encrypted) results; only on the remote client can the data be decrypted. Now an attacker (even a DB admin), under some mild assumptions, with some generic knowledge of the type of data in the DB and the ability to monitor which encrypted rows are returned by each query (whose parameters she cannot read), can nonetheless, by applying some advanced statistical methods from learning theory, reconstruct the contents of the table with good precision. A simple example of this is a table containing the two columns employee_name and salary, both of them with encrypted values. In practice this means that this type of encryption leaks much more information than we believed.
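The paper’s attack needs real statistical machinery, but a much simpler cousin of the same phenomenon, frequency analysis against deterministic encryption (same plaintext, same ciphertext), already shows how an encrypted column leaks. A toy sketch (standard-library Python; the data and the “public knowledge” are invented for illustration):

```python
import hashlib, hmac, os
from collections import Counter

# Deterministic "encryption" simulated with a keyed PRF: equal salaries
# always map to equal ciphertexts, so frequencies survive encryption.
key = os.urandom(32)
det_enc = lambda v: hmac.new(key, str(v).encode(), hashlib.sha256).hexdigest()[:16]

salaries = [30_000] * 60 + [50_000] * 30 + [120_000] * 10   # true distribution
column = [det_enc(s) for s in salaries]                     # what the DB stores

# The attacker sees only the encrypted column plus generic knowledge of how
# common each salary grade is, and aligns the two frequency rankings.
public_grades = {30_000: 0.6, 50_000: 0.3, 120_000: 0.1}
ct_by_freq = [c for c, _ in Counter(column).most_common()]
guess = dict(zip(ct_by_freq,
                 sorted(public_grades, key=public_grades.get, reverse=True)))
print(guess)  # ciphertext -> guessed salary, correct with high probability
```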

The second article is “ExSpectre: Hiding Malware in Speculative Execution” by J. Wampler et al. and, as the title suggests, is an extension of the Spectre CPU vulnerability. The Spectre and Meltdown attacks also have to do with information management, but in those cases the information is managed internally in the CPU and was supposed not to be accessible from outside it. In this particular article the idea is actually to hide information: the authors have devised a way of splitting a malware into two components, a “trigger” and a “payload”, such that both components appear benign to standard anti-virus and reverse-engineering techniques. So the malware is hidden from view. When both components are executed on the same CPU, the trigger alters the internal state of the CPU’s branch predictor in such a way as to make the payload execute malicious code in Spectre-style speculative execution. This does not alter the correct architectural execution of the payload program, but through Spectre extra speculative instructions are executed and these, for example, can implement a reverse shell giving an attacker external access to the system. Since the extra instructions are squashed by the CPU (never architecturally retired) at the end of the speculative execution, it appears as if they have never been executed and thus they seem to be untraceable. Currently this attack is mostly theoretical, difficult to implement and very slow. Still, it is based on managing information in covert channels, as both Spectre and Meltdown are CPU vulnerabilities which also exploit cache side-channel attacks.

NIST Announces Second Round of Post-Quantum-Cryptography Standardization

NIST has announced the conclusion of the first round of the standardization process for post-quantum-cryptography algorithms, that is, public-key encryption and digital-signature algorithms which are not susceptible to attacks by quantum computers.

The announcement can be found here, and a report on the 26 candidates admitted to the second round can be downloaded from here.

A Practical Look into GDPR for IT – Part 2

I have just published here the second article of my short series on the EU General Data Protection Regulation 2016/679 (GDPR) for IT.

In this article I discuss a few points about the risk-based approach required by the GDPR, which introduces the Data Protection Impact Assessment (DPIA), and a few IT security measures which are often useful to mitigate risks to personal data.