Interesting reading: here are the NSA’s “Quantum FAQs”.
Cryptography is too often considered the “final” solution for IT security. In reality, cryptography is often useful but rarely easy to use, and it brings its own risks, including an “all-in or all-out” scenario.
Indeed, suppose that your long-lived master key is exposed: then all the security provided by cryptography is immediately lost, which seems to be what happened in this case.
Fully homomorphic encryption (FHE) would drastically improve the security of sensitive computations, and more generally of using Cloud and third-party resources, by making it possible to perform computations directly on encrypted data without knowing the decryption key.
But as of today, fully homomorphic encryption is extremely inefficient, making it impractical for general use. Still, development is continuing and new results are being achieved. For example, IBM recently announced a Homomorphic Encryption Service, which is probably more of an online demo, but the purpose seems to be to smooth the path to early adoption by specially interested parties. IBM is not alone: among others, Google and Microsoft are developing Open Source libraries which a developer expert in cryptography can use to build, for example, end-to-end encrypted computation services where the users never need to share their keys with the service.
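To get a feel for what “computing on encrypted data” means, here is a toy sketch of a *partially* homomorphic scheme: textbook (unpadded) RSA, where multiplying two ciphertexts yields a valid encryption of the product of the plaintexts. FHE extends this idea to arbitrary computations. The parameters below are tiny and utterly insecure; this only illustrates the homomorphic property, not any real FHE scheme.

```python
# Toy illustration of a *partially* homomorphic scheme: textbook
# (unpadded) RSA satisfies E(a) * E(b) mod n = E(a * b mod n).
# Parameters are deliberately tiny and insecure -- a sketch only.

n = 3233          # n = 61 * 53, a toy RSA modulus
e = 17            # public exponent
d = 2753          # private exponent (17 * 2753 = 1 mod lcm(60, 52))

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 11
c_product = (encrypt(a) * encrypt(b)) % n   # computed on ciphertexts only
assert decrypt(c_product) == (a * b) % n    # equals the product of plaintexts
print(decrypt(c_product))                   # 77
```

Fully homomorphic schemes support both addition and multiplication on ciphertexts (and hence arbitrary circuits), which is what makes them so much harder and, today, so much slower.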
The description of Zerologon, CVE-2020-1472, has recently been published (see here for a summary and here for the technical paper). Do not worry: Microsoft already patched the bug in August (see here).
The bug allows anyone who can connect via TCP/RPC/Netlogon to an unpatched Active Directory domain controller to become a domain administrator; nothing else is needed. The cause of this bug is a seemingly minor glitch in the implementation of the cryptographic algorithm AES-CFB8: the Initialisation Vector is kept fixed at zero instead of being unique and randomly generated (more details are provided in the technical paper mentioned above).
Again on Phishing, this time with a new twist.
We all know, and by now use, complex SaaS Cloud services, like Microsoft’s Office 365, Google’s G Suite, Amazon’s services and so on. They are all very modular, meaning that there are multiple data storage services and multiple application services from which to choose. Often a user must authorise a Cloud App to access her/his own data, and the App can also come from an external provider (a “partner” of the service). The authorisation is usually implemented with OAuth which, in a few words, is a secure delegated-access protocol based on the exchange of cryptographic tokens.
So what is the scam? Simple: you receive an email which looks like it comes from [name your Cloud provider here] and asks you to authorise an App (which looks authentic) to access your data. You do not need to enter any password, since you are already logged in to your Cloud service platform: you just click on the button, and that’s it!
You have given access to all your Cloud data and services to a fraudster, who can get your data and act as you!
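What makes this kind of consent phishing so effective is that the link really does point at the legitimate provider’s OAuth authorisation endpoint — only the registered App (the `client_id` and its redirect URI) belongs to the attacker. The sketch below illustrates this with entirely hypothetical names; the point is that checking “is this my provider’s domain?” is not enough.

```python
# Illustration of a consent-phishing authorisation link. All hostnames,
# client IDs and scope names below are hypothetical examples.
from urllib.parse import urlencode, urlparse, parse_qs

params = {
    "client_id": "attacker-registered-app-id",       # the rogue App
    "redirect_uri": "https://evil.example/callback", # attacker-controlled
    "response_type": "code",
    "scope": "Mail.Read Files.Read offline_access",  # broad data access
}
phishing_link = ("https://login.cloud-provider.example/oauth2/authorize?"
                 + urlencode(params))

# The hostname the victim sees (and that TLS certifies) is the real
# provider's own login endpoint, so the page looks perfectly authentic.
parsed = urlparse(phishing_link)
print(parsed.hostname)                  # login.cloud-provider.example
print(parse_qs(parsed.query)["scope"])  # the permissions being granted
```

The defence is to read the consent screen itself: which App is asking, and for which permissions — not just which domain is serving the page.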
For more details read for example this article by ArsTechnica.
I have been reading this article by Ars Technica on smartphones’ 2FA Authenticator Apps and put it together with some thoughts about Hardware Security Keys (implementing the U2F [CTAP1] standard by the FIDO Alliance). In both scenarios, the security of the Second Factor Authentication (2FA) is based on the confidentiality of a cryptographic private key, held securely in the smartphone in the first case and in the USB hardware key in the second. In other words, the private key cannot be read, copied or transferred to any other device, as is also the case for any smart card (or integrated circuit card, ICC) or any Hardware Security Module (HSM).
Good for Security, but what about Availability?
Consider this very simple (and secure) scenario: my smartphone with a 2FA Authenticator App (or, alternatively, my U2F USB key) breaks down completely, and I have no way, at least within usual economic and time constraints, to recover it.
Now what about access to all those services which I set up in a very secure way with 2FA?
There is no access, and no immediately available way to recover it! (But see below for more complete scenarios.)
The only possibility is to get new hardware and go through a usually manual and lengthy process to regain access to each account and configure the new 2FA hardware.
So Availability is lost, and it can take quite some time and effort to regain it.
The article by Ars Technica mentioned above describes how some 2FA Authenticator Apps provide a solution to this problem: the idea is to install the App on multiple devices and securely copy the secret private key to each one of them. But now the secret private key, even if itself encrypted, is no longer “hardware protected”, since it can be copied and transferred. So Security/Confidentiality goes down but Availability goes up.
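The reason this copying is possible at all is that most Authenticator Apps implement TOTP (RFC 6238), where the “private key” is really just a shared secret: whoever holds the same bytes produces the same codes. The minimal stdlib sketch below makes that concrete, checked against the RFC’s published test vector.

```python
# Minimal TOTP (RFC 6238) sketch: the entire second factor boils down
# to a shared secret key, which is exactly why an Authenticator App
# can back it up to a second device -- and why doing so trades the
# "the key can never leave the hardware" guarantee for Availability.
import hmac, hashlib, struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    counter = struct.pack(">Q", unix_time // step)          # time-based counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59 s
secret = b"12345678901234567890"
print(totp(secret, 59, digits=8))  # 94287082, per the RFC's test table
```

A U2F key, by contrast, performs its signature inside the hardware and never exposes the private key, so there is simply nothing to back up — hence the multiple-keys approach described next.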
This process is obviously impossible as such with U2F hardware keys. In this case an alternative is to have multiple U2F hardware keys registered with all accounts and services so that if one key breaks one can always use a backup key to access the accounts and services. (For more info, see for example this support page by Google.)
Apple and Google (in alphabetical order) have released a draft of a cryptographic protocol named Contact Tracing (the specification is here): a new privacy-preserving Bluetooth protocol to support COVID-19 contact tracing. As far as I understand (please correct me if I have misunderstood something), it should work as follows:
- Bluetooth LE on the devices is extended with this new protocol
- A service provider distributes an App which makes use of the protocol and communicates with a server managed by the service provider or a third party
- Users install the App on their devices and keep Bluetooth on
- When two devices with the App installed are nearby, they exchange some locally generated cryptographic key material called Rolling Proximity Identifiers: these identifiers are privacy preserving, that is, from an identifier alone it is not possible to identify the device which originated it; all Rolling Proximity Identifiers are stored only locally on the devices themselves (both originator and receiver)
- When a user tests positive for COVID-19, she or he enters this information in the App, which then generates a set of cryptographic key material called Diagnosis Keys corresponding to the days in which the user could already have been infected; the App then sends the Diagnosis Keys to the server, which distributes them to all other devices on which the App is running
- When an App receives some Diagnosis Keys from the server, it can compute a set of Rolling Proximity Identifiers and check whether at least one is present in the local storage; if there is a match, the information derived is that on a certain day, in a 10-minute time interval, the user of the App was in proximity to a person who later tested positive for COVID-19.
Obviously a Privacy pre-requisite to all this is that neither server nor App manage or store any other information or metadata about the users and the devices on which the App runs.
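To make the derive-and-match mechanism more concrete, here is a deliberately simplified sketch — not the exact draft specification — of how such identifiers can work: a device keeps a secret daily key, broadcasts short identifiers derived from it, and later, given a published Diagnosis Key, anyone can recompute that day’s identifiers and check them against what was stored locally. The label (`"CT-RPI"`), key sizes and interval count are illustrative approximations of the draft.

```python
# Simplified sketch (NOT the exact Apple/Google draft) of deriving and
# matching privacy-preserving proximity identifiers. Labels and sizes
# are illustrative approximations.
import hmac, hashlib, os

def rolling_proximity_identifier(daily_key: bytes, interval: int) -> bytes:
    # One short identifier per 10-minute interval of the day.
    msg = b"CT-RPI" + interval.to_bytes(4, "little")
    return hmac.new(daily_key, msg, hashlib.sha256).digest()[:16]

# Device A broadcasts identifiers over Bluetooth; device B stores them.
daily_key_A = os.urandom(16)
seen_by_B = {rolling_proximity_identifier(daily_key_A, i) for i in (42, 43)}

# Later, A tests positive and publishes daily_key_A as a Diagnosis Key.
# B recomputes all 144 ten-minute intervals of that day and intersects
# with its local storage, learning only "I was near a later-positive
# person that day" -- not who, nor where.
recomputed = {rolling_proximity_identifier(daily_key_A, i) for i in range(144)}
print(bool(seen_by_B & recomputed))  # True: contact detected
```

Note that without the published Diagnosis Key, the stored identifiers are just opaque random-looking bytes, which is what makes the broadcast privacy preserving.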
A few days ago a new attack was made public which makes it easier to forge hashes (or “message digests”) computed with SHA1 (see for example this article for more details).
This new collision attack makes it faster and less expensive to create two documents with the same digital signature using SHA1 or, given a digitally signed document whose signature uses SHA1 as the hash algorithm, to create a second, different document with the same digital signature.
It has been known for a few years that SHA1 is broken and that it should not be used for digital signatures or, in general, for security purposes (actually NIST suggested stopping the use of SHA1 as early as 2012). But as usual, in the real world there are still many, too many, legacy applications which continue to use SHA1 today, one example being legacy PGP keys and old PGP digital signatures.
This new result should be at least a useful reminder to all of us to check whether we are still using SHA1 in some applications and, if so, finally update them.
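As a first, very rough step for that check, something as simple as a recursive grep over a source tree already surfaces most lingering references. The helper below is a trivial sketch of that idea (file extensions and the pattern are just starting points to adapt):

```python
# A trivial helper -- essentially a recursive grep -- to spot lingering
# SHA1 references in a source tree, as a starting point for the
# "are we still using SHA1 somewhere?" check suggested above.
import re
from pathlib import Path

SHA1_PATTERN = re.compile(r"sha-?1\b", re.IGNORECASE)

def find_sha1_mentions(root: str, suffixes=(".py", ".java", ".c", ".conf")):
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            text = path.read_text(errors="ignore")
            for lineno, line in enumerate(text.splitlines(), 1):
                if SHA1_PATTERN.search(line):
                    hits.append((str(path), lineno, line.strip()))
    return hits
```

Of course a textual match proves nothing by itself (and misses SHA1 hidden behind configuration or defaults), but it gives a concrete list of places to review.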
This recent paper, “From Collisions to Chosen-Prefix Collisions – Application to Full SHA-1” by G. Leurent and T. Peyrin, puts another nail in the coffin of SHA1. The authors present a chosen-prefix collision attack on SHA1 which allows client impersonation in TLS 1.2 and peer impersonation in IKEv2, with an expected cost between 1.2 and 7 million US$. The authors expect that it will soon be possible to bring the cost down to 100,000 US$ per collision.
As for CA certificates, the attack allows, at the same cost, the creation of a rogue CA and of fake certificates whose signature is based on a SHA1 hash, but only if the true CA does not randomize the serial number field in its certificates.
It has been known for almost 15 years that SHA1 is not secure: NIST deprecated it in 2011, and it should not have been used from 2013 onwards, replaced instead by SHA2 or SHA3. By 2017 all browsers should have removed support for SHA1, but the problem, as always, lies with the legacy applications that still use it: how many of them are still out there?
Mozilla has just released a new service, Firefox Send, to share files with a higher level of security. Firefox Send is quite easy to use: just access the web page and upload a file (up to 1GB; one needs to register to upload files up to 2.5GB). The service then returns a link to download the file, which the user can choose to keep valid for up to 1 week and up to 100 downloads. For an extra layer of security, the user can also add a Password which is then requested before the download.
Under the hood, the file is encrypted and decrypted in the browser of the user using the Web Crypto API and 128-bit AES-GCM. A short description of how encryption is implemented is provided on this page. The secret encryption key is generated by the browser and appended to the download link returned by the server, as in (this is not a valid URL)
where the last part of the URL is the secret key.
This is very nice and simple, but to achieve a higher level of security the user has to find a secure way to share the download link with the other parties, and sending it by email is not a good idea.
Obviously the use of a Password which can be communicated by other means (e.g. by telephone) makes it more secure. Still, the Password is only used to create a signing key (with HMAC SHA-256) which is uploaded to the server. The server then checks that a user requesting the file knows the Password by making her sign a nonce. So the Password is not used to encrypt the file, but only for an authentication exchange.
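The pattern just described — Password never sent, never used for encryption, only for answering a server challenge — can be sketched generically as follows. The key derivation below is plain PBKDF2 for illustration; Firefox Send’s exact derivation differs, so this only shows the shape of the exchange, not Mozilla’s implementation.

```python
# Generic sketch of a password-based challenge-response exchange, the
# pattern described above. The KDF here (PBKDF2) is an illustrative
# stand-in: Firefox Send's actual derivation differs.
import hmac, hashlib, os

def derive_signing_key(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Upload time: the client derives the signing key from the Password
# and uploads the key (not the Password) to the server.
salt = os.urandom(16)
server_stored_key = derive_signing_key("correct horse battery", salt)

# Download time: the server issues a fresh nonce; the client signs it
# with the key re-derived from the Password it knows; the server
# compares signatures in constant time.
nonce = os.urandom(32)
client_sig = hmac.new(derive_signing_key("correct horse battery", salt),
                      nonce, hashlib.sha256).digest()
expected = hmac.new(server_stored_key, nonce, hashlib.sha256).digest()
print(hmac.compare_digest(client_sig, expected))  # True for the right Password
```

Note that in this pattern the server never sees the file’s encryption key at all: the Password only gates *access* to the (already encrypted) download.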
I have not found a full security and threat scenario description for this service (some information can be found here and here), but it would be nice to know which use-cases Mozilla has considered for it. Moreover, from a very quick look at the available documentation, it is not very clear to me what information the server can access during the full life-cycle of an uploaded encrypted file.
In any case, Firefox Send seems to be a new and possibly very interesting competitor in the arena of online file sharing services together with Dropbox, Google Drive etc.