Apple Private Relay: Is TOR going mainstream?

Apple has announced (see here) a new iCloud premium feature called iCloud Private Relay:

when browsing with Safari, Private Relay ensures all traffic leaving a user’s device is encrypted, so no one between the user and the website they are visiting can access and read it, not even Apple or the user’s network provider. All the user’s requests are then sent through two separate internet relays. The first assigns the user an anonymous IP address that maps to their region but not their actual location. The second decrypts the web address they want to visit and forwards them to their destination. This separation of information protects the user’s privacy because no single entity can identify both who a user is and which sites they visit.

More information can be found for example here and here.

This looks very much like a form of Onion Routing, which is the theory and practice behind TOR (The Onion Router, indeed). It will be very interesting to see how it works out, because it has the potential to become a disruptive technology that improves the privacy and security of all of us when browsing the Internet.
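
As a rough illustration of the idea (and not of Apple's actual implementation), the toy sketch below shows how two layers of encryption split knowledge between two relays: the first relay sees who is asking but not for what, while the second sees the destination but not who asked. Real systems use public-key cryptography and key agreement; the sketch uses symmetric Fernet keys from the Python cryptography package only to keep it short.

```python
# Toy sketch of the layered encryption behind onion routing / a two-hop relay.
# Purely illustrative: real systems (TOR, Private Relay) negotiate keys with
# public-key cryptography; symmetric Fernet keys are used here for brevity.
# Requires the 'cryptography' package.
from cryptography.fernet import Fernet

ingress_key, egress_key = Fernet.generate_key(), Fernet.generate_key()
ingress, egress = Fernet(ingress_key), Fernet(egress_key)

# Client: innermost layer for the egress relay, outer layer for the ingress relay.
inner = egress.encrypt(b"GET https://example.com/")   # destination + request
outer = ingress.encrypt(inner)                        # hides even the inner blob

# Ingress relay: sees the client's IP and removes one layer, but the payload
# is still ciphertext, so it cannot learn the destination.
forwarded = ingress.decrypt(outer)

# Egress relay: removes the inner layer and learns the destination, but it
# only ever saw the ingress relay's address, not the client's.
print(egress.decrypt(forwarded).decode())             # -> GET https://example.com/
```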

Passwordless Authentication

Recently I have frequently come across discussions about passwordless authentication: is this myth finally becoming reality? We have been discussing and announcing the demise of passwords for at least 20 years.

Passwords can be replaced by biometrics, but also by hardware tokens (e.g. security keys), smartphones, etc., together with authenticator apps, single sign-on, identity federation and so on.

Is this enough to get rid of passwords?

Well, passwords are very cheap to manage and very scalable, well known, used and abused; they can be forgotten, but they cannot break down or be physically lost or stolen. And most systems will still use passwords or PIN codes as backup.

Already today, access to most personal devices (smartphones, tablets, laptops, etc.) is passwordless, usually via biometrics, with a password as backup. But this is very local to each personal device, and it seems difficult to scale it up to all systems and applications.
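
For what it is worth, the core mechanism behind FIDO2/WebAuthn-style passwordless login is a public-key challenge-response: the server stores only a public key, and the device signs a random challenge after being unlocked locally (for example by biometrics). A minimal sketch, leaving out attestation, origin binding and counters, could look like this (using the Python cryptography package):

```python
# Minimal sketch of passwordless challenge-response authentication.
# The real protocols (FIDO2/WebAuthn) add attestation, origin binding,
# counters, etc.; this only shows the basic idea.
import os
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Registration: the device generates a key pair (unlocked locally, e.g. by
# biometrics) and the server stores only the public key -- no shared secret.
device_private_key = ed25519.Ed25519PrivateKey.generate()
server_stored_public_key = device_private_key.public_key()

# Login: the server sends a random challenge, the device signs it, and the
# server verifies the signature with the stored public key.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge)

try:
    server_stored_public_key.verify(signature, challenge)
    print("authenticated without any password")
except InvalidSignature:
    print("authentication failed")
```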

So where do we really stand on the way to “passwordlessness”? How and when will we get there?

More Side-Channel Attacks

Side channel attacks have always been around, but with Spectre and Meltdown we reached a new level of complexity, danger and pervasiveness. One of the main ingredients of this family of attacks is to measure the time (or time difference) it takes to process or compute some data and, from this, to infer information about the data itself. In the three years since the announcement of Spectre and Meltdown, a lot of research has been done in this area: to find more hardware components whose timing can be measured and exploited, to improve the efficiency of the attacks (for which Machine Learning is of great help), and to understand whether this type of attack can become a real everyday threat for everyone.

There are recent results in all directions: “Lord of the Ring(s): Side Channel Attacks on the CPU On-Chip Ring Interconnect Are Practical” exploits contention on the CPU ring interconnect (see here and here for details and the research paper), Google Security made further progress in implementing Spectre against the Chrome browser using Javascript (here the blog announcement), and other researchers have discovered a way of performing a side channel attack in a web browser with Javascript completely disabled (see here and here for details and the research paper), which can be used as an alternative way of tracking users online. The latter result in particular is worrisome: the possibility of being subject to this kind of attack in everyday life, that is when browsing websites on the Internet, seems to be getting closer.
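
To make the basic principle concrete (the principle only: Spectre-class attacks exploit far subtler microarchitectural effects), here is a minimal, purely didactic example of a timing side channel, in which a non-constant-time comparison leaks how many leading characters of a guess are correct; real attacks need many samples and statistical filtering of noise.

```python
# Minimal illustration of a timing side channel (not Spectre itself):
# a non-constant-time comparison leaks, via execution time, how many
# leading characters of a guess are correct.
import time

SECRET = "hunter2"

def insecure_compare(guess, secret=SECRET):
    # Returns at the first mismatch -> running time depends on the data.
    for g, s in zip(guess, secret):
        if g != s:
            return False
        time.sleep(0.001)          # exaggerate the per-character cost for the demo
    return len(guess) == len(secret)

def timed(guess, samples=5):
    best = float("inf")
    for _ in range(samples):
        t0 = time.perf_counter()
        insecure_compare(guess)
        best = min(best, time.perf_counter() - t0)
    return best

# A guess sharing a longer prefix with the secret takes measurably longer:
for guess in ["zzzzzzz", "huzzzzz", "huntzzz", "hunter2"]:
    print(f"{guess!r}: {timed(guess) * 1000:.2f} ms")
```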

Hardware Vulnerability of Some FIDO U2F Keys

Hardware Security Keys, like the Google Titan Key or the Yubico YubiKey, implementing the FIDO U2F protocol, provide what is considered possibly the most secure second-factor authentication (2FA) measure. Indeed, the private key is protected in hardware and should be impossible to copy, so that only physical possession of the hardware token provides the authentication.

But recent research (see here for the research paper, and here and here for some comments) shows that a class of chips (the NXP A700X) is vulnerable (CVE-2021-3011) to a physical hardware attack which allows an attacker to extract the private key from the chip itself and then clone the security key. To fully succeed, the attack requires knowing the credentials of the service(s) for which the security key works as 2FA, plus physical access to the key itself for at least 10 hours. The security key is then taken apart and the secret key is recovered by measuring the electromagnetic radiation emitted by the chip during ECDSA signatures. Finally, a new security key can be cloned with the stolen secret key.

From a theoretical point of view, this vulnerability violates the fundamental requirement of a hardware security key, namely that the private key cannot be extracted from the hardware in any way. But it should also be noted that the FIDO U2F protocol has some countermeasures which can help mitigate this vulnerability, such as a counter of the authentications performed between a security key and a server: the server can check whether the security key is sending the expected next counter value, which would differ from the one provided by a cloned security key.
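
As a concrete illustration of this counter-based countermeasure, here is a minimal server-side sketch; the names and data structures are illustrative and do not correspond to any actual FIDO library API.

```python
# Sketch of the FIDO U2F signature-counter check on the server side.
# Each authenticator increments a counter at every authentication and the
# server stores the highest value it has seen: a cloned key quickly gets
# out of sync with the original and is rejected.

last_seen_counter = {}   # key_handle -> highest counter observed so far

def check_counter(key_handle: str, reported_counter: int) -> bool:
    previous = last_seen_counter.get(key_handle, -1)
    if reported_counter <= previous:
        # Possible cloned key (or replay): the counter did not move forward.
        return False
    last_seen_counter[key_handle] = reported_counter
    return True

assert check_counter("key-1", 10)        # genuine key, counter advances
assert not check_counter("key-1", 10)    # clone re-using an old counter is flagged
```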

In practical terms, it all depends on the risks associated with the use of the security key and on the possibility that someone could borrow your security key for at least 10 hours without anybody noticing. If this constitutes a real risk, check whether your security key is affected by this vulnerability and, if it is, replace it. Otherwise, if this attack scenario is not a major threat, it should be reasonably safe to continue using even vulnerable security keys for a little while, while keeping up to date with new developments and information from the security key manufacturers. Even in this case, vulnerable security keys should anyway be replaced as soon as convenient.


News on Fully Homomorphic Encryption

Fully homomorphic encryption (FHE) would drastically improve the security of sensitive computations, and more generally of using Cloud and third-party resources, by allowing computations to be performed directly on encrypted data without any need to know the decryption key.

But as of today, fully homomorphic encryption is extremely inefficient, making it impractical for general use. Still, development continues and new results keep being achieved. For example, IBM recently announced a Homomorphic Encryption Service, which is probably more of an online demo, but its purpose seems to be to simplify the path to early adoption for especially interested parties. And IBM is not alone: among others, Google and Microsoft are developing open source libraries which a developer expert in cryptography can use to build, for example, end-to-end encrypted computation services in which users never need to share their keys with the service.
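
To give a concrete, if very simplified, sense of what computing on encrypted data means, here is a toy implementation of the Paillier cryptosystem. Paillier is only partially (additively) homomorphic, not fully homomorphic, and the parameters below are absurdly small, but it shows how a third party can add two numbers it only ever sees in encrypted form.

```python
# Toy Paillier cryptosystem: only *additively* (partially) homomorphic and
# completely insecure with such tiny parameters. It illustrates the core idea
# behind homomorphic encryption: a party holding only ciphertexts can compute
# on the data without ever seeing it or the decryption key.
import math
import random

def keygen(p=17, q=19):                       # toy primes, far too small for real use
    n = p * q
    g = n + 1                                 # standard simplification g = n + 1
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    mu = pow(lam, -1, n)                      # valid because g = n + 1
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    n2 = n * n
    l = (pow(c, lam, n2) - 1) // n            # the "L" function of Paillier
    return (l * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 20), encrypt(pub, 22)
c_sum = (c1 * c2) % (pub[0] ** 2)             # multiplying ciphertexts adds plaintexts
print(decrypt(priv, c_sum))                   # -> 42, computed without decrypting c1 or c2
```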

APT Attacks and Single Point of Failure

We are all following the developments of the “SolarWinds incident”, but one comment comes to my mind (see also this Advisory from the NSA).

There is a very difficult trade-off between the management of IT in general (and of IT security in particular) and security itself. To manage IT, from networks to servers to services, as well as IT security, it is definitely more effective to do it from a central point, adopting a single strategy to manage and control everything in the same way and at the same time (the “holistic” approach). This means having a single, central console from which to manage and control all of our IT systems and services, a single point at which to authenticate all users (e.g. Federated Single Sign-On), and so on. This approach is becoming more and more a requirement as we move towards a service-based IT in which services can be anywhere on the Internet, access requires a Zero Trust approach, and security is applied at a very granular level to all systems and services.

By doing this we can vastly improve the security of each single system or service, and we gain the possibility to monitor every single access or transaction. But in doing so we also concentrate activities crucial for security in single points: what can happen to systems and services if the central management console is taken over? What can happen if the central authentication service is infiltrated?

Malware, Ransomware & co.

These days, apart from reading about major incidents like the SolarWinds one, I keep hearing directly from people I know about malware incidents, ransomware incidents and so on. My personal statistics have never been so high. I do not know whether this is a personally unlucky end to an unlucky year, or a very bad end to an already bad year. I wish everybody a pandemic-free (in all senses) 2021!

Thoughts on Blue/Red/Purple Teams and defending from Targeted Attacks

Defending against Targeted Attacks (and even more against Advanced Persistent Threats, APT) is difficult and usually quite expensive. 

We all know the basics of IT security, from cybersecurity awareness and training to anti-malware, firewalls and network segmentation, hardening and patching, monitoring and vulnerability assessments / penetration tests (VA/PT), third-party cybersecurity contract clauses, etc.

But this is not enough. We also need Single Sign-On (SSO, or even Federated Authentication) and Multi-Factor Authentication (MFA), Zero Trust architectures, tracing of all local, remote and mobile activities (networks and hosts), SIEM data collection/management and SOC analysis, a cybersecurity Incident Team and an Incident Response plan.

But to defend against Targeted Attacks we need to go another step further. We have designed and implemented all the security measures we could think of, but are they enough? Did we forget something? For sure we are ready for an everyday malware attack, but what about a Targeted Attack whose authors could spend months studying us before striking?

To answer this question, it seems that the only possibility is to think and act like the attacker and look at our IT environment from that point of view. This is where Blue, Red and Purple teams come into play: they take the roles of attackers and defenders on our IT environment to test our cybersecurity posture to its limits. They will find holes and access paths we did not think about or even believe possible, but also smarter ways to defend ourselves.

But … what about a Risk Based approach to Security?

In other words, how much is it going to cost us?

Can we afford it?

Finally, is it worth going “all out”, or can we, by accepting some risks, continue to do what we have been doing all along in cyber/IT security? And in that case, how do we evaluate these “Risks” we need to accept?

PS. The last one is partly a rhetorical question on my part.