On Evil Maid Attacks and Remote Working

I have just been reading this post by Dolos Group researchers (see here also for another report) on a physical attack on an unattended/stolen laptop, which goes under the name of an "Evil Maid" attack. Leaving aside the technical expertise required to perform the attack, which is balanced by its simplicity and speed, the consequences can be dire: the attacker can fully access the company's network as the owner of the laptop, from his/her own device.

We know very well that IT remote working brings in quite a few extra risks from the security point of view, and the ones related to a physical attack are usually not the first or the most worrisome. Managing the IT security of a standalone PC in a company's office is quite different from managing the IT security of a remote working laptop. How can we provide and monitor the IT security of remote working devices? What is the best approach, at least as of today? Is it better to strengthen the IT security of portable devices from their hardware up, or to give up on it completely and adopt virtual/cloud (and/or VDI) desktop solutions, either approach coupled with zero trust, multi-factor authentication, security monitoring etc.?

In any case, it seems clear that the IT security risks associated with remote working have not yet been fully appreciated, particularly given the explosion of remote working due to the pandemic.

From DNS Attacks to Input Validation and (Zero) Trust

I have been reading this article on DNS attacks, which also reminds us that if an attacker controls a DNS server, or is able to poison a DNS cache, malware or at least unsafe data can be transmitted to a target application. DNS data is, in any case, input data to an application, and as such it should be validated prior to any processing. Instead, it is often treated as trusted data, particularly when DNSSEC is used.

Unfortunately the situation is a little complex: with very few exceptions (such as "DNS over HTTPS, DoH"), applications do not retrieve DNS data directly but let the Operating System do it for them. And every application trusts the Operating System on which it runs; should this imply that data received from the Operating System can be trusted, or not? Which System Calls should an application trust, and which should it not? What kind of data validation should an application expect the Operating System to perform on data passed through System Calls? The Devil, and unfortunately the Security, is in the details…

Which brings me to consider the concept of zero trust in this respect: first of all, in IT we always need to trust the security of something, at least the Hardware (e.g. the CPU), and unless we look at Confidential Computing (see here for example), we also need to trust quite a lot of Software.

So we are back to Input Validation, which can make applications bigger, slower and in some cases even harder to maintain; but accepting as input only data whose format and content we expect certainly makes life much safer.
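As a minimal sketch of this idea applied to DNS data: before a name is handed to the resolver (or after it comes back from one, e.g. from a CNAME chain), it can at least be checked against hostname syntax, so that unexpected bytes never reach the rest of the application. The function names here are illustrative, not from any particular library.

```python
import re
import socket

# RFC 1123 hostname label: letters, digits and hyphens, not starting
# or ending with a hyphen, 1 to 63 characters long.
_LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_valid_hostname(name: str) -> bool:
    """Return True only if `name` is a syntactically valid hostname."""
    if not name or len(name) > 253:
        return False
    if name.endswith("."):       # a trailing dot denotes the DNS root
        name = name[:-1]
    return all(_LABEL.match(label) for label in name.split("."))

def resolve_checked(name: str):
    """Validate the name before handing it to the resolver.
    (Sketch: the resolver's answer is untrusted input too and would
    deserve its own checks before further processing.)"""
    if not is_valid_hostname(name):
        raise ValueError(f"refusing to resolve malformed name: {name!r}")
    return socket.getaddrinfo(name, None)
```

The point is not this specific regex, but the habit: data crossing a trust boundary, even via the Operating System, gets validated before use.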

Apple Private Relay: Is TOR going mainstream?

Apple has announced (see here) a new iCloud premium feature called iCloud Private Relay:

when browsing with Safari, Private Relay ensures all traffic leaving a user’s device is encrypted, so no one between the user and the website they are visiting can access and read it, not even Apple or the user’s network provider. All the user’s requests are then sent through two separate internet relays. The first assigns the user an anonymous IP address that maps to their region but not their actual location. The second decrypts the web address they want to visit and forwards them to their destination. This separation of information protects the user’s privacy because no single entity can identify both who a user is and which sites they visit.

More information can be found for example here and here.

This seems very much a form of Onion Routing, which is the theory and practice behind TOR (The Onion Router, indeed). It will be very interesting to see how it works out, because it has the potential to become a disruptive technology that improves the privacy and security of all of us when browsing the Internet.
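The core idea of onion routing can be illustrated with a toy two-layer example like Apple's two relays: the client wraps the request in two layers of encryption, so the first relay sees who is asking but not what for, and the second sees the request but not who asked. The XOR "cipher" below is deliberately a toy (a SHA-256 counter keystream), only there to make the layering visible; it is in no way the actual cryptography used by TOR or Private Relay.

```python
import hashlib
import itertools

def _keystream(key: bytes):
    """Toy keystream from SHA-256 in counter mode -- NOT secure crypto,
    only here to make the layering mechanism visible."""
    for counter in itertools.count():
        yield from hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

def xor_layer(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; applying it twice removes the layer."""
    return bytes(b ^ k for b, k in zip(data, _keystream(key)))

# The client wraps the request for relay 2 inside a layer for relay 1.
request = b"GET https://example.com/"
k1 = b"key-shared-with-relay1"
k2 = b"key-shared-with-relay2"
onion = xor_layer(k1, xor_layer(k2, request))

# Relay 1 knows the client's address but, after peeling its layer,
# holds only an opaque blob addressed to relay 2, not the destination.
blob_for_relay2 = xor_layer(k1, onion)

# Relay 2 recovers the request but never saw the client's address.
assert xor_layer(k2, blob_for_relay2) == request
```

No single relay can pair the user's identity with the destination, which is exactly the privacy property Apple describes.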

Passwordless Authentication

Recently I have frequently come across discussions about passwordless authentication: is this myth finally becoming reality? For at least 20 years we have been discussing and announcing the demise of passwords.

Passwords can be replaced by biometrics, but also by hardware tokens (e.g. security keys), smartphones etc., together with authenticator apps, single sign-on, identity federation and so on.
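To make one of these alternatives concrete: the codes generated by authenticator apps are typically TOTP (RFC 6238), just an HMAC of the current time step under a shared secret, truncated to a few digits. A compact sketch:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
    dynamically truncated to a short decimal code."""
    now = time.time() if for_time is None else for_time
    counter = int(now // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Note that the secret is still a shared static credential: the password has not disappeared, it has moved into the device and stopped transiting the network in clear.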

Is this enough to get rid of passwords?

Well, passwords are very cheap to manage and very scalable, well known, used and abused; they can be forgotten, but they cannot break down or be physically lost or stolen. And most systems will still use passwords/PIN codes as a backup.

Already today, access to most personal devices (smartphones, tablets, laptops etc.) is passwordless, usually via biometrics, with a password as backup. But this is very local to each personal device, and it seems difficult to scale it up to all systems and applications.

So where do we really stand on the way to “passwordlessness”? How and when will we get there?

More Side-Channel Attacks

Side-channel attacks have always existed, but with Spectre and Meltdown we reached a new level of complexity, danger and pervasiveness. One of the main ingredients of this family of attacks is to measure the time (or time difference) it takes to process/compute some data, and from this to infer information about the data itself. In the three years since the announcement of Spectre and Meltdown, a lot of research has been done in this area: to find more hardware components whose timing can be measured and exploited, to improve the efficiency of the attacks (for which Machine Learning is of great help), and to understand whether this type of attack can become a real everyday threat for everyone. There are recent results in all directions: "Lord of the Ring(s): Side Channel Attacks on the CPU On-Chip Ring Interconnect Are Practical" exploits contention on the CPU ring interconnect (see here and here for details and the research paper), Google Security made further progress in implementing Spectre against the Chrome browser using Javascript (here the blog announcement), and other researchers have discovered a way of performing a side-channel attack in a Web browser with Javascript completely disabled (see here and here for details and the research paper), which can be used as an alternative way of tracking users online. The latter result in particular is worrisome: the possibility of being subject to this kind of attack in everyday life, that is while browsing websites on the Internet, is getting closer.
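The attacks above are microarchitectural, but the "infer secrets from execution time" ingredient shows up in its simplest form in plain software, for example in secret comparison. A naive byte-by-byte comparison returns as soon as a byte differs, so its running time leaks how long the matching prefix is; the standard defence is a constant-time comparison:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    """Returns at the first mismatching byte: the running time leaks
    the length of the matching prefix, one byte at a time."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    """Examines every byte regardless of where the first mismatch is,
    so timing reveals nothing about the secret's content."""
    return hmac.compare_digest(a, b)
```

An attacker who can time `naive_equal` against a secret token can guess it one byte at a time; `hmac.compare_digest` removes that timing signal. It is the same principle, at a vastly lower level of sophistication, as the CPU-level attacks discussed above.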

Online Tracking and the End of Third Party Cookies

Third party cookies are those cookies that a browser sends to a website different from the one we are visiting. This functionality is used, and too often abused, for marketing purposes and to track users' Web navigation. Indeed, the latest announcements by browser makers indicate that within a year most browsers will no longer allow any third party cookie.

But there is a nice little trick to get around this restriction. Suppose one is visiting the company.com website. Cookies for websites in the company.com domain and any subdomain, like shop.company.com, are not third party, whereas, relative to the company.com domain, cookies for websites in the tracker.com domain are third party. Now the idea is very simple: what if there is a domain tracker.company.com which points to the tracker.com website? Cookies for tracker.company.com are not third party, so they are allowed. And this can be done quite easily with the appropriate DNS configuration. The principal DNS record is the A (Address) record, which maps a domain name to an IP address. But the CNAME (Canonical Name) record is also very common: it maps a domain name to another domain name (that is, an alias), like tracker.company.com mapped to tracker.com. So the browser sees tracker.company.com (first party), but the cookie ends up at tracker.com (third party). As simple as that.
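The first-party check that this trick exploits can be sketched as follows. This is a simplified version of what browsers do (real browsers use the Public Suffix List to find the registrable domain; here we just take the last two labels), but it shows why the CNAME-cloaked name slips through:

```python
def is_first_party(cookie_domain: str, page_domain: str) -> bool:
    """Simplified browser-style check (no Public Suffix List): a cookie
    is first party if its domain equals the page's registrable domain
    or is a subdomain of it."""
    site = ".".join(page_domain.lower().split(".")[-2:])  # e.g. company.com
    cd = cookie_domain.lower().rstrip(".")
    return cd == site or cd.endswith("." + site)

# Visiting www.company.com:
assert is_first_party("shop.company.com", "www.company.com")      # allowed
assert not is_first_party("tracker.com", "www.company.com")       # blocked
# The CNAME trick: tracker.company.com aliased to tracker.com still
# passes the name-based check, even though the server is the tracker's.
assert is_first_party("tracker.company.com", "www.company.com")   # allowed
```

The check is purely name-based, while the CNAME redirection happens at the DNS layer below it; that mismatch is the whole trick.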

Third party cookies are mostly used for advertisement purposes: in practice, one puts in the website some code from the advertisement company which displays advertisements, counts views and tracks viewers. With CNAME-cloaked tracking, the owner of the website not only has to install the code for displaying the advertisements, but also has to insert in the DNS the CNAME record pointing to the advertisement website.

Third party cookies have not disappeared yet, and the competition over CNAME-cloaked tracking has already started: how can browsers and ad-blocker extensions block these disguised third party cookies? And how can advertisement companies continue to track users' Web navigation?

PS. A non-technical comment: nothing in life is really free; free resources on the Internet must be paid for by someone, and a free website is often paid for by advertisements. The important point is to find the right balance between price and hidden costs (including our personal information).

Hardware Vulnerability of Some FIDO U2F Keys

Hardware Security Keys, like the Google Titan Key or the Yubico YubiKey, implementing the FIDO U2F protocol, provide what is considered possibly the most secure second-factor authentication (2FA) measure. Indeed, the private key is protected in hardware and should be impossible to copy, so that only physical possession of the hardware token provides the authentication.

But recent research (see here for the research paper, and here and here for some comments) shows that a class of chips (the NXP A700X) is vulnerable (CVE-2021-3011) to a physical hardware attack which allows the extraction of the private key from the chip itself, and thus the cloning of the security key. To fully succeed, the attack requires knowing the credentials of the service(s) for which the security key works as 2FA, and physical availability of the key itself for at least 10 hours. The security key is then dismantled, and the secret key is obtained by measuring the electromagnetic radiation emitted by the key during ECDSA signatures. Finally, a new security key can be cloned with the stolen secret key.

From a theoretical point of view, this vulnerability violates the fundamental requirement of a hardware security key: that the private key cannot be extracted from the hardware in any way. But it should also be noted that the FIDO U2F protocol has some countermeasures which can help mitigate this vulnerability, like a counter of the authentications performed between a security key and a server: the server can check whether the security key is sending the correct next sequence number, which would differ from the one provided by a cloned security key.
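The counter countermeasure is simple enough to sketch server-side. The class and method names below are illustrative, not from the FIDO specifications, but the logic is the one described above: once the genuine key and a clone diverge, one of the two will eventually present a stale counter and be refused.

```python
class U2FServerRecord:
    """Server-side state for one registered key: in FIDO U2F the
    signature counter must strictly increase on every authentication."""

    def __init__(self):
        self.last_counter = 0

    def check_assertion(self, counter: int) -> bool:
        """Accept only a strictly increasing counter; a stale or
        repeated value suggests a cloned key and should be flagged."""
        if counter <= self.last_counter:
            return False
        self.last_counter = counter
        return True

record = U2FServerRecord()
assert record.check_assertion(1)      # genuine key, counter 1
assert record.check_assertion(2)      # genuine key, counter 2
assert not record.check_assertion(2)  # clone replaying counter 2: refused
```

The limitation is equally clear from the sketch: if the clone is used before the genuine key, it is the genuine key that gets refused, and the fraud is detected only after the fact.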

In practical terms, it all depends on the risks associated with the use of the security key, and on the possibility that someone could borrow your security key for at least 10 hours without anybody noticing. If this constitutes a real risk, check whether your security key is impacted by this vulnerability and, if it is, replace it. Otherwise, if this attack scenario is not a major threat, it should be reasonably safe to continue using even vulnerable security keys for a little while, keeping up to date with possible new developments or information from the security key manufacturers. Even in this case, vulnerable security keys should anyway be replaced as soon as convenient.
News on Fully Homomorphic Encryption

Fully homomorphic encryption (FHE) would drastically improve the security of sensitive computations, and in general of using Cloud and third party resources, by allowing computations to be performed directly on encrypted data without any need to know the decryption key.

But as of today, fully homomorphic encryption is extremely inefficient, making it impractical for general use. Still, development continues and new results are being achieved. For example, IBM recently announced a Homomorphic Encryption Service, which is probably more of an online demo, but the purpose seems to be to simplify the path to early adoption by specially interested parties. And IBM is not alone: among others, Google and Microsoft are developing Open Source libraries which can be used by developers expert in cryptography to build, for example, end-to-end encrypted computation services where users never need to share their keys with the service.
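The flavour of "computing on encrypted data" can be tasted with a much older and simpler scheme than FHE: the Paillier cryptosystem, which is only partially homomorphic (it supports addition of plaintexts via multiplication of ciphertexts) but conveys the core idea. A toy sketch, with deliberately tiny primes that no real deployment would ever use:

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic): the product of
# two ciphertexts decrypts to the SUM of the two plaintexts.
# Tiny primes for illustration only; real keys use >= 2048-bit moduli.
p, q = 1009, 1013
n = p * q
n2 = n * n
g = n + 1                      # standard choice of generator
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)           # valid because g = n + 1

def encrypt(m: int) -> int:
    """c = g^m * r^n mod n^2, with r random and coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """m = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) / n."""
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

a, b = 123, 456
c_sum = (encrypt(a) * encrypt(b)) % n2   # computed on ciphertexts only
assert decrypt(c_sum) == a + b           # ...yet it decrypts to a + b
```

A server holding only the ciphertexts (and the public key) can compute the encrypted sum without ever seeing 123 or 456; FHE extends this to arbitrary computations, which is exactly what makes it so attractive, and so far so expensive.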

APT Attacks and Single Points of Failure

We are all following the developments of the "SolarWinds incident", but one comment comes to my mind (see also this Advisory from the NSA).

There is a very difficult trade-off between the management of IT in general (and of IT security in particular) and security itself. To manage IT, from networks to servers to services, and IT security, it is definitely more effective to do it from a central point, adopting a single strategy to manage and control everything in the same way and at the same time (the "holistic" approach). This means having a single/central console/point from which to manage and control all of our IT systems and services, a single point at which to authenticate all users (e.g. Federated Single Sign-On), etc. This approach is increasingly becoming a requirement as we move towards a service-based IT where services can be anywhere on the Internet, access requires a Zero Trust approach, and security is applied at a very granular level to all systems and services.

By doing this, we can vastly improve the security of each single system or service, and we gain the possibility to monitor every single access or transaction. But in doing so, we concentrate activities crucial for security into single points: what can happen to systems and services if the central management console is taken over? What can happen to systems and services if the central authentication service is infiltrated?