Phishing in the Clouds

Again on Phishing, this time with a new twist.

We all know, and by now use, complex SaaS Cloud services such as Microsoft’s Office 365, Google’s G Suite, Amazon’s services and so on. They are all very modular, meaning that there are multiple data storage services and multiple application services to choose from and use. Often a user must authorise a Cloud App to access her/his own data, and the App can also come from an external provider (a “partner” of the service). The authorisation is usually implemented with OAuth which, in a few words, is a secure delegated-access protocol based on the exchange of authorisation tokens.

So what is the scam? Simple: you receive an email which looks like it comes from [name your Cloud provider here] and asks you to authorise an App (which looks authentic) to access your data. You do not need to enter any password, since you are already logged in to your Cloud service platform: you just have to click the button, and that’s it!
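To see why a single click is enough, here is a rough sketch of what such an OAuth 2.0 consent request can look like (the endpoint, App id and scopes below are generic placeholders of mine, in the style of RFC 6749, not those of any specific provider):

    # Rough sketch of an OAuth 2.0 authorisation request (RFC 6749 style).
    # All names below are illustrative placeholders, not a real provider's.
    from urllib.parse import urlencode

    params = {
        "client_id": "attacker-registered-app-id",    # the malicious "partner" App
        "response_type": "code",
        "redirect_uri": "https://attacker.example/callback",
        "scope": "mail.read files.read.all",          # broad access to the victim's data
    }
    consent_url = "https://login.cloudprovider.example/oauth2/authorize?" + urlencode(params)
    print(consent_url)

Since the victim is already logged in, one click on “Accept” is enough for the provider to issue the attacker’s App a token with those scopes: no password is ever typed or stolen, which is exactly why this kind of phishing is so effective.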

You have given access to all your Cloud data and services to a fraudster, who can now read your data and act on your behalf!

For more details, read for example this article by Ars Technica.

More on Security Patching and Updates

These days I keep coming back to the “security patching and updates” issue. So I am going to add another couple of comments.

The first is about Ripple20 (here is the official link, but the news is already widespread), which carries an impressive set of vulnerabilities, some with a CVSS v3 base score of 10.0. The question is again:

how can we secure all of these millions or billions of vulnerable devices, since it seems very likely that security patching is not an option for most of them?

The second one is very hypothetical, that is, in the “food for thought” class.

Assume, as some say, that in 2030 Quantum Computers will be powerful enough to break RSA and other asymmetric cryptographic algorithms, and that at the same time (or just before) Post-Quantum Cryptography will deliver new secure algorithms to replace RSA and friends. At first sight all looks fine: we will just have to do a lot of security patching/updating of servers, clients, applications, CA certificates, credit cards (hardware), telephone SIMs (hardware), security keys (hardware), Hardware Security Modules (HSMs) and so on and on… But what about all those micro/embedded/IoT devices in which the current cryptographic algorithms are baked in? And all of those large devices (like aircraft, but also cars) which have been designed with cryptographic algorithms baked into them (no change possible)? We will probably have to choose between living dangerously and buying a new one. Or we could be forced to buy a new one, if the device can no longer work because its old algorithms are no longer accepted by the rest of the world.
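A first step of such a migration will be an inventory of where quantum-vulnerable algorithms are in use. As a minimal sketch of what that could look like, the following flags quantum-vulnerable public keys in an X.509 certificate (assuming Python with the pyca/cryptography package; the file name is a placeholder):

    # Minimal sketch: flag quantum-vulnerable public keys in a certificate.
    # Assumes the pyca/cryptography package; "server.pem" is a placeholder path.
    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import dsa, ec, rsa

    with open("server.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    key = cert.public_key()
    if isinstance(key, (rsa.RSAPublicKey, ec.EllipticCurvePublicKey, dsa.DSAPublicKey)):
        # RSA, ECC and DSA all rest on problems a large quantum computer could solve.
        print(cert.subject.rfc4514_string(), "- quantum-vulnerable:", type(key).__name__)
    else:
        print("No classical asymmetric key recognised; check the algorithm manually.")

On servers and clients this is just software; the point above is that on devices with the algorithms baked in, no such inventory-and-replace cycle is possible.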

PS. Concerning Quantum Computers, as far as I know nobody claims that a full Quantum Computer will certainly be functioning by 2030; this is only the earliest possible estimate of arrival, and it could take much, much longer, or even never happen!

PS. I deliberately do not want to consider the scenario in which full Quantum Computers are available and Post-Quantum Cryptography is not.

Patching, Updating (again) and CA certificates

Well, it is not the first time I have written about the need to security-patch and update all software, and about the problems of not doing so. This is even more problematic for IoT and embedded software.

I just read this article in The Register which describes a real problem that is starting to show itself. In a few words, we have been massively using CA certificates to establish encrypted (SSL/TLS) connections for approximately 20 years. Client software authenticates servers by trusting server certificates signed by a list of known Certification Authorities, and each client has the public root certificates of these CAs installed, usually in libraries.

Now what happens if the root certificate of a CA expires?

This is not a problem for servers: their certificates are renewed periodically, and more recent server certificates are signed by the new CA root.

Clients have some time, typically a few years, to acquire the new CA root certificates, because root certificates last many years, usually around 20. But what if IoT or embedded devices never get an update? Think about cameras, smart televisions, refrigerators, light bulbs and any other kind of gadget which connects to an Internet service. When the old CA root certificate expires, they can no longer connect to the server and may simply stop working. The only possible way out is to manually update the software, if a software update is available and if an update procedure is possible. Otherwise, the only option left is to buy a new device!
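For devices one does control, the expiry dates of the installed roots can at least be inventoried. A minimal sketch (assuming Python with a recent pyca/cryptography and the certifi trust bundle; any PEM bundle would do) that lists the roots expiring within the next five years:

    # Minimal sketch: list CA roots in a PEM trust bundle that expire "soon".
    # Assumes pyca/cryptography >= 42 (for the *_utc accessors) and certifi.
    from datetime import datetime, timedelta, timezone

    import certifi
    from cryptography import x509

    with open(certifi.where(), "rb") as f:
        roots = x509.load_pem_x509_certificates(f.read())

    horizon = datetime.now(timezone.utc) + timedelta(days=5 * 365)
    for cert in roots:
        if cert.not_valid_after_utc < horizon:
            print(cert.not_valid_after_utc.date(), cert.subject.rfc4514_string())

The catch, of course, is exactly the one above: on a light bulb or a camera there is usually no way to run such a check, let alone install the new roots.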

Security and Cryptography do not always go Hand in Hand with Availability

I have been reading this article by Ars Technica on smartphone 2FA Authenticator Apps, and put it together with some thoughts about Hardware Security Keys (implementing the U2F [CTAP1] standard by the FIDO Alliance). In both scenarios, the security of the second authentication factor (2FA) is based on the confidentiality of a cryptographic secret key, held securely in the smartphone in the first case and in the USB hardware key in the second. In other words, the secret key cannot be read, copied or transferred to any other device, just as with any smart card (or integrated circuit card, ICC) or any Hardware Security Module (HSM).

Good for Security, but what about Availability?

Consider this very simple (and secure) scenario: my smartphone with a 2FA Authenticator App (or, alternatively, my U2F USB key) utterly breaks down, and I have no way, at least within the usual economic and time constraints, to recover it.

Now what about access to all those services which I set up in a very secure way with 2FA?

There is no access and no way immediately available to recover it! (But see below for more complete scenarios.)

The only possibility is to get new hardware and go through a usually manual and lengthy process to regain access to each account and to configure the new 2FA hardware.

So Availability is lost, and it could take quite some time and effort to regain it.

The article by Ars Technica mentioned above describes how some 2FA Authenticator Apps provide a solution to this problem: the idea is to install the App on multiple devices and securely copy the secret key to each one of them. But now the secret key, even if itself encrypted, is no longer “hardware protected”, since it can be copied and transferred. So Security/Confidentiality goes down, but Availability goes up.
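Many Authenticator Apps implement TOTP (RFC 6238), where the secret is a shared symmetric key, which makes the trade-off easy to see: any device holding a copy of the secret produces the same codes. A minimal sketch (assuming Python with the pyotp package; the two “phones” are just variables):

    # Minimal sketch: one TOTP secret copied to two "devices" (RFC 6238).
    # Assumes the pyotp package; in real life the secret travels in the
    # enrolment QR code or in the App's (encrypted) backup.
    import pyotp

    secret = pyotp.random_base32()
    phone_a = pyotp.TOTP(secret)
    phone_b = pyotp.TOTP(secret)   # the copy: Availability up, hardware protection gone

    code_a, code_b = phone_a.now(), phone_b.now()
    print(code_a, code_b)          # identical (barring a 30-second window boundary)

Whoever obtains that Base32 string, from a backup or from any of the devices, owns the second factor: this is precisely the Confidentiality cost of the Availability gain.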

This is obviously impossible as such with U2F hardware keys. In this case an alternative is to register multiple U2F hardware keys with all accounts and services, so that if one key breaks one can always use a backup key to access them. (For more info, see for example this support page by Google.)

On the Latest Bluetooth Impersonation Attack

Details of a new attack on Bluetooth have just been released (see here for its website). From what I understand, it is based on two weaknesses of the protocol itself.

A quick description seems to be the following (correct me if I have misunderstood something).

When two Bluetooth devices (Alice and Bob) pair, they establish a common secret key while mutually authenticating each other. The common secret key is kept by both Alice and Bob to authenticate each other in all future connections. Up to here, all is fine.

Now it is important to notice the following points about what happens when Alice and Bob establish a new connection after pairing:

  • the connection can be established using a “Legacy Secure Connection” (LSC, less secure) or a “Secure Connection” (SC, secure), and either Alice or Bob can request to use LSC;
  • one of the devices acts as Master and the other as Slave; a connection can be closed and restarted, and either Alice or Bob can request to act as Master;
  • in a “Legacy Secure Connection” the Slave must prove to the Master that it has the common secret key, but the Master is not required to prove to the Slave that it also has it (Authentication weakness);
  • in a “Secure Connection” either Alice or Bob can close the connection and restart it as a “Legacy Secure Connection” (Downgrade weakness).

Now Charlie wants to intercept the Bluetooth connection between Alice and Bob: first he listens to their connection and learns their Bluetooth addresses (which are public). Then Charlie jams the connection between Alice and Bob, connects as a Master to Alice using LSC and Bob’s Bluetooth address, and connects as a Master to Bob using LSC and Alice’s Bluetooth address. Since Charlie is Master with respect to both Alice and Bob, and since he can always downgrade the connection to LSC, he does not have to prove to either Alice or Bob that he knows their common secret key. In this way Charlie is able to perform a MitM attack on the Bluetooth connection between Alice and Bob (obviously this description is very high level; it is just a sketch of what is going on).
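To make the two weaknesses concrete, here is a toy model of the flawed decision logic as I understand it (plain Python, no real Bluetooth stack involved; all names are mine):

    # Toy model of the two protocol weaknesses; no real Bluetooth involved.
    def must_prove_key(role: str, connection_type: str) -> bool:
        """Does this side have to prove knowledge of the common secret key?"""
        if connection_type == "SC":    # Secure Connection: both sides authenticate
            return True
        if connection_type == "LSC":   # Legacy Secure Connection: only the Slave does
            return role == "slave"
        raise ValueError(connection_type)

    # Downgrade weakness: Charlie restarts each link as LSC.
    # Authentication weakness: as Master on an LSC link he is never challenged.
    for victim in ("Alice", "Bob"):
        if not must_prove_key(role="master", connection_type="LSC"):
            print(f"Charlie impersonates {victim}'s peer without knowing the key")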

The bad point about this is that it is a weakness of the protocol itself, so all existing Bluetooth implementations are subject to it. The good point is that the fix should not be too difficult, except for the fact that many (too many) devices cannot be patched! Fortunately, this attack does not seem to apply to Bluetooth LE, but I still expect that most Bluetooth devices subject to it will never be patched.

But we should also consider the real impact of this attack: to perform it, the attacker Charlie must be physically close enough to the two devices (Alice and Bob) and have dedicated hardware (even if not so expensive), so this limits the possible attack scenarios. Moreover, this attack can have important consequences if Bluetooth is used to transfer very sensitive information or for critical applications (for example in the health sector). In other words, I would not worry when using Bluetooth to listen to music from my smartphone, but I would worry if Bluetooth were used in some mission-critical application.

What’s happening in Cyber/IT Security?

Comparing currently reported cyber/IT security threats, attacks and incidents to what happened a few years ago, it seems to me that something has surely changed (I must warn that these conclusions are not based on statistics but on reading everyday bulletins and news).

On one side, security has surely improved: vulnerabilities are reported and fixed, patches are applied (at least more often than before), and security policies, standards and practices are making a difference. Still, managing passwords and properly configuring systems and services exposed on the Internet remain very difficult tasks, too often performed without the required depth.

But security has improved, which also means that attackers have been moving to easier and more lucrative approaches, which mostly have to do with the “human interface”. In other words: fraud.

The first example is ransomware: the attacker is able to access the victim’s systems, copy vast amounts of data, then encrypt or remove it, and finally demand a ransom not only to return the data but also to avoid making it public on the Internet. Since everybody is getting better at making backups, the important point here is the “making it public on the Internet”: the ransom is demanded more to prevent sensitive data from being published than to restore the systems.

The second example is targeted phishing attacks, Business Email Compromise and similar scams, in which the attacker impersonates a well-known or important person, by writing emails and letters or making phone calls, to convince typically a clerk, but in some cases also a manager, to send a large amount of money to the wrong bank account.

Neither of these two types of attack is new, but now they fill the news daily. Even if cyber/IT security can still improve tremendously, there have been, and there are, notable security improvements, with the result that attacks are more and more often aimed at the weakest link: the human user.

Perception of IT Security in Times of Crisis

What happens to IT Security in times of crisis? The change of behaviour and the rush to provide new services can be detrimental to IT Security. We are witnessing it these days. From a broad perspective, there are at least two major sides to it: IT service providers and (new) users.

In a crisis like COVID-19, which we are experiencing right now, IT service providers are asked to deliver new services in an extremely short time and to improve the already existing ones. But in IT, haste does not usually go well with security. Take as an example what has just happened in Italy: the government charged the state pension fund (INPS) with providing COVID-19 subsidies to no fewer than 5 million people (INPS usually pays pensions to more than 16 million people in Italy). Obviously, due to the full lockdown of the country, the procedure had to be done online. So the new IT services to request and manage the subsidies went online and, according to the news, an epic IT security failure followed: availability and, in part, integrity and confidentiality were violated.

Is it possible in IT to manage extremely tight and aggressive schedules and security at the same time? I believe so, but only if security is embedded in everything we do in IT.

But I believe that IT security, at least for IT as it is nowadays, also requires users to be aware of it and to behave accordingly. Due to COVID-19, we have all been required, or strongly advised, to move many activities to online services, from work to school, to shopping and so on. But the physical world has very different characteristics from the virtual Internet world.

For example, consider the case of a small local pro-bono association whose members used to meet in person periodically: access to the room is free, and there is freedom of speech and of contribution to the association. Now think about moving these meetings to an audio/video conference on the Internet, publicly announced, with free entrance and free access for all participants to audio, video, chat etc.: is this the same thing?

Definitely not.

The rules and behaviours which apply to a physical meeting of a small group of people in a physical room, announced with paper leaflets distributed on the streets, surely do not apply to an audio/video/chat conference on the Internet. What can happen if you pretend they do? Instead of the usual 20 participants, hundreds or thousands of people could show up, and some could abuse the free access to audio/video/chat etc. for whatever purpose, including drugs, malware, pornography and so on.

Is this a failure of the IT technology, of the service provider or of the lack of security awareness of the (new) user?

How long will it take humanity to really comprehend the difference between the physical and the virtual world?

Hacking Satellites

Not a feat for everybody, but hacking satellites, either by connecting directly to them or by intruding into the ground computers that manage them, could have dire consequences: from shutting them down, to burning them up in space, spiralling them into the ground or turning them into ballistic weapons.

Even if the news has not been fully confirmed and details are sketchy, it seems that some incidents have already happened, starting from 1998: see the ROSAT satellite history, the more recent events described here, here and here, and here for a recent review.

Independently of whether these incidents are confirmed, the remote control of satellites, in particular of small ones also built with off-the-shelf / commodity components, coupled with the difficulty (if not impossibility) of applying security patches, can make their “Cybersecurity” risks quite relevant, and effective counter-measures quite difficult. On the other side, given the costs of building a satellite and sending it into space, it is likely that these “Cybersecurity” risks are considered and effectively managed in the planning and development phases of a satellite’s life-cycle, or at least so we hope.

CacheOut: another speculative execution attack to CPUs

It was two years ago that we learned about Spectre and Meltdown, the first speculative-execution attacks on CPUs, which exploit hardware “bugs”, in particular the speculative and out-of-order execution features of modern CPUs. In the last two years a long list of attacks has followed these initial two, and CacheOut is the latest.

CacheOut (here is its own website, and here an article about it) builds on and improves over previous attacks, working around countermeasures like the microcode updates provided by CPU makers and Operating System patches. On Intel CPUs, the CacheOut attack makes it possible to read data belonging to other processes, including secret encryption keys, data from other Virtual Machines and the contents of Intel’s secure SGX enclaves.

Besides the practical consequences of this attack and the availability and effectiveness of software countermeasures, it is important to remember that the only final solution to this class of attacks is the development and adoption of new, redesigned hardware CPUs. This will probably still take a few years, and in the meantime we should adopt countermeasures based on an evaluation of the risks, so as to isolate critical data and processes on dedicated CPUs or entire computers.
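In the meantime, on Linux one can at least check the kernel’s own report of known CPU vulnerabilities and active mitigations, which it exposes under sysfs on recent kernels. A minimal sketch (plain Python; which entries appear depends on the kernel version and the CPU):

    # Minimal sketch: print the kernel's report on known CPU vulnerabilities.
    # The sysfs path is standard on recent Linux kernels; entries vary by CPU.
    from pathlib import Path

    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name:25} {entry.read_text().strip()}")

Entries such as mds or tsx_async_abort cover related data-sampling attacks; a “Mitigation: …” line means the software countermeasures are active, not that the hardware flaw is gone.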