Again on Security Patching and Updates

These days I keep coming back to the “security patching and updates” issue, so I am going to add a couple more comments.

The first is about Ripple20 (see here for the official link, though the news is already widespread), a set of vulnerabilities some of which carry an impressive CVSS v3 base score of 10.0. The question is again:

how can we secure all of these millions or billions of vulnerable devices, since it seems very likely that security patching is not an option for most of them?

The second one is very hypothetical, that is, it belongs to the “food for thought” class.

Assume, as some say, that by 2030 Quantum Computers will be powerful enough to break RSA and other asymmetric cryptographic algorithms, and that at the same time (or just before) Post Quantum Cryptography will deliver new secure algorithms to replace RSA and friends. At first sight all looks fine: we will just have to do a lot of security patching/updating of servers, clients, applications, CA certificates, credit cards (hardware), telephone SIMs (hardware), security keys (hardware), Hardware Security Modules (HSM) and so on and on… But what about all those micro/embedded/IoT devices into which the current cryptographic algorithms are baked? And all of those large systems (like aircraft, but also cars) which have been designed with cryptographic algorithms baked into them (no change possible)? We will probably have to choose between living dangerously and buying a new one. Or we could be forced to buy a new one, if the device cannot work anymore because its old algorithms are no longer accepted by the rest of the world.
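To make the point about baked-in algorithms more concrete, here is a purely illustrative Python sketch (all names and the dummy verifiers are hypothetical, not any real firmware API) of the “crypto agility” that updatable devices have and baked-in devices lack:

    # Purely illustrative sketch of "crypto agility": the signature
    # verification algorithm is looked up in an updatable registry instead
    # of being hard-coded. All names and dummy verifiers are hypothetical.

    def verify_rsa_stub(message: bytes, signature: bytes) -> bool:
        # Placeholder for a classical RSA verification routine.
        return signature == b"rsa:" + message

    # Registry of accepted algorithms: on an updatable device a software
    # update can extend it; on a "baked-in" device it is frozen forever.
    SIGNATURE_ALGORITHMS = {"rsa-sha256": verify_rsa_stub}

    def register_algorithm(name, verify_fn):
        SIGNATURE_ALGORITHMS[name] = verify_fn

    def verify(message, signature, algorithm):
        if algorithm not in SIGNATURE_ALGORITHMS:
            raise ValueError(f"unsupported algorithm: {algorithm}")
        return SIGNATURE_ALGORITHMS[algorithm](message, signature)

    # Year 203x: a post-quantum verifier is delivered via software update...
    register_algorithm("pqc-dilithium", lambda m, s: s == b"pqc:" + m)
    # ...but a device with RSA baked into silicon can never execute this line.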

PS. Concerning Quantum Computers, as far as I know nobody claims that a full Quantum Computer will be operational by 2030: this is only the earliest estimated arrival date, and it could take much, much longer, or it could even never happen!

PS. I deliberately do not want to consider the scenario in which full Quantum Computers are available but Post Quantum Cryptography is not.

Patching, Updating (again) and CA certificates

Well, it is not the first time that I write a comment about the need to security patch and update all software, and about the problems of not doing so. This is even more problematic for IoT and embedded software.

I just read this article on The Register, which describes a real problem that is starting to show itself. In a few words: for approximately 20 years we have been massively using CA certificates to establish encrypted (SSL/TLS) connections. Client software authenticates servers by trusting server certificates signed by a list of known Certification Authorities (CAs). Each client has installed, usually in libraries, the public root certificates of these CAs.

Now what happens if the root certificate of a CA expires?

This is not a problem for servers: their certificates are renewed periodically, and the more recent server certificates are signed under the new root CA.

Clients have some time, typically a few years, to acquire the new CA root certificates, because root certificates last many years, usually 20. But what if IoT or embedded devices never get an update? Think about cameras, smart TVs, refrigerators, light bulbs and any other kind of gadget that connects to an Internet service. When the old root CA certificate expires, they cannot connect to the server anymore and they can simply stop working. The only possible way out is to manually update the software, if a software update is available and if an update procedure is possible. Otherwise, the only option left is to buy a new device!
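As a minimal sketch of what such a device experiences (standard library only; the host name is a placeholder), here is how a client-side check of the presented certificate’s validity might look:

    # Minimal sketch: connect to a server using the locally installed root
    # store and check how long the presented leaf certificate remains valid.
    # "example.com" is a placeholder for the service the device depends on.
    import ssl, socket
    from datetime import datetime, timezone

    HOST = "example.com"

    ctx = ssl.create_default_context()   # uses the local root certificates
    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()

    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    print(f"Leaf certificate valid until {expires}")

    # If the *root* certificate in the local store has expired,
    # wrap_socket() itself fails with CERTIFICATE_VERIFY_FAILED -- exactly
    # what a forgotten IoT device experiences: it simply cannot connect.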

On the Latest Bluetooth Impersonation Attack

Details of a new attack on Bluetooth have just been released (see here for its website). From what I understand, it is based on two weaknesses of the protocol itself.

A quick description seems to be the following (correct me if I have misunderstood something).

When two Bluetooth devices (Alice and Bob) pair, they establish a common secret key while mutually authenticating each other. The common secret key is kept by both Alice and Bob to authenticate each other in all future connections. Up to here, all is fine.

Now it is important to notice the following points when Alice and Bob establish a new connection after pairing:

  • the connection can be established using a “Legacy Secure Connection” (LSC, less secure) or a “Secure Connection” (SC, secure), and either Alice or Bob can request to use LSC;
  • one of the devices acts as Master and the other as Slave; a connection can be closed and restarted, and either Alice or Bob can request to act as Master;
  • in a “Legacy Secure Connection” the Slave must prove to the Master that it has the common secret key, but the Master is not required to prove to the Slave that it also has it (Authentication weakness);
  • in a “Secure Connection” either Alice or Bob can close the connection and restart it as a “Legacy Secure Connection” (Downgrade weakness).

Now Charlie wants to intercept the Bluetooth connection between Alice and Bob: first he listens to their connection and learns their Bluetooth addresses (which are public). Then Charlie jams the connection between Alice and Bob, connects as a Master to Alice using LSC and Bob’s Bluetooth address, and connects as a Master to Bob using LSC and Alice’s Bluetooth address. Since Charlie is Master with respect to both Alice and Bob, and since he can always downgrade the connection to LSC, he does not have to prove to either Alice or Bob that he knows their common secret key. In this way Charlie is able to perform a MitM attack on the Bluetooth connection between Alice and Bob (obviously this description is very high level; it just sketches the idea of what is going on).
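To make the interplay of the two weaknesses a bit more concrete, here is a toy Python model (purely illustrative, in no way the real Bluetooth stack) of why the downgrade plus the one-way authentication lets Charlie in:

    # Toy model (not the real Bluetooth stack) of the two protocol weaknesses.

    class Device:
        def __init__(self, name, link_key):
            self.name = name
            self.link_key = link_key   # secret key established at pairing

        def accept_connection(self, peer_addr, secure_connection, master_proof):
            # Downgrade weakness: the connecting peer may force a
            # Legacy Secure Connection instead of a Secure Connection.
            if secure_connection:
                return master_proof == self.link_key   # mutual authentication
            # Authentication weakness: on LSC only the *Slave* must prove
            # knowledge of the key, so a connecting Master is never challenged.
            return True

    alice = Device("Alice", link_key=b"shared-secret")
    bob = Device("Bob", link_key=b"shared-secret")

    # Charlie knows only the public Bluetooth addresses, not the link key:
    # he connects to each victim as Master, downgraded to LSC, with no proof.
    ok_a = alice.accept_connection("addr:Bob", secure_connection=False, master_proof=None)
    ok_b = bob.accept_connection("addr:Alice", secure_connection=False, master_proof=None)
    print("Charlie is man-in-the-middle:", ok_a and ok_b)   # -> True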

The bad news is that this is a weakness of the protocol, so all existing Bluetooth implementations are subject to it. The good news is that the fix should not be too difficult, except for the fact that many (too many) devices cannot be patched! Fortunately this attack seems not to apply to Bluetooth LE, but I still expect that most Bluetooth devices subject to this attack will never be patched.

But we should also consider the real impact of this attack: to perform it, the attacker Charlie must be physically close enough to the two devices (Alice and Bob) and have dedicated hardware (even if not very expensive), which limits the possible scenarios. Moreover, the attack can have important consequences if Bluetooth is used to transfer very sensitive information or for critical applications (for example in the health sector). In other words, I would not worry when using Bluetooth to listen to music from my smartphone, but I would worry if Bluetooth were used in some mission-critical application.

Another nail in the coffin of SHA1

This recent paper, “From Collisions to Chosen-Prefix Collisions – Application to Full SHA-1” by G. Leurent and T. Peyrin, puts another nail in the coffin of SHA1. The authors present a chosen-prefix collision attack on SHA1 which allows client impersonation in TLS 1.2 and peer impersonation in IKEv2, with an expected cost between 1.2 and 7 million US$. The authors expect that it will soon be possible to bring the cost down to 100,000 US$ per collision.

As far as CA certificates are concerned, the attack allows an adversary, at the same cost, to create a rogue CA and forge certificates whose signatures use the SHA1 hash, but only if the real CA does not randomize the serial number field in its certificates.
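The mitigation on the CA side is cheap: a chosen-prefix collision requires the attacker to predict the exact bytes the CA will sign, and a random serial number makes that prefix unpredictable. A minimal sketch, assuming (as far as I know) the CA/Browser Forum baseline of at least 64 bits of entropy:

    # Sketch: why serial-number randomization defeats chosen-prefix
    # collisions. The attacker must know in advance the exact to-be-signed
    # bytes; a CSPRNG serial inserted by the CA makes that impossible.
    import secrets

    def random_certificate_serial() -> int:
        # At least 64 bits of CSPRNG output, which (I believe) is what the
        # CA/Browser Forum baseline requirements mandate for serial numbers.
        return secrets.randbits(64)

    print(hex(random_certificate_serial()))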

It has been known for almost 15 years that SHA1 is not secure: NIST deprecated it in 2011, and it should not have been used after 2013, being replaced by SHA2 or SHA3. By 2017 all browsers should have removed support for SHA1, but the problem, as always, is with legacy applications that still use it: how many of them are still out there?

Patching timing is getting tight

The US Cybersecurity and Infrastructure Security Agency (CISA) has recently revised its guidelines (actually a Binding Operational Directive for US Federal Agencies) on patching (see the Directive here and a comment here).

Now vulnerabilities rated “critical” (according to CVSS version 2) must be remediated within 15 days (previously it was 30 days), and “high” vulnerabilities within 30 days, counting from the date of initial detection by CISA's weekly scanning.
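As a hedged illustration of the bookkeeping this implies (the findings below are invented; the 15/30-day deadlines are those of the Directive):

    # Sketch: computing remediation deadlines per the revised CISA
    # directive (15 days for "critical", 30 days for "high", counted from
    # the date of initial detection). The findings below are invented.
    from datetime import date, timedelta

    REMEDIATION_DAYS = {"critical": 15, "high": 30}

    def remediation_deadline(detected: date, severity: str) -> date:
        return detected + timedelta(days=REMEDIATION_DAYS[severity])

    findings = [
        ("CVE-XXXX-0000 on web-frontend", date(2019, 5, 6), "critical"),
        ("CVE-XXXX-1111 on mail-relay",   date(2019, 5, 6), "high"),
    ]
    for name, detected, severity in findings:
        print(f"{name}: remediate by {remediation_deadline(detected, severity)}")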

With such a short time between detection and remediation, applying patches in time is going to be difficult: patches may not yet be available from the vendors, and in any case time is needed to test them in each IT environment.

This implies that processes must be in place to design, test and deploy temporary countermeasures that reduce the risks due to these vulnerabilities to an acceptable level. And these processes must be fast: they should take at most a few days.

More Thoughts on Maintenance, Updates and Fixing

The issue of maintenance, updates and software fixing actually deserves a few more considerations.

For a long time now, security experts have been saying that software updates are essential for the security of operating systems and applications. Long ago, instead, software updates were considered only when new features or feature updates were needed. It took a long time, but by now, at least for mainstream applications, periodic (e.g. daily, weekly or monthly) software updates, in particular for security issues, are standard.

Still, there are many unresolved issues. First of all, there are still systems which are not updated regularly: among them are some “mission critical” ICT systems considered so critical that updates are not applied for fear of breaking the system or interrupting the service, and, at the opposite end, some consumer embedded systems considered of so little value that a security update feature has been dropped altogether. (In my previous post I briefly mentioned these two classes of systems.)

But there are two other aspects that must be considered. The first one is the time it takes between the release of a security patch or update by the software vendor or maintainer and its installation on the system. As in the case of both Bash ShellShock and OpenSSL Heartbleed, patches and updates are sometimes released out of the normal schedule, and they should be applied “immediately”. To apply “immediately” a patch released out of the normal schedule, one should at least have a way:

  1. to know “immediately” that the patch has been released
  2. to obtain and apply “immediately” the patch to the systems.

Obviously, to do this one needs to have some established emergency security procedure in place; one cannot rely on someone reading or hearing the news somewhere and deciding on her own to act as a consequence.
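A minimal sketch of the first requirement, i.e. knowing “immediately” that a patch is out (the feed URL and the JSON layout are pure assumptions, not a real vendor API; a real deployment would poll the vendor’s actual advisory feed):

    # Hypothetical sketch: poll a security-advisory feed and alert when an
    # out-of-schedule patch touches software we run. The URL and the JSON
    # shape are assumptions, not a real vendor API.
    import json, time, urllib.request

    ADVISORY_FEED = "https://security.example.org/advisories.json"  # hypothetical
    DEPLOYED_PACKAGES = {"bash", "openssl"}

    def check_feed(seen_ids):
        with urllib.request.urlopen(ADVISORY_FEED, timeout=10) as resp:
            advisories = json.load(resp)
        for adv in advisories:
            if adv["id"] in seen_ids or adv["package"] not in DEPLOYED_PACKAGES:
                continue
            seen_ids.add(adv["id"])
            # Here the emergency procedure kicks in: open a ticket,
            # page the on-call team, start the assessment, etc.
            print(f"ALERT: {adv['package']} advisory {adv['id']}: {adv['title']}")

    seen = set()
    while True:
        check_feed(seen)
        time.sleep(15 * 60)   # poll every 15 minutes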

The second aspect, again relevant in the two incidents mentioned above, is what to do in the time between the announcement of the existence of the bug and the availability of the patch, during which it is in effect a 0-day vulnerability. Again, there must be some well-established emergency security procedure which indicates what to do. The minimum is to assess the security risk of running the systems unpatched, to investigate whether there are other countermeasures (like signatures for IPS or other network filters, detailed monitoring, etc.), and in the extreme case to simply turn off the service until the patch is available.

It seems easy, but it is not so easy to put into practice.

Why is the Bash ShellShock bug so threatening?

This year we have already had at least two bugs which someone claims “threatened to take down the Internet”: OpenSSL Heartbleed and Bash ShellShock.

My view is probably unconventional, but by now I believe that, in particular for the Bash ShellShock bug, a much bigger security problem lies elsewhere.

Indeed, my personal experience with Bash ShellShock has been that most affected systems were fixed overnight by the next automatic scheduled system update. Some critical systems were manually updated in a couple of minutes without any service interruption. So all systems were fixed as part of the usual maintenance of ICT systems.
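For the record, checking whether a host was affected takes a single well-known test: exporting a function definition with trailing commands in an environment variable. A vulnerable Bash executes the trailing command while importing the variable; here it is wrapped in Python for convenience:

    # The classic ShellShock check, wrapped in Python. A vulnerable Bash
    # executes the trailing command while importing the environment
    # variable, so "vulnerable" appears in the output; a patched Bash
    # prints only the test line.
    import subprocess

    result = subprocess.run(
        ["bash", "-c", "echo this is a test"],
        env={"x": "() { :;}; echo vulnerable", "PATH": "/usr/bin:/bin"},
        capture_output=True, text=True,
    )
    print("VULNERABLE" if "vulnerable" in result.stdout else "patched")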

So where is the problem? Why so much hysteria about it? How is it possible that even very large and well-known systems have been attacked by exploiting this bug?

Since it seems that as of today there are still many (too many) systems vulnerable to both OpenSSL Heartbleed and Bash ShellShock, we have to conclude that the problem does not lie with the availability of the fix but with the maintenance procedures of the systems themselves.

It is then not so much a problem of the bug itself, but of how the systems which adopt this software (and, at this point, any software) are managed and designed.

I can see two major areas where there can be maintenance problems:

  • large systems built with the “if it is not broken, do not fix it” approach: in this case maintenance, even security patching, is often not applied, and update and patching procedures are complex and rarely executed; of course, if the system has not been designed or configured to be easily updated or patched, any vulnerability in any software component can become a really big problem;
  • similar, but even worse, are embedded systems designed with the “build it once and forget about it” approach: in this case the very idea of maintenance has not been considered, or it exists only in the form of re-installing the full software (but only if you know how to do it, and if a fixed new version of the full software has been made available).

It seems to me that, rather than blaming those who write and maintain these software components, we should first think about how we maintain the software on our own systems: what the procedures are for normal updates, for security updates, and so on.