GSMA and Security of IoT

GSMA has just announced the availability of the “GSMA IoT Security Guidelines”. Potentially this could have quite a positive impact on the security of IoT devices. Even if GSMA speaks only for the mobile telecommunications industry, its importance in today’s communications market is undeniable. The idea is that companies and providers who plan to connect new IoT devices to the network will follow these Security Guidelines and so provide at least some level of security for the devices’ communications.

Let’s hope that this turns out to be a first real step towards IT security for the IoT; but first we need to read and understand these guidelines, and then see whether they are actually implemented and whether their implementation provides the expected benefits.

On the Privacy of Webcams and Security of IoTs

The article ‘“Internet of Things” security is hilariously broken and getting worse’ on Ars Technica shows how, using Shodan, one can find pictures from millions of open webcams on the internet.
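
Just to give an idea of how low the bar is, here is a minimal sketch of this kind of search using the official shodan Python library (pip install shodan). The API key and the query string are placeholders of mine, not the queries used in the article:

    # Minimal Shodan search sketch: list a few hosts whose banner
    # matches a webcam-related query. Key and query are placeholders.
    import shodan

    api = shodan.Shodan("YOUR_API_KEY")          # hypothetical API key
    results = api.search("webcamxp")             # illustrative query string

    print("Results found:", results["total"])
    for match in results["matches"][:5]:
        print(match["ip_str"], match.get("port"))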

The issue is not new, but the scale of the problem is threatening. As the article nicely points out:

  • people do not care about the security or privacy features of the devices they buy;
  • what matters to them is cost and ease of use (which means it is better if there is no password to deal with at all);
  • only to throw the device away the day they find themselves on Shodan or in a newspaper picture, and say “never again”.

But who is going to do something about it? Who should defend people’s privacy and the security of the internet? Should the IoT market be regulated, self-regulated, or something in between?

Marketing and Internet Surveillance

The blog post “The Internet of Things that Talk About You Behind Your Back” by Bruce Schneier is really creepy. But it isn’t new; it is just getting worse.

In IT security, the problem of undetected covert communication channels is old and well known. It is also well known that internet marketing adopts approaches and technologies that are sometimes close to covert channels.
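
To make this concrete, here is a toy simulation of a timing covert channel: a “sender” leaks bits through how long it takes to respond, and a “receiver” decodes them by measuring the delays. The delays and the threshold are arbitrary assumptions of mine:

    # Toy timing covert channel: bits are encoded as response delays.
    import time

    SECRET_BITS = "1011"

    def sender(bit):
        # a long delay encodes 1, a short delay encodes 0
        time.sleep(0.2 if bit == "1" else 0.05)

    def receiver():
        decoded = []
        for bit in SECRET_BITS:      # in reality only the timing is observed
            t0 = time.monotonic()
            sender(bit)
            elapsed = time.monotonic() - t0
            decoded.append("1" if elapsed > 0.1 else "0")
        return "".join(decoded)

    print("leaked:", receiver())     # prints: leaked: 1011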

What is worrisome is the extent this is now reaching. There are various aspects to it.

One is the legal aspect, that is, what legislation allows and how much it protects citizens from excesses: it would be interesting to compare the current legislation of different countries, from the USA to the EU, Canada, Brazil, Russia, India, China, Japan, etc.

On the technical side, devices like PCs and some tablets allow the user some choices, such as using different browsers (even Tor), managing cookies (in particular third-party cookies), and so on, even if it is usually difficult to be truly anonymous on the internet unless extra precautions are taken (and many users will not be able to adopt such precautions).

On smaller devices, like smartphones and “smart” objects such as watches, the choices are much more limited, but with a little effort the user can still do something to protect him/herself from this kind of surveillance, for example by restricting app permissions or resetting the advertising identifier.

On IoT devices, at the moment, there seems to be nothing the user can do: either use the device and be tracked, or do not use or buy it at all. For these devices, legislation could be the only way of giving the user some choice.

Finally, how many users are even aware of this kind of Internet Surveillance? How many would object if they knew?

IT Security, Human Behaviour and Normalization of Deviance

Bruce Schneier has a quite interesting blog post (read here) on “Normalization of Deviance”, that is, the human behaviour by which errors, warnings, and violations of rules or of acceptable practice become the norm.

We all know that in IT security people are usually the weakest link. We should also be careful that IT security professionals do not fall into the “normalization of deviance” syndrome themselves. Summarized in the extreme, it is the attitude by which, if something bad has happened, like an intrusion into an IT system, but it had no real consequences and caused no real damage, then that kind of event can be ignored from now on.

This is a pretty dangerous human behaviour but, unfortunately, as discussed by Schneier and the sociologists who study this field, quite a common one.

Writing Software and Security Bugs

Writing software is really hard: not only is it quite difficult to implement the functionalities that customers and final users desire, and sometimes require, but it is also extremely difficult to write bug-free software, free from both functionality bugs and security bugs. (And it is not always easy to understand whether there is a difference, and what the difference is, between functionality bugs and security bugs.)
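
As a hypothetical illustration of how the two kinds of bugs can coincide, consider a file-serving function that forgets to normalize paths: a plain functionality bug that is, at the same time, a classic security bug (path traversal). The names and paths below are made up:

    # A functionality bug that is also a security bug: forgetting to
    # normalize the requested path lets "../" escape the document root.
    import os

    BASE = "/srv/files"                      # hypothetical document root

    def read_file_buggy(name):
        # BUG: name = "../../etc/passwd" escapes BASE (path traversal)
        with open(os.path.join(BASE, name)) as f:
            return f.read()

    def read_file_fixed(name):
        path = os.path.realpath(os.path.join(BASE, name))
        if not path.startswith(BASE + os.sep):
            raise ValueError("path escapes the document root")
        with open(path) as f:
            return f.read()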

Unfortunately, except for software developers (and not even all of them), the fact that writing software is quite hard either comes as a surprise or is just plainly impossible to accept. How much harder can building an engine really be than writing the software that pilots an airplane? (Consider, moreover, that today most of the work of building an engine is itself done with software.)

Here I have collected a random selection of recent news items from The Register, relevant in different ways to this subject:

One of the first examples of IoT security risks

Among IT practitioners there are a lot of ideas and discussions about the “Internet of Things” (IoT) and the security risks associated with it.

While the IoT promises many positive and useful developments, its security aspects are very difficult to manage, to the point of putting a very big question mark over the very idea of the IoT.

One example is described in the research “House of Keys: Industry-Wide HTTPS Certificate and SSH Key Reuse Endangers Millions of Devices Worldwide” published by SEC Consult, which shows that many hosts, typically home and SOHO routers for internet access, use the same cryptographic keys; since these keys are public and well known, anyone can impersonate such hosts, and anyone who can intercept their traffic can decrypt it.
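
A rough sketch of how this kind of key reuse could be detected, by collecting SSH host keys with ssh-keyscan and counting how often the same key appears; the host list is a placeholder, and this is not necessarily the methodology used by SEC Consult:

    # Collect SSH host keys from a list of hosts and flag shared keys.
    import subprocess
    from collections import Counter

    hosts = ["192.0.2.1", "192.0.2.2"]       # placeholder addresses

    keys = Counter()
    for host in hosts:
        try:
            out = subprocess.run(
                ["ssh-keyscan", "-T", "3", host],
                capture_output=True, text=True, timeout=10,
            ).stdout
        except subprocess.TimeoutExpired:
            continue
        for line in out.splitlines():
            parts = line.split()             # "host key-type base64-key"
            if len(parts) >= 3:
                keys[(parts[1], parts[2])] += 1

    # Any key seen on more than one host is shared: exactly the problem
    # described in the "House of Keys" research.
    for (ktype, key), count in keys.items():
        if count > 1:
            print(ktype, "key shared by", count, "hosts")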

Even if the impact of this vulnerability is probably not very high, it seems extremely difficult to fix: new devices can be corrected, but the millions already in use will probably never be patched and will remain active for a few more years.

Even more worrisome is that these are IT devices developed, built, and sold by IT companies that should know about IT and IT security. What will happen when billions of devices developed, built, and sold by non-IT companies are connected to the internet (the real IoT)?

Homomorphic encryption and trusting the Clouds

Homomorphic encryption is an old idea, but only in 2009, with the work of Gentry, did it start to have some possible practical applications. Since then there have been quite impressive improvements in this field of cryptographic research, also due to the need to improve the security of data managed by Cloud systems.

In brief, homomorphic algorithms are cryptographic algorithms that make it possible to perform computations, like sums, multiplications, searches, etc., on encrypted data, producing encrypted results, without knowing the encryption key.
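
As a toy illustration of the principle (not of the schemes used in practice), unpadded “textbook” RSA is multiplicatively homomorphic: multiplying two ciphertexts gives a ciphertext of the product of the plaintexts. The tiny parameters below are for demonstration only and offer no security whatsoever:

    # Textbook RSA is multiplicatively homomorphic (toy parameters,
    # totally insecure; requires Python 3.8+ for pow(e, -1, m)).
    p, q = 61, 53
    n = p * q                                # 3233
    e = 17                                   # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))        # private exponent

    def enc(m): return pow(m, e, n)
    def dec(c): return pow(c, d, n)

    a, b = 7, 6
    # Whoever multiplies the ciphertexts does not need to know d:
    c_prod = (enc(a) * enc(b)) % n
    assert dec(c_prod) == a * b
    print(dec(c_prod))                       # 42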

It should be obvious that these algorithms would be very useful for Cloud applications, since the owner of the data would be able to use the data remotely while keeping the data, and the results of the computations, always encrypted in the Cloud application.

Unfortunately homomorphic encryption is not yet ready for general use, but an interesting research paper by Microsoft Research has just appeared, announcing the release of SEAL (Simple Encrypted Arithmetic Library), a library for using homomorphic encryption in bioinformatics, genomics, and other research areas.

Cryptography is too risky: should we use something else to secure IT systems?

Obviously the title of this post is provocative, but reading some recent news it is evident that we, IT professionals and the IT industry, are not good at managing cryptography. The consequence is that we deploy cryptography in IT products and give users a false sense of security. This can actually have worse consequences than not using cryptography at all. I will give just a couple of examples.

This research paper shows how a well-known brand of hard disks has implemented disk encryption in totally faulty ways, to the point that for some disk models the built-in disk encryption functionality provides hardly any security. This is just one of many similar cases in which cryptographic protocols and algorithms are implemented incorrectly, cancelling all or most of the security they should provide.
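
One classic failure mode of this kind, sketched below with made-up numbers, is deriving the encryption key from a non-cryptographic PRNG seeded with the current time, so that an attacker can recover the key by simply replaying all plausible seeds. (This is an illustrative reconstruction, not necessarily the exact flaw found in the paper.)

    # A key derived from a time-seeded, non-cryptographic PRNG can be
    # brute-forced by trying every plausible seed.
    import random

    def weak_key(seed):
        rng = random.Random(seed)            # NOT a CSPRNG
        return bytes(rng.getrandbits(8) for _ in range(32))

    creation_time = 1_700_000_000            # hypothetical timestamp
    key = weak_key(creation_time)            # the "device" generates its key

    # The attacker tries a few hours' worth of seeds in seconds:
    for seed in range(creation_time - 3_600, creation_time + 3_600):
        if weak_key(seed) == key:
            print("key recovered, seed =", seed)
            break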

Another research paper shows how a well-funded agency or corporation can in practice break the encryption of any data protected with the Diffie-Hellman (DH) key exchange using keys of up to and including 1024 bits. Should we be shocked by this news? Not really, since it was already known 10 years ago that a 1024-bit key is too short for DH: such a key offers less than 80 bits of conventional (symmetric-equivalent) security. RFC 7525 states that the absolute (legacy) minimum conventional security is 112 bits, which for DH corresponds to a 2048-bit key, while the currently recommended minimum is 128 bits, which corresponds to roughly a 3072-bit DH key. Yet even though we, IT professionals and the IT industry, have known for at least 10 years that 1024-bit DH keys are too short to protect the data they are supposed to protect, as of today far too many HTTPS websites, VPNs, and SSH servers still use DH keys of 1024 bits or less (see again the research paper mentioned above).
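
For reference, here is the approximate mapping between finite-field DH modulus sizes and conventional bit security, as given by NIST SP 800-57, expressed as a small lookup:

    # Approximate DH modulus size -> conventional (symmetric) security,
    # per NIST SP 800-57.
    STRENGTH = {1024: 80, 2048: 112, 3072: 128, 7680: 192, 15360: 256}

    def dh_security_bits(modulus_bits):
        """Largest strength level whose modulus requirement is met."""
        return max((s for m, s in STRENGTH.items() if modulus_bits >= m),
                   default=0)

    assert dh_security_bits(1024) == 80      # below RFC 7525's 112-bit floor
    assert dh_security_bits(2048) == 112     # the legacy minimum
    assert dh_security_bits(3072) == 128     # today's recommended level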

Unfortunately these are not two isolated examples; recent news is full of similar cases. So I am starting to wonder whether we are good enough to manage cryptography, or whether we should look into something else to protect IT systems.

The OPM hack and Biometric Authentication

For a long time, biometric authentication has been considered the safest and most secure way of identifying users and granting access to IT and non-IT services. It has just one serious drawback: you cannot change your biometric credentials. They are certainly “you”, and if your biometric credentials are stolen, someone else could impersonate “you”.

This is what has happened in the OPM hack: the latest news reports that 5.6 million fingerprints of US federal employees have been stolen (see Wired, for example). Information about this is scarce, and it is not clear in which format the fingerprints were stolen or how easy it would be to reproduce them. Security experts believe that reproducing them will be possible sooner or later; it is just a question of time, technology, and money.

So what about the people whose fingerprints have been stolen and could be reproduced by others? What about the security consequences for companies and the state?

How can we use the security of biometrics without the associated risk of impersonation?
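
One direction explored in the research literature is “cancelable biometrics”: instead of the raw template, the system stores a keyed, revocable transform of it, so that a stolen template can be revoked and reissued somewhat like a password. Below is a toy sketch of the “biohashing” flavour of the idea, with made-up feature values; a real system would also have to handle the natural fuzziness of biometric readings:

    # Toy "biohashing": project the biometric feature vector with a
    # user-specific random matrix and binarize. If the stored template
    # leaks, re-enroll with a new seed and the old template is useless.
    import random

    def biohash(features, seed, bits=16):
        rng = random.Random(seed)
        template = []
        for _ in range(bits):
            # dot product with a random +/-1 projection vector
            proj = sum(f * rng.choice((-1, 1)) for f in features)
            template.append(1 if proj >= 0 else 0)
        return template

    features = [0.8, -0.1, 0.33, 0.5]        # hypothetical features
    stored = biohash(features, seed=1234)    # enrolled template

    # After a breach, re-enroll with a new seed; the leaked template
    # no longer matches what the system expects.
    reissued = biohash(features, seed=9999)
    print(stored != reissued)                # True (with high probability)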