NTIA Request for Comment on IoT Policies

The National Telecommunications and Information Administration (NTIA) of the US Department of Commerce's Internet Policy Task Force has announced a Request for Comment on the key issues regarding the deployment of the Internet of Things.

This is one of the first steps towards creating policies and/or regulations for IoT devices, and it can be a very good occasion to state some clear security baselines.

GSMA and Security of IoT

The GSMA has just announced the availability of the “GSMA IoT Security Guidelines”. Potentially this could have quite a positive impact on the security of IoT devices. Even if the GSMA speaks only for the mobile telecommunications industry, its importance in today's communications market is undeniable. The idea is that companies and providers who plan to connect new IoT devices to the network will follow these Security Guidelines and provide at least some level of security for the devices' communications.

Let’s hope that this will be a first real step towards the IT security of IoT devices; but first we need to read and understand these guidelines, and then see whether they are actually implemented and whether their implementation provides the expected benefits.

On the Privacy of Webcams and the Security of IoT Devices

The Ars Technica article ‘“Internet of Things” security is hilariously broken and getting worse’ shows how, using Shodan, one can find pictures from millions of open webcams on the internet.
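
For anyone who doubts the scale, the Shodan API makes it trivial to at least count exposed devices matching a query. A minimal sketch, assuming the shodan Python package and a valid API key (the query string and the key below are purely illustrative):

```python
import shodan  # pip install shodan

API_KEY = "YOUR_SHODAN_API_KEY"   # placeholder: a real key is needed
api = shodan.Shodan(API_KEY)

# Count devices matching a simple, purely illustrative query.
# api.count() only returns the number of matches, it does not download results.
result = api.count("webcam")
print(f"Devices matching 'webcam': {result['total']}")
```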

The issue is not new but the scale of the problem is threatening. As the article nicely points out:

  • people do not care about the security or privacy features of the devices they buy;
  • the important points are cost and ease of management (which means it is better if there is no password to access the device);
  • only to throw the device away the day they find themselves on Shodan or in a picture in a newspaper, and say “never again”.

But who is going to do something about it? Who should defend the privacy of people and the security of the internet? Should the IoT market be regulated, self-regulated or something in between?

Writing Software and Security Bugs

Writing software is really hard: not only is it quite difficult to implement the functionalities that customers and final users desire (and sometimes require), but it is also extremely difficult to write bug-free software, free from both functionality bugs and security bugs. (And it is not always easy to understand whether there is a difference, and what the difference is, between functionality and security bugs.)

Unfortunately, except for software developers (and not even all of them), the fact that writing software is quite hard either comes as a surprise or is just plainly impossible to accept. How much harder can building an engine be than writing the software that pilots an airplane? (Consider moreover that today much of the work of building an engine is done with software.)

Here I have collected a random selection of recent news items from The Register, relevant in different ways to this subject:

One of the first examples of IoT and security risks

Among IT practitioners there are a lot of ideas and discussions on the “Internet of Things” (IoT) and the security risks associated with it.

While the IoT promises many positive and useful developments, its security aspects are very difficult to manage, to the point of placing a very big question mark over the idea of the IoT itself.

One example is described in the research “House of Keys: Industry-Wide HTTPS Certificate and SSH Key Reuse Endangers Millions of Devices Worldwide” published by SEC Consult, which shows that many hosts, typically home and SOHO internet-access routers, use the same cryptographic keys. These keys are public and well known, so anyone can impersonate the devices, and anyone who can intercept their traffic can decrypt it.
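
A rough way to spot this kind of key reuse on one's own network is to compare SSH host-key fingerprints across devices: identical fingerprints on unrelated hosts strongly suggest a shared, baked-in key. A minimal sketch, assuming OpenSSH's ssh-keyscan is installed and using hypothetical addresses:

```python
import base64
import hashlib
import subprocess
from collections import defaultdict

def host_key_fingerprint(host: str) -> str | None:
    """Fetch the RSA host key via ssh-keyscan and return its SHA-256 fingerprint."""
    out = subprocess.run(
        ["ssh-keyscan", "-T", "5", "-t", "rsa", host],
        capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        if line.startswith("#"):
            continue
        parts = line.split()          # "<host> <keytype> <base64 key>"
        if len(parts) >= 3:
            return hashlib.sha256(base64.b64decode(parts[2])).hexdigest()
    return None

# Hypothetical list of devices to survey.
hosts = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]
by_fingerprint = defaultdict(list)
for h in hosts:
    fp = host_key_fingerprint(h)
    if fp:
        by_fingerprint[fp].append(h)

# The same fingerprint on different hosts suggests a shared, baked-in key.
for fp, hs in by_fingerprint.items():
    if len(hs) > 1:
        print(f"Shared SSH host key {fp[:16]}... on: {', '.join(hs)}")
```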

Even if the impact of this vulnerability is probably not very high, it seems extremely difficult to fix: new devices will be fixed, but the millions already in use will probably never be, and they will remain active for a few more years.

Even more worrisome is that these are IT devices developed, built and sold by IT companies that should know about IT and IT security. What will happen when billions of devices developed, built and sold by non-IT companies are connected to the internet (the real IoT)?

Cryptography is too risky: should we use something else to secure IT systems?

Obviously the title of this post is provocative, but reading some recent news it is evident that we, IT professionals and the IT industry, are not good at managing cryptography. The consequence is that we deploy cryptography in IT products and give users a false sense of security. This can actually have worse consequences than not using cryptography at all. I will give just a couple of examples.

This research paper shows how a well-known brand of hard disks has implemented disk encryption in thoroughly faulty ways, to the point that for some disk models the built-in disk encryption functionality provides hardly any security. This is just one of many similar cases where cryptographic protocols and algorithms are implemented incorrectly, cancelling all or most of the security that they should provide.

Another research paper shows how a well-funded agency or corporation can in practice break the encryption of any data protected with the Diffie-Hellman (DH) key exchange algorithm using keys of up to 1024 bits. Should we be shocked by this news? Not really, since it was already known 10 years ago that a 1024-bit key is too short for DH. Indeed, as per RFC 7525, a 1024-bit DH key offers less than 80 bits of conventional security, whereas the absolute (legacy) minimum required conventional security is 112 bits and the current minimum is 128 bits, which in practice corresponds to a 2048-bit DH key. Even though we, IT professionals and the IT industry, have known for at least 10 years that 1024-bit DH keys are too short to protect the data they are supposed to protect, as of today far too many HTTPS websites, VPNs and SSH servers still use DH keys of 1024 bits or less (see again the research paper mentioned above).
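
Generating adequately sized parameters is not the hard part; libraries have supported it for years, and the real problem is deployment and configuration. A minimal sketch, assuming the Python cryptography package is available, that generates 2048-bit DH parameters and sanity-checks their size:

```python
from cryptography.hazmat.primitives.asymmetric import dh
from cryptography.hazmat.primitives import serialization

# Generate 2048-bit Diffie-Hellman parameters (the practical minimum today;
# 1024-bit parameters fall below the 112/128-bit conventional security levels
# referenced by RFC 7525). Generation of a safe prime can take a while.
parameters = dh.generate_parameters(generator=2, key_size=2048)

# Sanity check: the prime modulus should really be 2048 bits long.
p_bits = parameters.parameter_numbers().p.bit_length()
assert p_bits >= 2048, f"DH modulus is only {p_bits} bits"

# Serialize in PEM format, e.g. to feed a TLS server configuration.
pem = parameters.parameter_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.ParameterFormat.PKCS3,
)
print(pem.decode())
```

Equivalent parameters can be produced with the classic "openssl dhparam 2048" command and fed to a web or VPN server configuration.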

Unfortunately these are not two isolated examples; recent news is full of similar cases. So I start to wonder whether we are good enough at managing cryptography, or whether we should look into something else to protect IT systems.

IT Security and Cars (and the IoT world)

This news item is a very good example of how IT security is generally perceived: almost like an annoying add-on or an afterthought, in any case something better thought about later…

Security should be one of the pillars of any IT product and service, but it very seldom is.

It will be very interesting to see how security and the IoT (Internet of Things) go together; the first glimpses are not promising, even though many people warn of the possible disasters ahead.

More Thoughts on Maintenance, Updates and Fixing

The issue of maintenance, updates and software fixing actually deserves a few more considerations.

For a long time, security experts have been saying that software updates are essential for the security of operating systems and applications. Once, instead, software updates were considered only when new features or feature updates were needed. It took a long time, but by now, at least for mainstream applications, periodic (e.g. daily, weekly, monthly) software updates, in particular for security issues, are standard.
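
As a concrete illustration of how cheap it is to check whether a host is actually keeping up with its updates, here is a minimal sketch for a Debian/Ubuntu system (apt itself warns that its command-line output is not a stable interface, so this is indicative only):

```python
import subprocess

def pending_upgrades() -> list[str]:
    """Return the package lines that 'apt list --upgradable' reports."""
    out = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True,
    ).stdout
    # Upgradable lines look like "name/suite version arch [...]"; skip headers.
    return [line for line in out.splitlines() if "/" in line]

upgrades = pending_upgrades()
security = [line for line in upgrades if "-security" in line]
print(f"{len(upgrades)} packages upgradable, {len(security)} from a security suite")
```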

Still, there are many unresolved issues. First of all, there are still systems which are not updated regularly. Among them are some “mission critical” ICT systems considered so critical that updates are not applied for fear of breaking the system or interrupting the service, and, at the opposite end, some consumer embedded systems considered of so little value that a security update feature has been dropped altogether. (In my previous post I briefly mentioned these two classes of systems.)

But there are two other aspects that must be considered. The first is the time it takes between the release of a security patch or update by the software vendor or maintainer and its installation on the system. As in the cases of both Bash ShellShock and OpenSSL Heartbleed, patches and updates are sometimes released out of the normal schedule and should be applied “immediately”. To apply “immediately” a patch released out of the normal schedule, one should at least have a way:

  1. to know “immediately” that the patch has been released
  2. to obtain the patch and apply it “immediately” to the systems.

Obviously, to do this one needs to have an established emergency security procedure in place. One cannot rely on someone reading or hearing the news somewhere and deciding on her own to act as a consequence.

The second aspect, also relevant in the two incidents mentioned before, is what to do in the time between the announcement of the bug's existence and the availability of the patch, that is, while it is a 0-day vulnerability. Again there must be a well-established emergency security procedure which indicates what to do. The minimum is to assess the security risk of running the systems unpatched, to investigate whether there are other countermeasures (such as IPS signatures or other network filters, detailed monitoring, etc.), and in the extreme case simply to turn off the service until the patch is available.

It sounds easy, but it is not yet easy to put into practice.

Why is the Bash ShellShock bug so threatening?

This year we have already had at least two bugs which, someone claims, “have threatened to take down the Internet”: OpenSSL Heartbleed and Bash ShellShock.

I will probably sound unconventional, but by now I believe that, in particular for the Bash ShellShock bug, a much bigger security problem lies elsewhere.

Indeed, my personal experience with Bash ShellShock has been that most affected systems were fixed overnight by the next scheduled automatic system update. Some critical systems were manually updated in a couple of minutes without any service interruption. So all systems were fixed as a matter of routine ICT maintenance.
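
For anyone who wants to verify that a given host really has been patched, the classic public test for CVE-2014-6271 is easy to automate. A minimal sketch that runs it against the local bash binary:

```python
import os
import subprocess

def bash_is_shellshock_vulnerable() -> bool:
    """Run the classic CVE-2014-6271 test against the local bash binary.

    A vulnerable bash parses the function definition in the environment
    variable and then executes the trailing command; a patched bash does not.
    """
    env = {**os.environ, "testvar": "() { :;}; echo VULNERABLE"}
    result = subprocess.run(
        ["bash", "-c", "echo shellshock probe"],
        env=env, capture_output=True, text=True,
    )
    return "VULNERABLE" in result.stdout

if __name__ == "__main__":
    print("vulnerable" if bash_is_shellshock_vulnerable() else "patched")
```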

So where is the problem? Why so much hysteria about it? How is it possible that even very large and well-known systems have been attacked by exploiting this bug?

Since it seems that as of today there are still many (too many) systems vulnerable to both OpenSSL Heartbleed and Bash ShellShock, we have to conclude that the problem does not lie with the availability of the fix but with the maintenance procedures of the systems themselves.

It is then not so much a problem of the bug itself, but of those who manage or design the systems that adopt this software, or indeed any software at this point.

I can see two major areas where there can be maintenance problems:

  • large systems built with the “if it is not broken, do not fix it” approach: in this case maintenance, even security patching, is often not applied, and update and patching procedures are complex and executed rarely; of course, if the system has not been designed or configured to be easily updated or patched, any vulnerability in any software component can become a really big problem
  • similar but even worse are embedded systems designed with the “build it once and forget about it” approach: here the very idea of maintenance has not been considered, or it exists only in the form of re-installing the full software (but only if you know how to do it and if a fixed new version of the full software has been made available).

It seems to me that, rather than blaming those who write and maintain these software components, we should first think about how we maintain the software on our own systems, what our procedures are for normal updates and security updates, and so on.

GameOver Zeus and Banking Malware

This announcement by US-CERT made me think about the current status of the war (I think that at the moment this is actually the correct word) between attackers/thieves/fraudsters on one side and ICT security practitioners, banks, financial institutions, etc. on the other.

Recently we have seen banking malware using Tor hidden services to hide its C&C (command-and-control) servers or, as described in the US-CERT announcement, P2P (peer-to-peer) networks. The purpose is the same: to hide the controlling master of the malware, that is, the attacker/thief/fraudster him- or herself. This also means that security practitioners, law enforcement and bank personnel have recently become very good at finding and at least disrupting C&C servers; otherwise there would be no need to find new ways of hiding them.

But how is this war going, that is, who is winning? Let’s be clear, we, the good guys, are losing.

At first sight the reason for this is simple: there are just too many bugs in today’s software (and possibly in hardware, or at least in the embedded software inside hardware), and new bugs are added at such a rate that our efforts to ‘secure’ the software improve the situation a little, but not much. It is just a never-ending chase: find a bug, exploit the bug, fix the bug, repeat… It is true that bugs are getting more difficult to find, that software developers are getting better at writing software and fixing bugs, that software houses award bug bounties to bug discoverers, and so on. But the same happens on the other side, and a real market for exploits of unknown (also called 0-day) bugs, in which even secret services and the like participate, exists and flourishes.

In this situation the approach that is often adopted to protect online (web-based) financial transactions is to balance the cost of defensive measures against the losses caused by attackers. Among the losses one should consider both the direct ones and the indirect ones, such as bad publicity and loss of customers. Investing too much in a particular defensive measure could work, but it could also be a waste of money, since the next attack can simply bypass the expensive defence and exploit some other bug, a flaw in the process or, even worse, a human weakness.
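
As a toy illustration of this balancing act, one can use the classic annualized loss expectancy (ALE) model; all the figures below are made up and not taken from any real case:

```python
# Toy cost/benefit comparison for a defensive measure, using the classic
# annualized loss expectancy formula: ALE = SLE * ARO, where SLE is the
# single loss expectancy (direct + indirect loss per incident) and ARO is
# the annualized rate of occurrence. All numbers below are hypothetical.

def ale(sle_direct: float, sle_indirect: float, aro: float) -> float:
    return (sle_direct + sle_indirect) * aro

ale_without = ale(sle_direct=200_000, sle_indirect=300_000, aro=2.0)   # no extra defence
ale_with    = ale(sle_direct=200_000, sle_indirect=300_000, aro=0.5)   # with the new defence
countermeasure_cost = 400_000                                          # yearly cost of the defence

net_benefit = (ale_without - ale_with) - countermeasure_cost
print(f"ALE without defence: {ale_without:,.0f}")
print(f"ALE with defence:    {ale_with:,.0f}")
print(f"Net yearly benefit:  {net_benefit:,.0f}")  # negative => the defence costs more than it saves
```

Even when the arithmetic favours the countermeasure, it says nothing about the attack that simply sidesteps it, which is exactly the point above.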

This really looks like a never-ending cat-and-mouse game.