Perception of IT Security in Times of Crisis

What happens to IT security in times of crisis? Changes in behaviour and the rush to provide new services can be detrimental to IT security, and we are witnessing it these days. From a broad perspective, there are at least two major sides to it: IT service providers and (new) users.

In a crisis like COVID-19, which we are experiencing right now, IT service providers are asked to provide new services in an extremely short time and to improve the already existing ones. But in IT, rush does not usually go well with security. Take as an example what has just happened in Italy: the government charged the state pension fund (INPS) with providing COVID-19 subsidies to no fewer than 5 million people (INPS normally pays pensions to more than 16 million people in Italy). Due to the full lockdown of the country, the procedure obviously had to be done online. So the new IT services to request and manage the subsidies went online and, according to the news, an epic IT security failure followed: availability and, in part, integrity and confidentiality were violated.

Is it possible, in IT, to manage extremely tight and aggressive schedules and security at the same time? I believe so, but only if security is embedded in everything we do in IT.

But I believe that IT security, at least for IT as it is nowadays, also requires users to be aware of it and to behave accordingly. Due to COVID-19, we have all been required or strongly advised to move many activities to online services, from work to school to shopping. But the physical world has very different characteristics from the virtual Internet world.

For example, consider the case of a small local pro-bono association whose members used to meet in person periodically: access to the room is free, and there is freedom of speech and of contribution to the association. Now think about moving these meetings to an audio/video conference on the Internet, publicly announced, with free entrance and free access for all participants to audio, video, chat and so on: is this the same thing?

Definitely not.

The rules and behaviours which apply to a physical meeting of a small group of people in a physical room, announced with paper leaflets distributed on the streets, surely do not apply to an audio/video/chat conference on the Internet. What can happen if you treat them the same? Instead of the usual 20 participants, hundreds or thousands of people could show up, and some could abuse the free access to audio, video or chat for whatever purpose, including drugs, malware, pornography and so on.

Is this a failure of the technology, of the service provider, or of the security awareness of the (new) users?

How long will it take humanity to really comprehend the difference between the physical and the virtual world?

CacheOut: another speculative execution attack on CPUs

It was two years ago that we learned about Spectre and Meltdown, the first speculative execution attacks on CPUs, which exploit hardware “bugs”, in particular the speculative and out-of-order execution features of modern CPUs. In the last two years a long list of attacks has followed these initial two, and CacheOut is the latest.

CacheOut, here its own website and here an article about it, builds on and improves over previous attacks, bypassing countermeasures like the microcode updates provided by CPU makers and operating system patches. The CacheOut attack allows an attacker, on Intel CPUs, to read data from other processes, including secret encryption keys, data from other virtual machines and the contents of Intel’s secure SGX enclave.

Beyond the actual consequences of this attack and the availability and effectiveness of software countermeasures, it is important to remember that the only final solution to this class of attacks is the development and adoption of new, redesigned CPU hardware. This will probably still take a few years, and in the meantime we should adopt countermeasures based on risk evaluation, so as to isolate critical data and processes on dedicated CPUs or entire computers.
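
On Linux, one practical way to isolate a critical process on dedicated CPU cores is CPU affinity. Below is a minimal sketch in Python (the function name `pin_to_cpus` is my own, and this is only one piece of a real mitigation: untrusted workloads must also be kept off the same physical cores and their SMT siblings); `taskset` or cgroups achieve the same at the system level.

```python
import os

def pin_to_cpus(cpus):
    """Restrict the current process to the given CPU cores (Linux only).

    A risk-based mitigation sketch: run processes handling secrets on
    dedicated cores, so they never share a core (and its caches and
    internal buffers) with untrusted code.
    """
    if not hasattr(os, "sched_setaffinity"):
        raise NotImplementedError("CPU affinity is not controllable on this platform")
    os.sched_setaffinity(0, set(cpus))  # pid 0 means the calling process
    return os.sched_getaffinity(0)      # report the affinity actually set
```

For example, `pin_to_cpus({2, 3})` would confine the process to cores 2 and 3, leaving them free to be reserved for sensitive workloads only.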

AutoCAD malware

Recently I have paid some attention to AutoCAD and similar software. Not that I use them or know much about them, but both the complexity and the amazing features of some of these applications definitely struck me. But with complexity, a large number of features and sheer size of code also come vulnerabilities, including security vulnerabilities.

A few days ago I noticed this article (here a less technical summary) about AutoCAD malware, which has been around for more than 10 years. The purpose of this malware can be twofold: it can be just another malware infection channel or, more likely, a very targeted attack channel. Indeed CAD software is used for designing buildings, bridges, tunnels, roads and so on, and some blueprints can be worth millions. Companies have taken notice of this, and security features have been introduced in the applications.

But the issue which does not seem to be appreciated enough (I have no statistics, though, so I may be wrong on this) is the patching process (and this is not limited to CAD software but applies to other specialised software, for example digital audio or gaming). It seems to me that some of these applications are seldom updated (one needs to download or buy a new version), or that security patches are bundled together with new functionalities which can come at a cost, at least after the initial few years of support.

In my opinion, in an ideal world security patches should be provided for free to anyone for as long as the program is supported. Obviously this can have an economic impact on the company producing the software and could require changes in the way software is built, sold and distributed (costs again).

Meltdown and Spectre bugs should help improve IT Security

Yes, I want to be positive and look at a bright future. Everybody is now talking about the Meltdown and Spectre bugs (here the official site). I think that these hardware bugs will in the end help improve the security of our IT systems. But we should not underestimate the pain they could cause, even if it is too early to say this for certain, since patches and countermeasures could be found for all systems and CPUs or, at the opposite extreme, unexpected exploits could appear.

The central issue is that IT, and IT security in particular, depends crucially on the correctness of the behaviour of the hardware, first of all the CPUs. If the foundation of the IT pillar is weak, sooner or later something will break. Let’s then hope that the Meltdown and Spectre bugs will help design more secure IT hardware and, in the long run, improve IT security as a whole.

Rowhammer, SGX and IT Security

I am following with interest the developments of the Rowhammer class of attacks and defences; here is one of the latest articles. (As far as I know, these are still more research subjects than real-life attacks.)

Already at the time of the Orange Book (more correctly, the “Trusted Computer System Evaluation Criteria”, TCSEC) in the ’80s, it was clear how important hardware is in building the chain of trust on which IT security relies.

Rowhammer attacks follow from a hardware security weakness, even if this weakness is also a hardware strength: the increase in density and decrease in size of DRAM cells, which makes it possible to build memory banks with lower energy consumption and higher capacity. Unfortunately it also enables the bit-flipping of nearby memory cells that can give rise to a total compromise of the IT system, that is, a Rowhammer attack. It is true that there exist memory banks with Error Correcting Codes (ECC) which make Rowhammer attacks quite hard, but these memory banks are more expensive, a little slower and available only on high-end server computers. One can look at it as a hardware feature which carried within it an unexpected security weakness.
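
To see why ECC raises the bar, here is a toy illustration (my own sketch, not how DRAM controllers are actually implemented) of a Hamming(7,4) code in Python, the simplest single-error-correcting code: the recomputed parity checks form a “syndrome” that points directly at a single flipped bit, which can then be flipped back.

```python
def hamming74_encode(d):
    # d: four data bits [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over codeword positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over codeword positions 4,5,6,7
    # codeword layout, positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    # Recompute the parity checks; the syndrome is the 1-based position
    # of a single flipped bit (0 means no error detected).
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1   # flip the faulty bit back
    return c, syndrome
```

A single Rowhammer-style flip in a codeword is silently repaired on read; real server ECC (SECDED) additionally detects double-bit errors, which is why multi-bit Rowhammer variants are needed to defeat it.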

As it turns out, it seems very hard to find software measures which can detect, block or prevent Rowhammer attacks. Many different software defences have been proposed, but as of today none is really able to completely stop all types of Rowhammer attack. A hardware weakness seems to require hardware countermeasures.

To make the situation even more intriguing, the hardware-based Intel SGX security enclaves can be mixed into this scenario. Intel SGX is a hardware x86 instruction-set extension which makes it possible to execute programs securely and confidentially in an isolated environment (called a “security enclave”). Nothing can directly look into an SGX security enclave, not even the operating system, to the point that data can be processed in it even on systems controlled by an adversary (though SGX security enclaves are not immune from side-channel attacks). Rowhammer attacks cannot be performed from the outside against programs running in an SGX security enclave. Vice versa, under some conditions an SGX security enclave can run, without being detected, Rowhammer software that attacks the hardware and the programs running on it. Overall, it seems that Intel SGX security enclaves can provide extremely interesting IT security features but at the same time can also be abused to defeat IT security itself.

All of this becomes more worrisome when thinking of Virtual Machines and Cloud Services.

Complexity, abstraction and security

Reading news like this one, I wonder how we could improve the management of IT security, or even just keep up with the current pace of IT development. I see two main trends:

  • complexity: IT systems are getting more complex at a very high speed; every system should connect with every other, provide an incredible number of features to different users, run on many different platforms, and so on
  • abstraction: to manage this complexity, the approach is to abstract the programming and management of IT: programming can take advantage of existing modules and just connect them appropriately at the functional level, which also provides functionality to monitor and manage the IT systems themselves (for example, it is now possible to deploy entire applications or virtual infrastructures with just a couple of “clicks”).

But what about security? Even if each component is “secure” (according to some definition of the word), how can the “security” of current and future IT systems be evaluated?

There is no doubt that IT security has been a difficult subject in recent years, but I believe that in the near future we will need new approaches and tools to tackle the management of IT security, given the ever increasing complexity of IT systems.

A Practical Look into GDPR for IT – Final Part 3

I have just published here the third and last article of my short series on the EU General Data Protection Regulation 2016/679 (GDPR) for IT.

In this final article I discuss a few points about the management of data breaches and the IT measures required to satisfy citizens’ rights over their personal data managed by IT systems.

A Practical Look into GDPR for IT – Part 2

I have just published here the second article of my short series on the EU General Data Protection Regulation 2016/679 (GDPR) for IT.

In this article I discuss a few points about the risk-based approach required by the GDPR, which introduces the Data Protection Impact Assessment (DPIA), and a few IT security measures which should often be useful in mitigating risks to personal data.

A Practical Look into GDPR for IT

I have just published here the first article of a short series in which I consider some aspects of the requirements on IT systems and services due to the EU General Data Protection Regulation 2016/679 (GDPR).

I started to write these articles in an effort, first of all for myself, to understand what the GDPR actually requires from IT, which areas of IT can be impacted by it, and how IT can help companies implement GDPR compliance. Obviously my main interest is in understanding which IT security measures are most effective in protecting GDPR data and what the interrelation between IT security and GDPR compliance is.

On SHA1, Software Development and Security

It has been known for a few years that the SHA1 cryptographic hash algorithm is weak, and since 2012 NIST has suggested replacing it with SHA256 or other secure hash algorithms. Just a few days ago the first practical demonstration of this weakness was announced: the first computed SHA1 “collision”.

Since many years have passed since the discovery of SHA1’s weaknesses, and substitutes without known weaknesses are available, one would expect almost no software to be using SHA1 nowadays.

Unfortunately reality is quite the opposite: many applications depend on SHA1 in critical ways, to the point of crashing badly if they encounter a SHA1 collision. The first to fall to this was the WebKit browser engine source code repository, due to the reliance of Apache SVN on SHA1 (see e.g. here). But Git also depends on SHA1, and one of its most famous adopters is the Linux kernel repository (indeed, Linus Torvalds created Git to manage the Linux kernel source code).
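
Git’s reliance on SHA1 is easy to see: every object ID is the SHA1 digest of a small typed header followed by the raw content. A minimal sketch in Python (the function name is my own):

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    # A Git blob's object ID is SHA1("blob <size>\0" + content).
    # Since the ID *is* the content's identity in the repository,
    # a SHA1 collision means two different files with the same ID.
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()
```

This is why the hash algorithm cannot simply be swapped in Git without a migration plan: every stored object name in every repository is a SHA1 value.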

For some applications, substituting SHA1 with another hash algorithm requires extensively rewriting large parts of the source code. This takes time, expertise and money (probably not in this order) and does not add any new features to the application! So unless it is really necessary, or no way is found to keep using SHA1 while avoiding the “collisions”, nobody really considers doing the substitution. (By the way, it seems that there are easy ways of adding checks to avoid the above-mentioned “collisions”, so “sticking plasters” are currently being applied to applications adopting SHA1.)

But if we think about this issue from a “secure software development” point of view, there should not be any problem in substituting SHA1 with another hash algorithm. Indeed, by designing software in a modular way and keeping in mind that cryptographic algorithms have a limited life expectancy, one should plan from the beginning of the software development cycle how to substitute one cryptographic algorithm with another of the same class but “safer” (whatever that means in each case).
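
This kind of “crypto-agility” can be as simple as naming the algorithm in exactly one place. A minimal sketch in Python (the constant and function names are my own), using the standard library’s `hashlib`, which already resolves algorithms by name:

```python
import hashlib

# Single point of configuration: migrating off a broken algorithm
# becomes a one-line change, not a source-wide rewrite.
HASH_ALGORITHM = "sha256"   # was "sha1" before its deprecation

def digest(data, algorithm=None):
    """Return the hex digest of data using the configured algorithm.

    Callers never hard-code the algorithm; an explicit override is
    kept only to verify old digests during a migration period.
    """
    h = hashlib.new(algorithm or HASH_ALGORITHM)
    h.update(data)
    return h.hexdigest()
```

A real migration also has to deal with stored digests, so in practice one would record the algorithm name alongside each digest; the point here is only that the choice of algorithm should be a parameter, not an assumption scattered through the code.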

Obviously this is not yet the case for many applications, which means that we still have quite a bit to learn about how to design and write “secure” software.