A Practical Look into GDPR for IT – Final Part 3

I have just published here the third and last article of my short series on the EU General Data Protection Regulation 2016/679 (GDPR) for IT.

In this final article I discuss a few points about the management of data breaches and about the IT measures required to satisfy citizens’ rights over their personal data managed by IT systems.

On Manufacturing, IoT and IT Security

We have been used for many years to the fact that products of any kind contain digital and electronic components. The process of manufacturing products and integrating digital and/or electronic components into them is by now quite well established and robust. The most important requirements on the digital / electronic components are that they perform their tasks correctly, effortlessly and reliably. Security is mostly perceived as safety, for example from electric shock or from dangerous behaviour of the product induced by the digital / electronic components. It does not matter that the digital component has features which are not used by the product, or that it was designed for other purposes, as long as it performs correctly as a component of the product.

But the scenario changes dramatically if the digital component is connected to a network, in particular to the Internet. In this case the product becomes part of the “Internet of Things” (IoT), and the security perspective changes completely. For example, those unused features of the digital component, if not correctly configured and managed, can be abused and become a serious security threat. What harm can be done with a washing machine connected to the Internet? It is difficult to say, but if short of imagination one can always try to enlist the washing machine in a botnet for distributed denial of service (DDoS) attacks.

So the manufacturer should also take care of the full IT security of any digital / electronic component embedded in its products. This means that even unused features must be configured, managed and updated.

But this is not all. The interaction between components in a product can create new types of security threats, which can be considered side-channel threats and attacks. The abuse and misuse of digital components can be quite inventive; for example, I recently noticed the following in the news:

  • how to use a scanner to communicate, through a laser mounted on a drone, with malware on a PC (see e.g. this article)
  • how a smartphone’s or laptop’s ambient light sensor can be used to steal the browsing history from the device (see e.g. this article)
  • how to install malware on Smart TVs using DVB terrestrial radio signals (see e.g. this article)

and others concerning light bulbs, surveillance cameras etc.

Typically in IT security one has first to describe clearly the threat scenarios and then, based on these, to evaluate the risks and the security measures needed to mitigate them. In the case of IoT it seems very difficult to imagine all possible threat scenarios, due to the interaction between embedded Internet-connected digital components and the product’s other components.
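To make the idea concrete, the evaluation step can be sketched as a simple likelihood × impact scoring of threat scenarios. The scenario names and the 1–5 scales below are hypothetical, a minimal illustration rather than a real risk methodology:

```python
# Hypothetical risk-scoring sketch: each threat scenario gets a likelihood
# and an impact on a 1-5 scale; risk is their product. The scenarios and
# scores below are illustrative only.
SCENARIOS = {
    "unused service abused for botnet/DDoS": (4, 3),
    "side channel between embedded components": (2, 5),
    "unpatched firmware exploited remotely": (4, 4),
}

def risk(likelihood: int, impact: int) -> int:
    """Naive risk score: product of likelihood and impact."""
    return likelihood * impact

def prioritize(scenarios):
    """Return (scenario, score) pairs sorted by descending risk."""
    return sorted(
        ((name, risk(l, i)) for name, (l, i) in scenarios.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

for name, score in prioritize(SCENARIOS):
    print(f"{score:2d}  {name}")
```

The point of the sketch is the process, not the numbers: mitigations would then be chosen starting from the top of the list, and the difficulty with IoT is precisely that the scenario dictionary can never be considered complete.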

It is even more difficult to imagine how, in the current markets, manufacturers of products like light bulbs, refrigerators, television sets and more or less anything else one can imagine can devote time and money to the security of embedded digital components produced by someone else, components which should just work, cost as little as possible and require no maintenance.

PS. Products like cars, airplanes etc. in regulated sectors should constitute a welcome exception to this, thanks to the very stringent safety concerns and rules that apply to them.

PPS. Also of interest is this just-published Microsoft whitepaper on Cybersecurity Policy for IoT.

A Practical Look into GDPR for IT – Part 2

I have just published here the second article of my short series on the EU General Data Protection Regulation 2016/679 (GDPR) for IT.

In this article I discuss a few points about the risk-based approach required by the GDPR, which introduces the Data Protection Impact Assessment (DPIA), and a few IT security measures which should often be useful to mitigate risks to personal data.

On non-malware, fileless attacks

It is spring again, and it is time for reports on IT Security or in-Security in 2016.

One thing caught my eye this year, and I am not sure if it is a trend, just a coincidence or my own susceptibility: I noticed a comeback of fileless malware, also counter-intuitively called “non-malware”. This is malware which does not install itself on the filesystem of the target machine; instead it can load part of itself into memory (RAM), it uses tools of the operating system (PowerShell, WMI etc.) and local applications, and it hides parameters and data, for example in the Windows Registry.

Actually there is nothing really new here: the very old “macro viruses” were of this type. What has changed is that today personal computers and servers run for very long times (very few people switch their computers completely off daily; usually personal computers are just put to “sleep”), which gives much longer persistence to this type of malware. Obviously fileless malware is harder to write and to maintain, but it is also harder to identify, that is, it has more chances of escaping detection by anti-malware and anti-virus programs. Moreover, even pure behavioural analysis can be fooled by this type of malware, since it uses standard tools of the machine to perform tasks only slightly out of the ordinary. On the other hand, in case of infection the malware is nonetheless present on the machine, so anti-malware tools just have to look harder to find it.
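As a toy illustration of what “looking harder” at behaviour can mean, one can score process command lines for traits often associated with fileless attacks, such as encoded PowerShell payloads or in-memory download-and-execute one-liners. The indicator list and threshold below are my own illustrative choices, not a real detection ruleset:

```python
# Toy behavioural heuristic: count suspicious traits in a command line.
# The token list is illustrative only, not a production detection rule.
SUSPICIOUS_TOKENS = [
    "-encodedcommand",    # base64-encoded PowerShell payload
    "-windowstyle hidden",
    "downloadstring",     # in-memory download-and-execute pattern
    "iex ",               # Invoke-Expression, runs a string as code
    "reg add",            # persistence via the Windows Registry
]

def score_cmdline(cmdline: str) -> int:
    """Count how many suspicious tokens appear in a command line."""
    lowered = cmdline.lower()
    return sum(token in lowered for token in SUSPICIOUS_TOKENS)

def is_suspicious(cmdline: str, threshold: int = 2) -> bool:
    """Flag a command line whose score reaches the threshold."""
    return score_cmdline(cmdline) >= threshold

example = "powershell.exe -WindowStyle Hidden -EncodedCommand SQBFAFgA"
print(score_cmdline(example), is_suspicious(example))  # prints: 2 True
```

The weakness mentioned above is visible even in this sketch: an attacker who uses a legitimate administration command line with no unusual flags scores zero, which is exactly why fileless techniques can slip past behavioural analysis.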

Is it truly impossible to separate VMs running on the same HW?

More and more results appear, like this latest one, on weaknesses and vulnerabilities of virtual machines running on usual (commodity) hardware. The most troublesome results are not due to software vulnerabilities, but rely only on the hardware architecture that supports virtualization. If cryptographic private keys can be stolen and covert channels can be established, evading the current isolation mechanisms provided by hardware and virtualization software (see also my previous posts on Clouds), how much can we trust even IaaS Cloud services?

A Practical Look into GDPR for IT

I have just published here the first article of a short series in which I consider some aspects of the requirements on IT systems and services due to the EU General Data Protection Regulation 2016/679 (GDPR).

I started to write these articles in an effort, first of all for myself, to understand what the GDPR actually requires from IT, which areas of IT can be impacted by it, and how IT can help companies in implementing GDPR compliance. Obviously my main interest is in understanding which IT security measures are most effective in protecting GDPR data and what the interrelation is between IT security and GDPR compliance.

Cloud Security and HW Security Features

Security is hard, as we know quite well by now, but instead of getting easier it seems that, as time goes by, it is getting harder.

Consider the Public Cloud: in a Public Cloud environment, the threat scenario is much more complex than on dedicated, on-premises HW. Assuming that in both cases the initial HW and SW configuration is secure, the threat scenario for services running on dedicated, on-premises HW consists of external attacks, either directly from the network or mediated by the system users, who could (unintentionally) download malware into the service. The threat scenario in a Public Cloud environment must instead also include attacks from other services running on the same HW (other virtual machines, tenants, containers etc.) and attacks from the infrastructure itself running the services (hypervisors on host machines).

Protecting virtual machines and cloud services from other machines and services running on the same HW, and from the hypervisor itself, is hard. New hardware features are needed to effectively separate the guest services from each other and from the host. But even hardware features are not easy to design and implement for these purposes.

For example, Intel has introduced the SGX hardware extensions to create enclaves which manage very sensitive data, like cryptographic keys, in HW. In this paper it has been shown, as initially feared by Rutkowska, that these HW extensions provide security features not only to users but also to attackers, who can exploit them to create practically invisible and undetectable malware. The article actually shows, in a particular scenario, how it is possible to recover, from one enclave, secret RSA signing keys sealed in another enclave. Since not even the hypervisor can see what is inside an enclave, the malware is practically undetectable.

IT security is a delicate balance between many factors (HW, SW, functionality, human behaviour etc.), and the more complex this ecosystem is, the easier it is to find loopholes in it and ways to abuse it.

On SHA1, Software Development and Security

It has been known for a few years that the SHA1 cryptographic hash algorithm is weak, and since 2012 NIST has suggested substituting it with SHA256 or another secure hash algorithm. Just a few days ago the first practical example of this weakness was announced: the first computed SHA1 “collision”.

Since many years have passed since the discovery of the SHA1 weaknesses, and substitutes without known weaknesses are available, one would expect that almost no software uses SHA1 nowadays.

Unfortunately the reality is quite the opposite: many applications depend on SHA1 in critical ways, to the point of crashing badly if they encounter a SHA1 collision. The first to fall to this was the WebKit browser engine source code repository, due to the reliance of Apache SVN on SHA1 (see e.g. here). But Git also depends on SHA1, and one of the most famous adopters of Git is the Linux kernel repository (indeed, Linus Torvalds created Git to manage the Linux kernel source code).
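To see how deeply SHA1 is wired into Git: every object in a repository is named by the SHA1 of a small typed header followed by its content, so two colliding blobs would silently be treated as the same object. A minimal reconstruction of how Git computes a blob’s object id, using only the standard library:

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    """Recompute a Git blob object id: SHA1 over 'blob <size>\\0' + content.

    This is the same value `git hash-object` prints for a file with this
    exact content.
    """
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# The id names the content everywhere in the repository: in trees, in the
# index, in pack files. Swapping the hash function therefore changes every
# object name at once, which is why migrating Git away from SHA1 is hard.
print(git_blob_id(b"hello\n"))
```

Running this on the content `hello\n` reproduces the well-known id `ce013625030ba8dba906f756967f9e9ca394464a`, and the empty blob gives `e69de29bb2d1d6434b8b29ae775ad8c2e48c5391`.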

For some applications, substituting SHA1 with another hash algorithm requires extensively rewriting large parts of the source code. This takes time, expertise and money (probably not in this order) and adds no new features to the application! So unless it is really necessary, or no way is found to keep using SHA1 while avoiding the “collisions”, nobody really considers doing the substitution. (By the way, it seems that there are easy ways of adding checks against the above-mentioned “collisions”, so “sticking plasters” are currently being applied to applications adopting SHA1.)

But if we think about this issue from a “secure software development” point of view, there should not be any problem in substituting SHA1 with another hash algorithm. Indeed, by designing software in a modular way and keeping in mind that cryptographic algorithms have a limited life expectancy, one should plan from the beginning of the software development cycle how to substitute one cryptographic algorithm with another of the same class but “safer” (whatever that means in each case).
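A minimal sketch of such modularity, using Python’s standard `hashlib`: the algorithm is a single configuration point instead of being hard-coded at every call site, and each stored digest is prefixed with the algorithm’s name so that old and new digests can coexist during a migration. The function and constant names are mine, purely illustrative:

```python
import hashlib

# Hypothetical application-level hashing module: the algorithm is one
# configuration point rather than being hard-coded across the code base.
HASH_ALGORITHM = "sha256"  # was "sha1"; migrating is a one-line change

def content_digest(data: bytes, algorithm: str = None) -> str:
    """Hex digest of `data`, prefixed with the algorithm name.

    The prefix keeps stored digests self-describing, so digests written
    before a migration remain verifiable afterwards.
    """
    h = hashlib.new(algorithm or HASH_ALGORITHM)
    h.update(data)
    return f"{h.name}:{h.hexdigest()}"

print(content_digest(b"hello"))                    # current algorithm
print(content_digest(b"hello", algorithm="sha1"))  # legacy verification
```

With this shape, “substituting one cryptographic algorithm with another of the same class” really is one line of configuration plus a re-hashing pass over stored data, rather than a rewrite of the application.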

Obviously this is not yet the case for many applications, which means that we still have quite a bit to learn about how to design and write “secure” software.