On non-malware, fileless attacks

It is spring again, and it is time for the reports on IT security, or in-security, in 2016.

One thing caught my eye this year, and I am not sure if it is a trend, just a coincidence or my own susceptibility: I noticed a comeback of fileless malware, also counter-intuitively called “non-malware”. This is malware which does not install itself on the filesystem of the target machine: instead it loads part of itself into memory (RAM), uses tools of the operating system (PowerShell, WMI etc.) and local applications, and hides parameters and data, for example, in the Windows Registry.

Actually there is nothing really new here: the very old “macro viruses” were of this type. What has changed is that today personal computers and servers run for a very long time (very few people switch their computers completely off daily; usually personal computers are just set to “sleep”), which gives much longer persistence to this type of malware. Obviously fileless malware is more difficult to write and to maintain, but it is also more difficult to identify, that is, it has more chances to escape detection by anti-malware and anti-virus programs. Moreover, even pure behavioural analysis can be fooled by this type of malware, since it can use the standard tools of the machine to perform tasks that are just a little bit out of the ordinary. On the other hand, in case of infection the malware is anyway present on the machine, so anti-malware tools just have to look harder to find it.
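Since anti-malware tools must inspect places like the Registry for hidden payloads, heuristics of this kind can help. The sketch below is entirely my own illustration (the threshold and the base64 test are assumptions, not any product's logic): it flags registry-style string values that look like large encoded blobs, the kind typically fed to PowerShell's -EncodedCommand.

```python
import base64
import binascii
import string

# Characters allowed in standard base64 encoding.
_B64_CHARS = set(string.ascii_letters + string.digits + "+/=")

def looks_like_encoded_payload(value: str, min_length: int = 256) -> bool:
    """Heuristic: flag unusually long strings that decode as valid
    base64, since fileless malware often stashes encoded scripts in
    registry values to be decoded and executed later."""
    if len(value) < min_length or not set(value) <= _B64_CHARS:
        return False
    try:
        base64.b64decode(value, validate=True)
        return True
    except (binascii.Error, ValueError):
        return False

# A benign short value is ignored; a long base64 blob is flagged.
payload = base64.b64encode(b"A" * 300).decode()
print(looks_like_encoded_payload("C:\\Program Files\\App"))  # False
print(looks_like_encoded_payload(payload))                   # True
```

A real scanner would of course combine many such signals (entropy, value size history, which process wrote the key), but even this toy shows that “looking harder” is feasible.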

Is it truly impossible to separate VMs running on the same HW?

More and more results appear, like this last one, on weaknesses and vulnerabilities of virtual machines running on usual (commodity) hardware. The most troublesome results are not due to software vulnerabilities but rely only on the hardware architecture supporting the virtual machines. If cryptographic private keys can be stolen and covert channels can be established by evading the current isolation mechanisms provided by hardware and virtualization software (see also my previous posts on Clouds), how much can we trust IaaS Cloud Services, at the very least?

A Practical Look into GDPR for IT

I have just published here the first article of a short series in which I consider some aspects of the requirements on IT systems and services due to the EU General Data Protection Regulation 2016/679 (GDPR).

I started to write these articles in an effort, first of all for myself, to understand what the GDPR actually requires from IT, which areas of IT can be impacted by it, and how IT can help companies implement GDPR compliance. Obviously my main interest is in understanding which IT security measures are most effective in protecting GDPR data and what the interrelation between IT security and GDPR compliance is.

Cloud Security and HW Security Features

Security is hard, as we know quite well by now, but instead of getting easier it seems that, as time goes by, it is getting harder.

Consider the Public Cloud: in a Public Cloud environment, the threat scenario is much more complex than in a dedicated, on-premises HW one. Assuming that in both cases the initial HW and SW configuration is secure, the threat scenario for services running on dedicated, on-premises HW consists of external attacks, either coming directly from the network or mediated by the system users, who could (unintentionally) download malware into the service. The threat scenario in a Public Cloud environment, instead, must also include attacks from other services running on the same HW (other virtual machines, tenants, containers etc.) and attacks from the infrastructure itself running the services (the hypervisors on the host machines).

Protecting virtual machines and cloud services from other machines and services running on the same HW, and from the hypervisor itself, is hard. New hardware features are needed to effectively separate the guest services from each other and from the host. But even hardware features are not easy to design and implement for these purposes.

For example, Intel has introduced the SGX hardware extensions to create enclaves that manage very sensitive data, like cryptographic keys, in HW. In this paper it has been shown, as initially feared by Rutkowska, that these HW extensions provide security features not only to the users but also to the attackers, who can exploit them to create practically invisible and undetectable malware. The article actually shows, in a particular scenario, how it is possible to recover from one enclave some secret RSA keys used for digital signatures and sealed in another enclave. Since not even the hypervisor can see what is inside an enclave, the malware is practically undetectable.

IT security is a delicate balance between many factors (HW, SW, functionalities, human behaviour etc.), and the more complex this ecosystem is, the easier it is to find loopholes in it and ways to abuse it.

On SHA1, Software Development and Security

It has been known for a few years that the SHA1 cryptographic hash algorithm is weak, and since 2012 NIST has suggested substituting it with SHA256 or other secure hash algorithms. Just a few days ago the first practical example of this weakness, the first computed SHA1 “collision”, was announced.

Since many years have passed since the discovery of the SHA1 weaknesses, and substitutes without known weaknesses are available, one would expect that almost no software uses SHA1 nowadays.

Unfortunately reality is quite the opposite: many applications depend on SHA1 in critical ways, to the point of crashing badly if they encounter a SHA1 collision. The first to fall to this was the WebKit browser engine source code repository, due to the reliance of Apache SVN on SHA1 (see e.g. here). But Git also depends on SHA1, and one of the most famous adopters of Git is the Linux kernel repository (actually Linus Torvalds created Git to manage the Linux kernel source code).
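Git's dependence on SHA1 is structural: every object in a repository is identified by the SHA1 of its content. A short sketch of how a blob id is computed, mirroring the documented behaviour of `git hash-object`:

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    """Compute the id Git assigns to a blob: the SHA1 of a
    'blob <size>\\0' header followed by the content. A SHA1
    collision therefore means two different files receiving
    the same object id."""
    header = b"blob %d\0" % len(content)
    return hashlib.sha1(header + content).hexdigest()

print(git_blob_id(b"hello world\n"))
# 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```

Since the object id is used everywhere (trees, commits, remotes), swapping the hash function out of Git touches the whole object model, which is why the migration is so hard.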

For some applications, substituting SHA1 with another hash algorithm requires extensively rewriting large parts of the source code. This requires time, expertise and money (probably not in this order) and does not add any new features to the application! So unless it is really necessary, or unless no way is found to keep using SHA1 while avoiding the “collisions”, nobody really considers doing the substitution. (By the way, it seems that there are easy ways of adding controls to detect the above-mentioned “collisions”, so “sticking plasters” are currently being applied to applications adopting SHA1.)

But if we think about this issue from a “secure software development” point of view, there should not be any problem in substituting SHA1 with another hash algorithm. Indeed, by designing software in a modular way and keeping in mind that cryptographic algorithms have a limited life expectancy, one should plan from the beginning of the software development cycle how to substitute one cryptographic algorithm with another of the same class but “safer” (whatever that means in each case).
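Such a modular design can be sketched quite simply. The functions and the digest tag format below are my own illustration, not a standard API: using Python's hashlib, the algorithm becomes a single configuration point, and recording the algorithm name next to each digest keeps old digests verifiable after a migration.

```python
import hashlib

# Single point of configuration: changing the algorithm here is the
# only code change needed when a hash function must be retired.
HASH_ALGORITHM = "sha256"  # was "sha1" before its weaknesses became practical

def content_digest(data: bytes, algorithm: str = HASH_ALGORITHM) -> str:
    """Return a hex digest tagged with its algorithm name, so that
    digests produced before a migration remain verifiable."""
    h = hashlib.new(algorithm)
    h.update(data)
    return f"{algorithm}:{h.hexdigest()}"

def verify_digest(data: bytes, tagged_digest: str) -> bool:
    """Verify data against a digest, using the algorithm recorded
    in the digest itself."""
    algorithm, _, expected = tagged_digest.partition(":")
    h = hashlib.new(algorithm)
    h.update(data)
    return h.hexdigest() == expected

d = content_digest(b"hello")
print(d.startswith("sha256:"))      # True
print(verify_digest(b"hello", d))   # True
```

With this structure, digests created under the old algorithm can still be verified (e.g. `verify_digest(data, "sha1:…")`), while all new digests use the current one; the application never hard-codes a hash function in its logic.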

Obviously this is not yet the case for many applications, which means that we still have quite a bit to learn about how to design and write “secure” software.

Hardware Vulnerabilities (Again), Cloud and Mobile Security

Have a very Happy New Year!

… and to start 2017 on a great note, I write again about Hardware Vulnerabilities with comments on Cloud and Mobile Security.

The opportunity for this blog entry has been provided to me by the talk “What could possibly go wrong with <insert x86 instruction here>? Side effects include side-channel attacks and bypassing kernel ASLR” by Clémentine Maurice and Moritz Lipp at Chaos Computer Club 2016, which I suggest watching (it lasts 50 minutes and it is not really technical despite its title).

A super-short summary of the talk is that it is possible to mount very effective side-channel (in particular timing-channel) attacks on practically any modern operating system, attacks which make it possible to exfiltrate data, open communication channels and spy on activities like keyboard input. All of this uses only legitimate commands and OS facilities, but in some innovative ways.
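The cache- and timing-based techniques in the talk depend on hardware details, but the basic idea of a timing channel can be illustrated with a much simpler, software-only toy (my own example, not from the talk): a comparison routine whose running time depends on secret data leaks information, while a constant-time comparison does not.

```python
import hmac

def naive_equals(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: the running time grows with the number
    of leading bytes that match, so an attacker who can measure it
    can guess a secret one byte at a time."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equals(a: bytes, b: bytes) -> bool:
    """hmac.compare_digest examines every byte regardless of where
    the first mismatch is, closing this particular timing channel."""
    return hmac.compare_digest(a, b)

secret = b"s3cret-token"
print(naive_equals(secret, b"s3cret-token"))          # True
print(constant_time_equals(secret, b"wrong-token!"))  # False
```

The attacks in the talk are far more sophisticated (they time cache and memory behaviour rather than a comparison loop), but the principle is the same: legitimate operations whose duration depends on secrets become a channel.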

The reason for which these attacks are possible is that the hardware does not prevent them; actually some hardware features, added to improve performance, make these attacks easier or even possible (see also my previous post on Hardware Vulnerabilities about this). So from the security point of view these hardware features should be considered as vulnerabilities.

What is it possible to do with these techniques? Considering the Cloud, it is possible to monitor the activities of another virtual machine running on the same hardware, extract secret cryptographic keys (though this depends on how the algorithms and protocols are implemented), establish hidden communication channels etc.

Similarly for Mobile, it is possible to have a totally legitimate app monitor the keyboard activity, or two apps establish a hidden communication channel so that one reads some data and the other sends it to a remote destination, all without violating any security rule (actually with each one having very limited privileges and a restricted setup).

Moreover it seems easy to embed this kind of attack in legitimate applications, and current anti-virus programs seem to lack the capabilities needed to intercept them. Indeed the activities performed to implement these attacks look almost identical to the ones performed by any program, and it seems that only a particular kind of performance monitoring could discover them.


On Denial of Service attacks and Hardware vulnerabilities

Denial of Service attacks are growing and getting the attention of the news: some of the latest incidents are KrebsOnSecurity, OVH and Dyn. The economics behind these attacks favour the attackers: today it costs little to mount a devastating DDoS attack able to block even a sizable part of the Internet, thanks to all the botnets of unsafe machines, from PCs to routers and IoT devices. Defence can be much more expensive than attack, and in some cases even more expensive than the ransom.

How did we get into this mess? This trend is not good at all: these attacks could threaten the Internet itself, even if this would not be in the interest of the attackers (State-sponsored ones excluded).

Fixing the current situation will be extremely expensive: many devices cannot be “fixed” but simply need to be replaced. But before doing that, we need to build “secure” devices and to design networks and protocols that support them and are somehow interoperable with the current ones. How? And when?

At the same time, a new trend is emerging: security vulnerabilities in Hardware.

The Rowhammer bug and its recent implementations in virtual machines and Android phones (DRAMMER), or the ASLR vulnerability, can open new scenarios. Hardware must provide the foundation of the security of all IT processing: data should be protected, accesses should be controlled etc. But we are discovering that the hardware we have been relying upon for the development of IT in the last 20 years could have reached its limits. New security features are needed (see for example this), and vulnerabilities are being discovered that must be managed, and it will not always be possible to fix them in software.

On the Security of Modern Cryptography

The security of modern cryptography is based on number-theoretic computations so hard that the problems are practically impossible for attackers to solve. In practice this means that approaches and algorithms to crack the cryptographic algorithms are known but with the current best technologies it would take too many years to complete an attack.

But what if a shortcut is found at least in some particular cases?

This is exactly what some researchers [article, arstechnica] have just found for the Diffie-Hellman (DH) algorithm with 1024-bit keys, an algorithm which is one of the pillars of the security of Web transactions, among many other uses. The researchers have shown that for DH with 1024-bit keys there exist some parameters (prime moduli) that make it possible, with current technologies, to compute the secret encryption keys in a short time. In other words, some parameters adopted in DH-1024 can contain invisible trapdoors. The only ways to use DH securely today seem to be:

  • to know how the parameters have been generated and to be sure that they do not allow for any “trapdoor”;
  • or to use DH with 2048-bit or larger keys.
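The DH exchange itself is easy to sketch. The toy parameters below (p = 23, g = 5) are purely illustrative and my own choice; real deployments need a well-vetted safe prime of at least 2048 bits, and the whole point of the research above is that the security rests entirely on how p was chosen.

```python
import secrets

# Toy parameters for illustration only: p is a small safe prime
# (p = 2q + 1 with q = 11 prime) and g a generator modulo p.
p, g = 23, 5

# Each party picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1          # Alice's private key
b = secrets.randbelow(p - 2) + 1          # Bob's private key
A = pow(g, a, p)                          # Alice's public value
B = pow(g, b, p)                          # Bob's public value

# Both sides derive the same shared secret g^(a*b) mod p; an
# eavesdropper sees only p, g, A and B.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
print(shared_alice == shared_bob)         # True
```

With a trapdoored 1024-bit p, the discrete logarithm that protects a and b becomes tractable for whoever crafted the modulus, which is why knowing the provenance of the parameters (or moving to 2048 bits) matters so much.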

What does this teach us about the security that cryptography provides to everyday IT?

How should we implement and manage cryptography within IT security?

Is cryptography joining the “zero days => vulnerabilities => patch management” life-cycle which has become one of the landmarks of current IT security?

Yahoo Breach and GDPR

The Yahoo breach (see here for example) is almost yesterday's news (today we are talking about DDoS: in 8 days the record went from 363Gbps to 620Gbps, and finally to almost 1Tbps, scary!), but I am now trying to picture such an event in view of the forthcoming European GDPR. My ideas are not too clear about what the consequences of the new Regulation (not of the data breach) could be. I expect that in the next year, before the Regulation goes into action, we will get a better understanding of its consequences.

The EU’s Network and Information Security (NIS) Directive

A few days ago the European Parliament adopted the “Network and Information Security (NIS)” Directive (PE-CONS 26/16 Lex 1683). Together with the recently approved “General Data Protection Regulation”, it could provide the EU marketplace with strong incentives to dramatically enlarge and improve the approach to IT and/or Cyber Security.

For both regulations the timeframe is probably long, at least 2 years and most probably 4, so we should understand the effects of these new regulations likely by 2020. Still, the entire ecosystem of IT and/or Cyber Security can only benefit from this interest “from the top”.