We know that IoT devices are a real challenge for IT security, and recently researchers at Check Point (see here, here and here for more details) have shown how to take over a WiFi network by remotely exploiting a vulnerability in some smart light bulbs.
It was two years ago that we learned about Spectre and Meltdown, the first speculative attacks on CPUs, which exploit hardware “bugs” and in particular the speculative and out-of-order execution features of modern CPUs. In the last two years a long list of attacks has followed these initial two, and CacheOut is the latest.
CacheOut (here is its website and here an article about it) builds on and improves over previous attacks, bypassing countermeasures such as the microcode updates provided by CPU makers and Operating System patches. On Intel CPUs, the CacheOut attack allows an attacker to read data from other processes, including secret encryption keys, data belonging to other Virtual Machines, and the contents of Intel's secure SGX enclaves.
Beyond the practical consequences of this attack and the availability and effectiveness of software countermeasures, it is important to remember that the only definitive solution to this class of attacks is the development and adoption of newly designed CPU hardware. This will probably still take a few years, and in the meantime we should adopt countermeasures based on risk evaluation, for example isolating critical data and processes on dedicated CPUs or entire computers.
Can we trust the information we find online?
The general answer is NO, but we all behave as if it were YES.
Personally I see examples of this even when I look online for simple information like train schedules or traffic conditions. Has it ever happened to you to be warned of a major traffic jam ahead and then find no traffic whatsoever? Did everybody hear the news and auto-magically disappear from the road?
At a very high level, we can consider two ways in which untrustworthy (misleading or plainly wrong) news is posted online:
- unintentional mistakes due to carelessness, untrustworthy sources, or even technical errors in collecting the data;
- intentionally fake information, i.e. “Fake News”, distributed for a purpose that is usually neither moral nor legal, and to someone's particular advantage.
The first goes in the “mistakes” category and hopefully sooner or later will be fixed, but the second goes in the “intentional attacks” category. Unfortunately, abusing people's trust and conditioning their opinions and actions with “Fake News” is becoming more and more common (just read the news itself!), to the point that some of these techniques seem to have leaked into everyday advertising and political campaigning.
Thinking about this brought back to my mind the “Information Operations Kill Chain”, which I read about some time ago on Bruce Schneier's blog here and which I suggest reading and considering.
PS. I am not aware of further developments on this, but if there are, please point them out.
A few days ago a new attack was made public which makes it easier to forge hashes (or “message digests”) computed with SHA1 (see for example this article for more details).
This new collision attack makes it faster and less expensive to create two documents with the same digital signature using SHA1 or, given a digitally signed document whose signature uses SHA1 as the hash algorithm, to create a second, different document which has the same digital signature.
It has been known for a few years that SHA1 is broken and that it should not be used for digital signatures or, in general, for security purposes (NIST actually suggested abandoning SHA1 as early as 2012). But as usual, in the real world there are still many, too many legacy applications that continue to use SHA1, one example being legacy PGP keys and old PGP digital signatures.
This new result should be at least a useful reminder to all of us to check whether we are still using SHA1 in some applications and, if so, to finally replace it.
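Why does a hash collision break a digital signature? Because in the usual hash-then-sign construction, only the digest of the document enters the signature computation. A minimal sketch, using HMAC as a stand-in for a real signature algorithm (the key and documents here are made up):

```python
import hashlib
import hmac

def sign(key: bytes, document: bytes) -> str:
    # Hash-then-sign: only the SHA1 digest of the document is signed,
    # not the document itself.
    digest = hashlib.sha1(document).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify(key: bytes, document: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(key, document), signature)

key = b"signing-key"  # hypothetical key
original = b"I owe you 10 EUR"
sig = sign(key, original)

# If an attacker produces a second document with the same SHA1 digest
# (which the collision attacks make practical), verify() accepts it with
# the original signature: sign() never sees anything but the digest.
print(verify(key, original, sig))  # True
```

The takeaway is that the signature scheme can be perfectly sound and still be defeated by a weak hash underneath it.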
We know very well that years ago we lost the concept of a (security) network perimeter. Still, too often we approach network security with the underlying idea of perimeter defenses: inside the perimeter all is OK, and the “firewall” protects us from the outside world.
But in the current world of ever-growing Cloud / Software as a Service (SaaS) offerings and Software Defined Networking (SDN), it becomes increasingly impossible to manage IT security from the center of the traditional network and deploy protections at the edges. We need to manage the security of traditional and legacy applications, cloud applications, and internal and mobile users, all at the same time and with a single approach.
From the network security point of view, this requires looking at our network as a (software defined) mesh of connections, composed underneath of different backbones, trunks, local networks and VPNs. Security, access and privileges should be identity-driven and globally distributed across the network. This implies that the preferred architecture to implement and govern such a secure network should be cloud-based, if not cloud-native.
If I understand correctly, this is, at least in part, the idea behind the most recent approach to Network Security proposed by Gartner, called “Secure Access Service Edge” (SASE; see here and here for more information).
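To make “identity-driven” concrete, here is a toy sketch of an access decision that depends only on the authenticated identity and a device-posture signal, never on the network location of the request. All names and the policy table are hypothetical, not taken from any SASE product:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str             # authenticated identity, not an IP address
    device_managed: bool  # device posture signal
    resource: str

# Hypothetical policy: which identities may reach which resources.
POLICY = {
    "alice": {"crm", "wiki"},
    "bob": {"wiki"},
}

def allow(req: Request) -> bool:
    # The decision never consults the source network: an office user and
    # a mobile user on hotel WiFi are evaluated by the exact same rule.
    return req.device_managed and req.resource in POLICY.get(req.user, set())

print(allow(Request("alice", True, "crm")))      # True
print(allow(Request("alice", True, "payroll")))  # False
```

In a perimeter model the same decision would hinge on the source subnet; here the perimeter has effectively moved to the identity.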
Though I do not own one nor have I tried one, privacy and VPN routers like InvizBox, Anonabox, NordVPN, TorGuard VPN, and many others from well-known brands (see here for example for a review) are becoming more common, easier to use also when travelling, and loaded with features.
They typically allow you to easily create private or commercial VPNs, establish Tor circuits, and apply privacy filters to internet traffic. They are probably not as tight as Tails, but I expect them to be user friendly.
Though I have never felt the need for a commercial VPN service, I would consider using a security and privacy internet router which I could carry with me and easily activate even when travelling.
In principle, ransomware is among the simplest malware possible: in its most basic form it does not require zero-days or other vulnerabilities, erroneous security configurations, or the absence of advanced security measures. It is enough to execute on the target machine some code, with the user's privileges, which encrypts all of the user's data.
All of us continuously download data to our PCs, smartphones, etc. by “surfing” the Web, receiving emails, interacting on social media, and so on. So spam campaigns, malvertising and drive-by downloads can easily deliver ransomware to any PC.
Whereas anti-malware, and in particular anti-ransomware, software is often effective against it, the common security mantra of “patch, patch and patch again!” is not necessarily that effective, since ransomware in principle does not need to exploit unpatched vulnerabilities at all.
But most importantly: what is the target of ransomware attacks?
Ransomware attacks remind us that computers primarily manage information, and the main purpose of the attack is to take this information hostage. What good is a computer system if all the information it manages is removed and we are left with only the Operating System and the applications? Without a valid backup of the users' information, most of the value of a computer system is lost, and thus the ransom is paid…
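Since a valid backup is what ultimately denies the attacker their leverage, it is worth verifying backups rather than merely taking them. A minimal sketch that compares checksums of originals against their backup copies (the directory layout is a hypothetical mirror-style backup):

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    # Hash the file in chunks so large files do not exhaust memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source_dir: Path, backup_dir: Path) -> list:
    """Return the relative paths that are missing or differ in the backup."""
    problems = []
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source_dir)
        dst = backup_dir / rel
        if not dst.is_file() or file_digest(src) != file_digest(dst):
            problems.append(str(rel))
    return problems
```

An empty result means every source file has an identical copy in the backup; anything listed would be lost (or already encrypted) if the originals were taken hostage. Of course the backup itself must be kept offline or otherwise out of reach of the same user privileges the ransomware runs with.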
The International Committee of the Red Cross (ICRC) has published an interesting report on the humanitarian consequences of cyber attacks; it can be downloaded here (PDF) and a short summary can be found here.
It is really difficult to grasp how pervasive Information Technology (IT) and the Internet are today, and what the consequences of cyber attacks can be on everyday life.
This recent paper, “From Collisions to Chosen-Prefix Collisions – Application to Full SHA-1” by G. Leurent and T. Peyrin, puts another nail in the coffin of SHA1. The authors present a chosen-prefix collision attack on SHA1 which allows client impersonation in TLS 1.2 and peer impersonation in IKEv2, with an expected cost between 1.2 and 7 million US$. The authors expect that it will soon be possible to bring the cost down to 100,000 US$ per collision.
As far as CA certificates are concerned, the attack allows, at the same cost, the creation of a rogue CA and of fake certificates whose signatures use the SHA1 hash, but only if the genuine CA does not randomize the serial number field in its certificates.
It has been known for almost 15 years that SHA1 is not secure: NIST deprecated it in 2011, and it should not have been used from 2013 onwards, being replaced with SHA2 or SHA3. By 2017 all browsers should have removed support for SHA1, but the problem is always with legacy applications that still use it: how many of them are still out there?
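The serial-number randomization mentioned above is cheap to implement: a CA just needs to draw the serial from a cryptographically secure random source instead of a counter, so an attacker cannot predict the bytes that will end up being signed. A minimal sketch, assuming the common practice of 159 random bits (RFC 5280 requires a positive integer of at most 20 octets):

```python
import secrets

def random_serial_number() -> int:
    # 159 random bits keep the DER-encoded value positive and within the
    # 20-octet limit of RFC 5280, while providing far more than the
    # minimum unpredictability expected of CA-issued serial numbers.
    # The +1 rules out the (astronomically unlikely) zero value.
    return secrets.randbits(159) + 1
```

With an unpredictable serial sitting before the attacker-influenced fields, a chosen-prefix collision against the to-be-signed certificate cannot be prepared in advance.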
The US Cybersecurity and Infrastructure Security Agency (CISA) has recently revised its guidelines (actually a Binding Operational Directive for US Federal Agencies) on patching (see the Directive here and a comment here).
Now vulnerabilities rated “critical” (according to CVSS version 2) must be remediated within 15 days (previously it was 30 days), and “high” vulnerabilities within 30 days, counting from the date of initial detection by CISA's weekly scanning.
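These deadlines are straightforward to operationalize in vulnerability tracking. A minimal sketch computing the remediation due date from the detection date (the severity labels and day counts are those stated above; everything else is illustrative):

```python
from datetime import date, timedelta

# Remediation windows from the CISA Directive (CVSS v2 severity ratings).
REMEDIATION_DAYS = {"critical": 15, "high": 30}

def remediation_due(severity: str, detected: date) -> date:
    # The clock starts at initial detection by CISA's weekly scans.
    return detected + timedelta(days=REMEDIATION_DAYS[severity.lower()])

print(remediation_due("critical", date(2019, 5, 1)))  # 2019-05-16
```

Feeding each scan finding through such a rule gives a concrete per-vulnerability deadline against which patching or compensating controls can be scheduled.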
Given the short time between detection and remediation, applying patches in time is going to be difficult, due to the possible lack of patches from the vendors and the time needed in any case to test them in each IT environment.
This implies that processes must be in place to design, test and deploy temporary countermeasures that reduce the risks of these vulnerabilities to an acceptable level. And these processes must be fast: they should take at most a few days.