What’s happening in Cyber/IT Security?

Comparing currently reported cyber/IT security threats, attacks and incidents to those of a few years ago, it seems to me that something has clearly changed (I must warn that these conclusions are not based on statistics but on reading everyday bulletins and news).

On one side, security surely has improved: vulnerabilities are reported and fixed, patches are applied (at least more often), and security policies, standards and practices are making a difference. Still, managing passwords and properly configuring systems and services exposed on the Internet remain very difficult tasks, too often performed without the required depth.

But security has improved, which also means that attackers have been moving to easier and more lucrative approaches, which mostly have to do with the “human interface”. In other words: fraud.

The first example is ransomware: the attacker gains access to the victim’s systems, copies vast amounts of data, then encrypts or removes it, and finally demands a ransom not only to return the data but also to avoid making it public on the Internet. Since everybody is getting better at making backups, the important point here is the threat of “making it public on the Internet”: the ransom is demanded more to prevent sensitive data from being published than to restore the systems.

The second example is Targeted Phishing attacks, Business Email Compromise and similar scams in which the attacker impersonates a well-known or important person by writing emails and letters, making phone calls etc., to convince typically a clerk, but in some cases also a manager, to send a large amount of money to the wrong bank account.

Neither of these two types of attacks is new, but now they are filling the news daily. Even if cyber/IT security can still improve tremendously, there have been, and there are, notable security improvements, which means that attacks are more and more often aimed at the weakest link: the human user.

Information, Science, Common Sense and Fake News

These days we drown in COVID-19 news, some reliable, some less, some … unclear.

One striking point is the difficulty of communicating scientific facts, whenever they are available, and how they get twisted, misinterpreted and even manipulated by almost everyone: from top leaders, politicians, journalists, communicators, “evangelists” and scientists all the way down to (hopefully not too much) myself.

Having a scientific background, I have been struggling to make sense of the numbers we read in the news or on social media, and hear on television or Internet streaming. A few times I tried to make sense of them scientifically, always ending up with “not enough data” or “source and meaning of data unclear”. Numbers do not lie, but only if you know what they mean.

I tried to make sense of the information submerging us using “common sense” (whatever that means), and understood even less. Actually I realised that “common sense” often leads us to give more credibility to Fake News than to authentic information.

This made me think back to the time when I was at university doing physics. It is well known that in the scientific research world there are at least two major kinds of communication issues:

  1. communicating with fellow researchers
  2. communicating with the public

and the second is a much bigger issue than the first.

Communication among fellow researchers has to be extremely clear and detailed: all data must be presented and clearly defined, otherwise misunderstandings and misinterpretations are the inevitable consequence. It is possible that a great number of the different expert “opinions” we hear every day are indeed based on this type of incomplete communication.

But these days this gets worse, since these “opinions” are rushed to the public, who understand something else entirely by applying common sense instead of the appropriate scientific approach. I had personal experience of this when friends asked me to explain some concepts of elementary particle physics: describing these concepts through analogies and loosely defined ideas often led my friends to conclusions which had nothing to do with physics, or were wrong or plainly absurd. Obviously it was not my friends’ fault but mine: all of us interpret the information we receive with the logical tools we have, even when none of them is really appropriate.

So where does this bring us concerning COVID-19? Scientific information should be clearly and precisely presented to the public, and in particular to our leaders and politicians, in a form in which the data can be logically understood by the recipient. This is difficult, but particularly important for information presented to the public by the news and by all information channels in times of crisis such as the one we are experiencing right now. Otherwise misunderstanding, misinterpretation and manipulation of information become too easy and too common. Unfortunately, recently this seems to have happened all too often.

Cryptography for a COVID-19 Contact Tracing App by Apple and Google

Apple and Google (in alphabetical order) have released a draft of a cryptographic protocol named Contact Tracing (here the specification) for a new privacy-preserving Bluetooth protocol to support COVID-19 contact tracing. As far as I understand (please correct me if I have misunderstood something), it should work as follows:

  • Bluetooth LE on the devices is extended with this new protocol
  • A service provider distributes an App which makes use of the protocol and communicates with a server managed by the service provider or a third party
  • Users install the App on their devices and keep Bluetooth on
  • When two devices with the App installed are nearby, they exchange some locally generated cryptographic key material called Rolling Proximity Identifiers: these identifiers are privacy preserving, that is, from an identifier alone it is not possible to identify the device which originated it; all Rolling Proximity Identifiers are stored only locally on the devices themselves (both originator and receiver)
  • When a user tests positive for COVID-19, she or he enters this information in the App, which then generates a set of cryptographic key material called Diagnosis Keys corresponding to the days in which the user could already have been infected; the App then sends the Diagnosis Keys to the server, which distributes them to all other devices on which the App is running
  • When an App receives some Diagnosis Keys from the server, it is able to compute a set of Rolling Proximity Identifiers and to check if at least one is present in the local storage; if there is a match, the information derived is that on a certain day, in a 10-minute time interval, the user of the App has been in proximity to a person who later tested positive for COVID-19.

Obviously a privacy prerequisite to all this is that neither the server nor the App manages or stores any other information or metadata about the users and the devices on which the App runs.
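To make the matching step above more concrete, here is a minimal Python sketch of the idea. Note that this is my own simplification, not the actual derivation in the specification: the key sizes, the "CT-RPI" label and the use of truncated HMAC-SHA256 are assumptions for illustration only.

```python
import hashlib
import hmac
import os

def rolling_proximity_identifier(daily_key: bytes, interval: int) -> bytes:
    """Derive a 16-byte identifier for one 10-minute interval (simplified).

    One-way derivation: the identifier reveals nothing about the key,
    but anyone holding the key can re-derive every identifier.
    """
    msg = b"CT-RPI" + interval.to_bytes(4, "little")
    return hmac.new(daily_key, msg, hashlib.sha256).digest()[:16]

# A receiving device stores the identifiers it hears over Bluetooth.
heard = set()
alice_daily_key = os.urandom(32)   # normally never leaves Alice's device
heard.add(rolling_proximity_identifier(alice_daily_key, 42))

# Later Alice tests positive and uploads her Diagnosis Key (the daily key).
# Every device re-derives all 144 ten-minute identifiers for that day
# and checks them against its local storage.
matches = [i for i in range(144)
           if rolling_proximity_identifier(alice_daily_key, i) in heard]
print(matches)  # → [42]
```

The privacy property comes from the one-way derivation: observers can collect identifiers, but only the publication of a Diagnosis Key makes them linkable, and only for the days covered by that key.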

Perception of IT Security in Times of Crisis

What happens to IT Security in times of crisis? The change of behaviour and the rush to provide new services can be detrimental to IT Security. We are witnessing it these days. From a broad perspective, there are at least two major sides to it: IT service providers and (new) users.

In a crisis like COVID-19, which we are experiencing right now, IT service providers are asked to provide new services in extremely short time and to improve the already existing ones. But in IT, rush does not usually go well with security. Take as an example what has just happened in Italy: the government has charged the state pension fund (INPS) with providing subsidies for COVID-19 to no less than 5 million people (INPS usually provides pensions to more than 16 million people in Italy). Obviously, due to the full lock-down of the country, the procedure has to be done online. So the new IT services to request and manage the subsidies went online and, according to the news, an epic IT security failure followed: availability and, in part, integrity and confidentiality were violated.

Is it possible to manage, in IT, extremely tight and aggressive schedules and security at the same time? I believe so, but only if security is embedded in everything we do in IT.

But I believe that IT security, at least for IT as it is nowadays, also requires the users to be aware of it and to behave accordingly. Due to COVID-19, we have all been required or strongly advised to move many activities to online services, from work to school, shopping etc. But the physical world has very different characteristics from the virtual Internet world.

For example, consider the case of a small local pro-bono association whose members used to meet in person periodically: access to the room is free, and there is freedom of speech and contribution to the association. Now think about moving these meetings to an audio/video conference on the Internet, publicly announced, with free entrance and free access for all participants to audio, video, chat etc.: is this the same thing?

Definitely not.

The rules and behaviours which apply to a physical meeting of a small group of people in a physical room, announced with paper leaflets distributed on the streets, surely do not apply to an audio/video/chat conference on the Internet. What can happen if you assume they do? Instead of the usual 20 participants, hundreds or thousands of people could show up, and some could abuse the free access to audio/video/chat etc. for whichever purpose, including drugs, malware, pornography etc.

Is this a failure of the IT technology, of the service provider or of the lack of security awareness of the (new) user?

How long will it take to humanity to really comprehend the difference between the physical and the virtual world?

Hacking Satellites

Not a feat for everybody, but hacking satellites, either by connecting directly to them or by intruding on the ground computers that manage them, could have dire consequences: from shutting them down, to burning them up in space, spiralling them to the ground, or turning them into ballistic weapons.

Even if the news has not been fully confirmed and details are sketchy, it seems that some incidents have already happened, starting from 1998: see the ROSAT satellite history, and more recent events as described here, here, here and here for a recent review.

Independently from the confirmation of the incidents, remotely controlling satellites, in particular small ones also built with off-the-shelf / commodity components, coupled with the difficulty (if not impossibility) of applying security patches, can make their “Cybersecurity” risks quite relevant, and effective counter-measures quite difficult. On the other side, given the costs of building and launching a satellite into space, it is likely that these “Cybersecurity” risks are considered and effectively managed in the planning and development phases of a satellite’s life-cycle, or at least so we hope.

CacheOut: another speculative execution attack to CPUs

It was 2 years ago that we learned about Spectre and Meltdown, the first speculative attacks on CPUs which exploit hardware “bugs”, in particular the speculative and out-of-order execution features of modern CPUs. In the last 2 years a long list of attacks has followed these initial two, and CacheOut is the latest one.

CacheOut (here its own website and here an article about it) builds on and improves over previous attacks, bypassing countermeasures like the microcode updates provided by CPU makers and Operating System patches. On Intel CPUs, the CacheOut attack allows reading data from other processes, including secret encryption keys, data from other Virtual Machines and the contents of Intel’s secure SGX enclave.

Besides the actual consequences of this attack and the availability and effectiveness of software countermeasures, it is important to remember that the only final solution to this class of attacks is the development and adoption of new, redesigned CPU hardware. This will probably still take a few years, and in the meantime we should adopt countermeasures based on risk evaluation, so as to isolate critical data and processes on dedicated CPUs or entire computers.
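As a small illustration of the last point, processes handling critical data can be pinned to a dedicated CPU core so that they do not share execution resources with untrusted workloads. This is only a sketch: the choice of core 0 is arbitrary, the API is Linux-specific, and a real deployment would also have to keep other workloads off the reserved core (e.g. via cgroups/cpusets) and disable its SMT sibling.

```python
import os

# Pin the current process to a single, dedicated core (Linux-specific API).
# Sensitive computations then never share a core -- and its caches and
# internal buffers -- with other, potentially hostile, processes.
if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})       # pin current process (pid 0) to core 0
    print(os.sched_getaffinity(0))     # on Linux prints {0}
```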

Trust on online information, Fake News and the Information Operations Kill Chain

Can we trust the information we find online?

The general answer is NO, but we all behave as if it were YES.

Personally I see examples of it even when I look online for simple information like train schedules or traffic conditions. Has it ever happened to you to be warned of a major traffic jam ahead and then find no traffic whatsoever? Did everybody hear the news and auto-magically disappear from the road?

At a very high level, we can consider two ways in which untrustworthy (misleading or plainly wrong) news is posted online:

  1. non-intentional or unwitting mistakes due to carelessness, unreliable sources, or even technical errors in collecting the data;
  2. intentional fake information, e.g. “Fake News”, distributed for a purpose, usually neither moral nor legal, and to someone’s particular advantage.

The first goes into the “mistakes” category, which hopefully sooner or later will be fixed, but the second goes into the “intentional attacks” category. Unfortunately, abusing people’s trust and conditioning their opinions and actions with “Fake News” is becoming more and more common (just read the news itself!), to the point that some of these techniques seem to have leaked into everyday advertising and political campaigning.

Thinking about this, the “Information Operations Kill Chain”, which I read some time ago on Bruce Schneier’s blog here, came back to my mind, and I suggest reading and considering it.

PS. I am not aware of further developments on this, but if there are, please point them out.

Another Nail in the Coffin of SHA1

A few days ago, a new attack was made public which makes it easier to forge hashes (or “message digests”) computed with SHA1 (see for example this article for more details).

This new collision attack makes it faster and less expensive to create two documents with the same digital signature using SHA1, or, given a digitally signed document whose signature uses SHA1 as the hash algorithm, to create a different second document which has the same digital signature.

It has been known for a few years that SHA1 is broken and that it should not be used for digital signatures or, in general, for security purposes (NIST actually suggested stopping the use of SHA1 already in 2012). But as usual, in the real world there are still many, too many legacy applications which continue to use SHA1 today, one example being legacy PGP keys and old PGP digital signatures.

This new result should be at least a useful reminder for all of us to check whether we are still using SHA1 in some applications and, if so, finally replace it.
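As a trivial starting point for such a check, the snippet below contrasts the legacy and the replacement digest in Python; the input data and the suggested grep pattern are of course only illustrative:

```python
import hashlib

data = b"a digitally signed document"

# Legacy: SHA1 produces a 160-bit digest and is broken for collision
# resistance -- do not use it for any new digital signature.
legacy = hashlib.sha1(data).hexdigest()

# Replacement: SHA-256 (or another SHA-2 / SHA-3 variant).
modern = hashlib.sha256(data).hexdigest()

print(len(legacy), len(modern))  # → 40 64 (hex characters)

# A crude first pass to spot lingering SHA1 use in a code base:
#   grep -rniE "sha-?1" src/
```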