On Trust and Security

For some months now we have been reading and discussing the Snowden documents. Most of the information in these NSA documents is not new, since we have discussed the possibility of similar facts at length on many occasions. For example, years ago the modifications introduced by the NSA in the cryptographic algorithm DES initially led to suspicions: were they backdoors or algorithm improvements? (In that case it later turned out that they were improvements.)

The real difference is that now we know that, in many recent cases, our worst suspicions were correct.

So what can or should we do? This is a very interesting and hard question, since the main issue, in my opinion, is that we are mostly dealing with the possible introduction of backdoors in hardware and software, for example to weaken cryptographic algorithms. As normal users, even technically savvy ones, we personally have neither the competence nor the resources to verify that all the hardware and software we use, from mobile phones to supercomputers, are free of backdoors. So we have to trust third parties, in particular hardware and software makers, that hardware, operating systems, applications, libraries (in particular cryptographic libraries) and so on do not contain hidden functionalities or backdoors.

This is not new: we trust car, train and airplane makers with our lives, so we should also trust hardware and software makers with our information, should we not?

Is our trust in today's ICT companies well-founded?

More Trouble for SSL/TLS

Besides CRIME, BEAST and Lucky13, two new attacks on SSL/TLS have just been announced. One attack exploits weaknesses in the RC4 cipher, which is used by many major websites, Gmail among them; cryptographers had suspected this possibility for a long time, and now they have worked out how to exploit it. The second attack, called TIME, is a new timing attack, in part a refinement of CRIME.

As of today, neither attack is practical, but they could become real threats in the future. Notice that many websites adopted RC4 mostly to withstand the BEAST attack. Now that Lucky13 targets the CBC-mode alternatives and this new attack targets RC4 itself, it is not clear what to do in practice.
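To give an idea of the kind of weakness involved: RC4's keystream is known to be statistically biased in its first bytes, and the new attack builds on biases of this sort. The following is only an illustrative sketch (not the attack from the paper) that checks empirically the classic bias of the second keystream byte towards zero:

    # Empirical check of a well-known RC4 keystream bias: the second output
    # byte is roughly twice as likely to be 0x00 as a uniform byte would be.
    # This only illustrates the kind of statistical weakness such attacks
    # build on; it is not a reproduction of the published attack.
    import os
    from collections import Counter

    def rc4_keystream(key, n):
        """Generate n keystream bytes from an RC4 key (KSA + PRGA)."""
        s = list(range(256))
        j = 0
        for i in range(256):                      # key-scheduling algorithm
            j = (j + s[i] + key[i % len(key)]) % 256
            s[i], s[j] = s[j], s[i]
        i = j = 0
        out = []
        for _ in range(n):                        # pseudo-random generation
            i = (i + 1) % 256
            j = (j + s[i]) % 256
            s[i], s[j] = s[j], s[i]
            out.append(s[(s[i] + s[j]) % 256])
        return out

    counts = Counter()
    trials = 100_000
    for _ in range(trials):                       # fresh random key each time,
        z2 = rc4_keystream(os.urandom(16), 2)[1]  # look at keystream byte 2
        counts[z2] += 1

    print(f"P(Z2 = 0) = {counts[0] / trials:.4f}  (uniform would give {1/256:.4f})")

Run over enough random keys, the measured probability comes out at roughly twice the 1/256 one would expect from an unbiased stream; the real attack combines many such biases over many encryptions of the same plaintext.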

Of course, we should seriously consider what to do with SSL/TLS, and even more with the CA model, but it will take a long time, and I do not see enough motivation or incentive among the big internet players to change the current situation.

You can find a summary description of these new attacks, for example, in this article by Ars Technica.

Decrypting your Frozen Mobile Phone

The idea is not new, but the implementation is new, interesting and eye-catching. Tilo Müller and Michael Spreitzenbarth of FAU managed to implement FROST: “Forensic Recovery of Scrambled Telephones”.

The story in brief goes like this: Android phones, from version 4.0 onwards, have a built-in option to encrypt all data on the storage device. Obviously, data is decrypted on the fly when needed and held unencrypted in memory (RAM). When it is no longer needed, or the phone is turned off, all unencrypted data in RAM is very carefully deleted.

So what you do is remove the battery with the phone on and then immediately restart the phone, performing a so-called “cold boot”. In principle, by removing the power all data in the RAM is lost and kept (encrypted) only on the storage device. But it takes some (short) time for the RAM to forget all its data, and this time depends on the kind/material of the RAM chip and its temperature. Müller and Spreitzenbarth discovered that if the temperature of the chip in some Galaxy Nexus devices is below 10 degrees Celsius (please put your phone in the fridge…) then you have just enough time to read the unencrypted data in the RAM after the cold boot, without needing to know the password or PIN.
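Once the RAM image has been dumped, the remaining problem is finding the disk-encryption key inside it. Tools built for this purpose (the original cold-boot work, and FROST itself) verify the structure of the AES key schedule in memory; the sketch below is a deliberately crude stand-in for that idea, merely flagging 16-byte windows of a hypothetical ramdump.bin that look random enough to be key material:

    # A much-simplified illustration of locating candidate key material in a
    # RAM dump taken after a cold boot. Real tools verify the AES key-schedule
    # structure; here we only flag 16-byte windows with high byte entropy,
    # which is a crude heuristic with many false positives.
    import math
    from collections import Counter

    def shannon_entropy(block):
        """Shannon entropy of a block, in bits per byte."""
        n = len(block)
        return -sum((c / n) * math.log2(c / n) for c in Counter(block).values())

    def candidate_keys(dump, window=16, threshold=3.7):
        """Yield offsets of windows that look random enough to be key material."""
        for off in range(0, len(dump) - window, 4):   # 4-byte aligned scan
            block = dump[off:off + window]
            if shannon_entropy(block) >= threshold:
                yield off, block.hex()

    # Hypothetical usage on a memory image obtained with FROST-like tooling:
    # with open("ramdump.bin", "rb") as f:
    #     for offset, hexkey in candidate_keys(f.read()):
    #         print(f"possible key material at 0x{offset:08x}: {hexkey}")

A real tool would expand each candidate into an AES key schedule and check it against the neighbouring bytes of the dump, which gives far fewer false positives than an entropy threshold.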

Enjoy the pictures on their website!

Cryptography as Liability Shift

Everybody seems to be talking about Kim Dotcom’s new Mega file-sharing service. What aroused my interest is its use of cryptography. I did not look into it in detail, but from what I read (for example here) it should work as follows:

  • access to the service is only through a browser and the code executed in the browser is written in JavaScript;
  • at the moment of creating an account, the user chooses a password which is not sent to the server; instead, from the password the browser derives an authentication token which is shared with the server, and the server cannot recover the password from the authentication token;
  • the browser generates an encryption key with which all data sent to and received from the server is encrypted and decrypted by the browser itself: the server hosts only encrypted data and has no access to it, since it does not know the encryption key;
  • for user convenience, the encryption key is also stored on the server, but encrypted with the password; in this way the server has no access to the encryption key, but the user can retrieve it when needed (a sketch of how such a scheme could look follows below).
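To make the description above concrete, here is a minimal sketch of how a client-side scheme of this kind could be put together. It is not Mega’s actual implementation (Mega’s code is JavaScript and its exact primitives differ); PBKDF2, AES-GCM and all the names below are my own stand-ins:

    # A minimal sketch (not Mega's actual scheme) of the client-side key
    # handling described above, using PBKDF2 and AES-GCM as stand-ins for
    # whatever primitives the real service uses. Requires the 'cryptography'
    # package.
    import os
    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def derive(password, salt):
        """Derive 64 bytes from the password; split into an authentication
        token (sent to the server) and a key-encryption key (never sent)."""
        material = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                       100_000, dklen=64)
        return material[:32], material[32:]          # (auth_token, kek)

    def create_account(password):
        salt = os.urandom(16)
        auth_token, kek = derive(password, salt)
        master_key = AESGCM.generate_key(bit_length=128)  # encrypts the user's files
        nonce = os.urandom(12)
        wrapped_key = AESGCM(kek).encrypt(nonce, master_key, None)
        # The server stores only: salt, auth_token, nonce, wrapped_key.
        # It never sees the password, the kek, or the master_key in the clear.
        record = {"salt": salt, "auth_token": auth_token,
                  "nonce": nonce, "wrapped_key": wrapped_key}
        return record, master_key

    def login(password, server_record):
        """Client re-derives the token, proves it to the server, and unwraps
        the master key locally; a wrong password fails the AES-GCM check."""
        auth_token, kek = derive(password, server_record["salt"])
        assert auth_token == server_record["auth_token"]  # server-side check, simplified
        return AESGCM(kek).decrypt(server_record["nonce"],
                                   server_record["wrapped_key"], None)

The point of the construction is that the server only ever stores values (salt, authentication token, wrapped key) from which it can recover neither the password nor the encryption key, which is exactly the property the service advertises.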

The general idea behind this approach is of course not new, but the way it is implemented leaves a lot to think about. Everything is delegated to the browser and implemented in JavaScript; this means, for example, that it is not so hard to add (or modify) a piece of JavaScript to obtain the password and send it to some other server. This is indeed what most online banking malware does, by injecting a few lines of JavaScript into the pages of online banking websites in the browser. This is also what the Mega service itself could do at any moment, without the user ever knowing it. On top of this, many other points should be checked, for example whether the implementation is sound from a security point of view, and we know very well that it is not at all easy to implement cryptographic protocols correctly at the first attempt.

So what is the point of all of this? Well, the first thing that comes to my mind (and to other bloggers’ minds) is that the primary purpose of all this is not to protect users and their data, but to allow Kim Dotcom to shift any legal responsibility for the contents hosted on his servers. Since users’ data is encrypted by the browser on the user’s PC and the server has no access to the encryption keys in the clear, all responsibility for the data hosted on his servers falls on the users.

I am curious to see how this will develop.

UPDATE: there are already search engines to find illegal material to download from Mega…