Interesting article on “7 Revealing Ways AIs Fail”
Interesting reading: here are the NSA’s “Quantum FAQs”.
Cryptography is too often considered the “final” solution for IT security. In reality, cryptography is often useful, rarely easy to use correctly, and brings its own risks, including an “all-in or all-out” scenario.
Indeed, suppose that your long-lived master key is exposed: then all the security provided by cryptography is immediately lost, which seems to be what happened in this case.
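To make the “all-in or all-out” point concrete: in an envelope-encryption key hierarchy, every per-record data key is wrapped under a single long-lived master key, so exposing that one key exposes everything at once. A toy Python sketch (the XOR “cipher” and the fixed 16-byte messages are purely illustrative, not real cryptography):

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    # Toy XOR "cipher" (NOT secure), only to illustrate the key hierarchy.
    return bytes(x ^ y for x, y in zip(a, b))

master = secrets.token_bytes(16)  # the long-lived master key

# Each record is encrypted under its own data key; the data key is
# stored next to the record, wrapped (encrypted) under the master key.
records = []
for msg in (b"first secret txt", b"second secret tx"):  # 16-byte toy messages
    dk = secrets.token_bytes(16)
    records.append((xor(master, dk), xor(dk, msg)))

# Whoever obtains the master key unwraps every data key and, with it,
# every record: all the security is lost at once.
recovered = [xor(xor(master, wrapped_dk), ct) for wrapped_dk, ct in records]
assert recovered == [b"first secret txt", b"second secret tx"]
```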
A very interesting research paper with a scary title: “Solar Superstorms: Planning for an Internet Apocalypse”. It is about a Black Swan event which actually already happened in 1859: a major solar Coronal Mass Ejection (CME), which has some chance of happening again in the near future. Without going into detail (the research paper is quite readable), the main point is that if a CME of 1859’s magnitude were to hit Earth today, the consequences would be catastrophic. Apart from the impacts on the electric grid, and in particular on long-distance power distribution (though power operators should be aware of this threat), the research paper points out that there would be severe damage to satellites, in particular low-orbit ones, with a possible total failure of satellite communication including GPS, television broadcasting and data (internet) transmission. But equally at risk are long-distance communication cables, most notably submarine optical fibre cables. Optical fibres per se would not be affected, but the optical repeaters placed along the fibres every 50 – 150 km at the bottom of the oceans would burn out and stop almost all communication between continents.
I remember years ago discussing a similar scenario with some physicist friends and wondering whether it was a real threat. It seems that it can be, but is the cost of mitigating this threat worth it? Should we act today?
I have just been reading this post by Dolos Group researchers (see here also for another report) on a physical attack against an unattended or stolen laptop, which goes under the name of an “Evil Maid” attack. Setting aside the technical expertise required to devise the attack, its execution is simple and fast, and the consequences can be dire: the attacker can fully access the company’s network, impersonating the owner of the laptop, from her/his own device.
We know very well that remote working brings quite a few extra risks from the security point of view, and those related to physical attacks are usually not the first or most worrisome ones. Managing the IT security of a standalone PC in a company’s office is quite different from managing the IT security of a remote working laptop. How can we provide and monitor the IT security of remote working devices? What is the best approach, at least as of today? Is it better to strengthen the IT security of portable devices starting from their hardware and up, or to give up on it completely and adopt virtual/cloud (and/or VDI) desktop solutions, both approaches coupled with zero trust, multi-factor authentication, security monitoring, etc.?
In any case, it seems clear that the IT security risks associated with remote working have not yet been fully appreciated, especially given the explosion of remote working due to the pandemic.
I have been reading this article on DNS attacks, which reminds us that if an attacker controls a DNS server or is able to poison a DNS cache, malware, or at least unsafe data, can be delivered to a target application. DNS data is in any case input to an application, and as such it should be validated prior to any processing. Instead it can happen that it is treated as trusted data, in particular when DNSSEC is used.
Unfortunately the situation is a little more complex: with very few exceptions (for example “DNS over HTTPS – DoH”), applications do not retrieve DNS data directly but let the Operating System do it for them. And any application trusts the Operating System on which it is running, which should imply that data received from the Operating System can be trusted. Or can it? Which System Calls should an application trust, and which should it not? What kind of validation should an application expect the Operating System to have performed on data passed through System Calls? The Devil, and unfortunately the Security, is in the details…
Which brings me to the concept of zero trust in this respect: first of all, in IT we always need to trust the security of something, at least the Hardware (eg. the CPU), and unless we look at Confidential Computing (see here for example), we also need to trust quite a lot of Software.
So we are back to Input Validation, which can make applications bigger, slower and in some cases even more difficult to maintain; but accepting as input only data whose format and content we expect surely makes life much safer.
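As a minimal example of what such validation can look like, a hostname coming back from DNS can be checked against the RFC 1123 syntax rules before the application does anything with it (a sketch; a real application may need stricter, context-specific rules):

```python
import re

# An RFC 1123 label: 1-63 letters, digits or hyphens,
# with no leading or trailing hyphen.
_LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_valid_hostname(name: str) -> bool:
    """Validate a hostname before using it, instead of trusting DNS data."""
    if not name or len(name) > 253:
        return False
    labels = name.rstrip(".").split(".")
    return all(_LABEL.match(label) for label in labels)
```

For example, `is_valid_hostname("example.com")` is `True`, while names containing spaces, control characters, empty labels or leading hyphens are rejected before any further processing.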
Apple has announced (see here) a new iCloud premium feature called iCloud Private Relay:
when browsing with Safari, Private Relay ensures all traffic leaving a user’s device is encrypted, so no one between the user and the website they are visiting can access and read it, not even Apple or the user’s network provider. All the user’s requests are then sent through two separate internet relays. The first assigns the user an anonymous IP address that maps to their region but not their actual location. The second decrypts the web address they want to visit and forwards them to their destination. This separation of information protects the user’s privacy because no single entity can identify both who a user is and which sites they visit.
This seems very much a form of Onion Routing, which is the theory and practice behind Tor (The Onion Router, indeed). It will be very interesting to see how it works out, since it has the potential to become a disruptive technology that improves the privacy and security of all of us when browsing the Internet.
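The separation Apple describes is essentially two layers of encryption, each peeled off by one relay: hop 1 knows who the user is but sees only ciphertext, hop 2 sees the destination but not the user. A toy Python sketch of the layering (the SHA-256/XOR keystream below is NOT a secure cipher, just a stand-in to show the structure):

```python
import hashlib
import itertools

def _keystream(key: bytes):
    # Toy keystream (NOT secure): SHA-256 in counter mode.
    for i in itertools.count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR is symmetric: applying the same key twice decrypts.
    return bytes(b ^ k for b, k in zip(data, _keystream(key)))

k1, k2 = b"key shared with hop 1", b"key shared with hop 2"
request = b"GET https://example.com/"

onion = xor_crypt(k1, xor_crypt(k2, request))  # client wraps two layers
layer2 = xor_crypt(k1, onion)   # hop 1 peels its layer: still ciphertext to it
plain = xor_crypt(k2, layer2)   # hop 2 peels the inner layer and routes it
assert plain == request
```

Neither hop alone can link the user to the destination: that correlation requires colluding relays, which is exactly what the two-operator design is meant to prevent.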
Recently I have often come across discussions about passwordless authentication: is this myth finally becoming reality? For at least 20 years we have been discussing and announcing the demise of passwords.
Passwords can be replaced by biometrics, but also by hardware tokens (eg. security keys), smartphones etc., together with authenticator apps, single sign-on, identity federation and so on.
Is this enough to get rid of passwords?
Well, passwords are very cheap to manage and very scalable; they are well known, used and abused; they can be forgotten, but they cannot break down or be physically lost or stolen. And most systems will still use passwords or PIN codes as a backup.
Already today, access to most personal devices (smartphones, tablets, laptops etc.) is passwordless, usually via biometrics with a password as backup. But this is local to each personal device, and it seems difficult to scale it up to all systems and applications.
So where do we really stand on the way to “passwordlessness”? How and when will we get there?
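For what it is worth, the one-time codes behind many of these passwordless and multi-factor schemes are technically very simple: an authenticator app computes HOTP/TOTP (RFC 4226 / RFC 6238), that is, an HMAC-SHA-1 over a counter derived from the current time. A minimal sketch:

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based variant: HOTP over a 30-second time counter."""
    return hotp(key, unix_time // step, digits)

# RFC 6238 test vector (SHA-1 mode, 8 digits, time = 59 seconds)
assert totp(b"12345678901234567890", 59, digits=8) == "94287082"
```

The cryptography is the easy part; the hard parts of going passwordless remain enrollment, recovery and scaling this across every system and application.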