Thoughts on Blue/Red/Purple Teams and defending from Targeted Attacks

Defending against Targeted Attacks (and even more against Advanced Persistent Threats, APT) is difficult and usually quite expensive. 

We all know the basics of IT security, from cybersecurity awareness and training to anti-malware, firewalls and network segmentation, hardening and patching, monitoring and vulnerability assessments / penetration tests (VA/PT), third-party cybersecurity contract clauses, etc.

But this is not enough. We also need Single Sign-On (SSO, or even Federated Authentication) and Multi-Factor Authentication (MFA), Zero Trust architectures, tracing of all local, remote and mobile activities (networks and hosts), SIEM data collection/management and SOC analysis, a cybersecurity Incident Team and an Incident Response plan.

But to defend against Targeted Attacks we need to go yet another step further. We have designed and implemented all the security measures we could think of, but are they enough? Did we forget something? For sure we are ready against an everyday malware attack, but what about a Targeted Attack, whose authors could take months to study us and prepare their strike?

To answer this question, it seems that the only possibility is to think and act like the attacker and look at our IT environment from that point of view. This is where Blue, Red and Purple teams come into play: they take the roles of attackers and defenders in our IT environment to test our cybersecurity posture to its limits. They will find holes and access paths we did not think about or even believe possible, but also smarter ways to defend ourselves.

But … what about a Risk Based approach to Security?

In other words, how much is it going to cost us?

Can we afford it?

Finally, is it worth going “all out” or, by accepting some risks, can we continue to do what we have been doing all along in cyber/IT security? And in this case, how do we evaluate the “Risks” we need to accept?

PS. The last is partly a rhetorical question on my side.

Another Example of how Implementing Cryptography is Tricky (and a Score 10 CVE)

The description of Zerologon, CVE-2020-1472, has recently been published (see here for a summary and here for the technical paper); do not worry, the bug was already patched by Microsoft in August (see here).

The bug allows anyone who can connect via TCP/RPC/Netlogon to an unpatched Active Directory domain controller to become a domain administrator, nothing else needed. The cause of this bug is a minor glitch in the implementation of the cryptographic algorithm AES-CFB8: the Initialisation Vector has been kept fixed at zero instead of being unique and randomly generated (more details are provided in the technical paper mentioned above).
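A minimal sketch of the CFB8 mechanics shows why a fixed all-zero IV is fatal. Here a keyed hash stands in for AES (this is a toy, not a real cipher, used only to illustrate the shift-register behaviour): with IV and plaintext all zero, whenever the first keystream byte happens to be 0x00 the register never changes and the whole ciphertext comes out as zeros, which for a random key happens for roughly 1 key in 256, exactly the property Zerologon exploits.

```python
import hashlib
import os

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for AES: a keyed pseudorandom function, used here only to
    # illustrate the CFB8 shift-register mechanics (NOT a real cipher).
    return hashlib.sha256(key + block).digest()[:16]

def cfb8_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    state = iv  # 16-byte shift register, initialised with the IV
    out = bytearray()
    for pt_byte in plaintext:
        ks_byte = toy_block_encrypt(key, state)[0]  # first keystream byte
        ct_byte = pt_byte ^ ks_byte
        out.append(ct_byte)
        state = state[1:] + bytes([ct_byte])        # shift in the ciphertext byte
    return bytes(out)

# Zerologon's glitch: the IV is fixed at zero. Encrypting an all-zero
# plaintext then yields an all-zero ciphertext whenever the first keystream
# byte is 0x00, i.e. for about 1 random key in 256.
hits = sum(
    cfb8_encrypt(os.urandom(16), b"\x00" * 16, b"\x00" * 8) == b"\x00" * 8
    for _ in range(4096)
)
print(hits)  # on average about 16 out of 4096 random keys
```

An attacker can therefore keep retrying the handshake with an all-zero client challenge until one attempt happens to authenticate, which on average takes only a few hundred tries.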

Zero Trust and Dynamical Perimeters based on Identity

NIST has recently published the final version of SP 800-207 “Zero Trust Architecture”, which is recommended reading.

This gives me the opportunity to consider how vastly the IT architecture has changed in the last 20 years. From the concept of a single IT physical perimeter, we now have multiple physical or virtual perimeters which can be dynamic due for example to Software Defined Networks or Cloud services.

But most importantly, who and what is inside a perimeter, which can be even a single application, depends not only on the physical and/or virtual location of the device (both server and/or client) but also on the identification / authentication / authorisation of the user and/or the device. So, given the proper identification / authentication / authorisation, a user and her/his device can be admitted inside a high security perimeter even when connecting from any network in the world.

Moreover, authentication and authorisation are not granted “once and for ever”: each perimeter, even a tiny one, should perform them again. This requires strong authentication processes which authenticate both the user and her/his device and its security posture. Often this process can be done in two steps: the user authenticates her/himself to the local (portable) device, typically with MFA / Biometrics etc., and the device then manages the authentication to the remote services, thus providing a simpler user experience.
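As a toy illustration of this two-step flow (all names here are hypothetical, and an HMAC shared secret stands in for the asymmetric keys that real protocols such as FIDO2/WebAuthn use): the user first unlocks the device locally, and only then can the device answer a fresh challenge from the remote service.

```python
import hashlib
import hmac
import os
import secrets

class Device:
    """Toy portable device: unlocked locally by its user, it then answers
    challenges from remote services with a stored device key."""

    def __init__(self, pin: str, device_key: bytes):
        self._pin_hash = hashlib.sha256(pin.encode()).digest()
        self._device_key = device_key
        self._unlocked = False

    def unlock(self, pin: str) -> bool:
        # Step 1: local user authentication (a PIN here; MFA/biometrics in real life).
        attempt = hashlib.sha256(pin.encode()).digest()
        self._unlocked = hmac.compare_digest(attempt, self._pin_hash)
        return self._unlocked

    def answer(self, challenge: bytes) -> bytes:
        # Step 2: the unlocked device authenticates to the remote service.
        if not self._unlocked:
            raise PermissionError("device is locked")
        return hmac.new(self._device_key, challenge, hashlib.sha256).digest()

# Service side: the device key was registered beforehand, and a fresh
# challenge is issued at every perimeter crossing (no "once for ever").
device_key = os.urandom(32)
device = Device(pin="1234", device_key=device_key)

challenge = secrets.token_bytes(16)
device.unlock("1234")
response = device.answer(challenge)
ok = hmac.compare_digest(
    response, hmac.new(device_key, challenge, hashlib.sha256).digest())
```

The user never touches the device key directly, which is what makes the experience simpler: one local unlock, then as many per-perimeter authentications as needed.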

This is the development we see every day in most major IT / Cloud services, and one which, sooner or later, will also decrease our dependency on the use of Passwords.

Subject Matter Experts On Artificial Intelligence and IT Crime

Artificial Intelligence (AI), in all its different fields from Machine Learning to Generative Adversarial Networks, has been the subject of a study (here the link to the paper), or probably better an evaluation, by a group of Subject Matter Experts (SMEs) to identify the riskiest scenarios in which attackers could use it, abuse it or defeat it. The scenarios include cases in which AI is used for security purposes and an attacker is able to defeat it, AI is used for other purposes and an attacker is able to abuse it to commit a crime, or an attacker uses AI to build a tool to commit a crime.

Overall the SMEs have identified 20 high level scenarios and ranked them by multiple criteria including the harm / profit of the crime, and how difficult it could be to stop or defeat this type of crime.

It is very interesting to see the six scenarios considered to carry the highest risk:

  • Audio/video impersonation
  • Driverless vehicles as weapons
  • Tailored phishing
  • Disrupting AI-controlled systems
  • Large-scale blackmail
  • AI-authored fake news.

More details can be found in the above-mentioned paper.

Phishing in the Clouds

Again on Phishing, this time with a new twist.

We all know and by now use complex SaaS Cloud services, like Microsoft’s Office 365, Google’s G Suite, Amazon services and so on. They are all very modular, meaning that there are multiple data storage services and multiple application services to choose from and use. Often a user must authorise a Cloud App to access her/his own data, and the App can also come from an external provider (a “partner” of the service). The authorisation is usually implemented with OAuth which, in a few words, is a secure delegated-access protocol based on the exchange of authorisation tokens.

So what is the scam? Simple: you receive an email which looks like it comes from [name your Cloud provider here] and asks you to authorise an App (which looks authentic) to access your data. You do not need to insert any password, since you are already logged into your Cloud service platform: you just click on the button, and that’s it!

You have given access to all your Cloud data and services to a fraudster, who can get your data and act as you!
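Under the hood, the button in such an email is just a standard OAuth 2.0 authorisation request. A hedged sketch (the endpoint and identifiers below are hypothetical): the only things tying the request to a given App are its client_id and display name, and no password ever appears, since the consent screen only lists the requested scopes.

```python
from urllib.parse import parse_qs, urlencode, urlparse

# Hypothetical authorisation endpoint, for illustration only.
AUTHORIZE_ENDPOINT = "https://login.example-cloud.com/oauth2/authorize"

def consent_url(client_id: str, redirect_uri: str, scopes: list) -> str:
    """Build a standard OAuth 2.0 authorisation-code request URL."""
    params = {
        "client_id": client_id,        # identifies the App asking for access
        "response_type": "code",
        "redirect_uri": redirect_uri,  # where the authorisation is delivered
        "scope": " ".join(scopes),     # what the user is asked to consent to
    }
    return AUTHORIZE_ENDPOINT + "?" + urlencode(params)

# A fraudulent App simply asks for broad scopes; a user who clicks
# "Accept" hands over delegated access without typing any password.
url = consent_url(
    client_id="innocuous-looking-app",
    redirect_uri="https://attacker.example/callback",
    scopes=["Mail.Read", "Files.Read.All", "offline_access"],
)
granted_scopes = parse_qs(urlparse(url).query)["scope"][0]
```

Note also that changing your password typically does not revoke such a grant: you usually have to find and remove the App in the platform’s app-permissions page.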

For more details read, for example, this article by Ars Technica.

Always about Security Patching and Updates

These days I keep coming back to the “security patching and updates” issue. So I am going to add another couple of comments.

The first is about Ripple20 (here the official link, but the news is already widespread), which carries an impressive number of “CVSS v3 base score 10.0” vulnerabilities. The question is again:

how can we secure all of these millions/billions of vulnerable devices, since it seems very likely that security patching is not an option for most of them?

The second one is very hypothetical, that is, in the “food for thought” class.

Assume, as some say, that in 2030 Quantum Computers will be powerful enough to break RSA and other asymmetric cryptographic algorithms, and that at the same time (or just before) Post Quantum Cryptography will deliver new secure algorithms to substitute RSA and friends. At first sight all looks ok: we will just have to do a lot of security patching/updating of servers, clients, applications, CA certificates, credit cards (hardware), telephone SIMs (hardware), security keys (hardware), Hardware Security Modules (HSM) and so on and on… But what about all those micro/embedded/IoT devices into which the current cryptographic algorithms are baked? And all of those large devices (like aircraft but also cars) which have been designed with cryptographic algorithms baked into them (no change possible)? We will probably have to choose between living dangerously and buying a new one. Or we could be forced to buy a new one, if the device cannot work anymore because its old algorithms are no longer accepted by the rest of the world.

PS. Concerning Quantum Computers, as far as I know nobody claims that a full Quantum Computer will be functioning by 2030; this is only the earliest possible estimate of arrival, and it could take much, much longer, or even never happen!

PS. I deliberately do not want to consider the scenario in which full Quantum Computers are available and Post Quantum Cryptography is not.

Patching, Updating (again) and CA certificates

Well, it is not the first time I write a comment about the need for security patching and updating of all software, or the problems of not doing so. This is even more problematic for IoT and embedded software.

I just read this article on The Register which describes a real problem that is starting to show itself. In a few words: for approximately 20 years we have massively used CA certificates to establish encrypted (SSL/TLS) connections. Client software authenticates servers by trusting server certificates signed by a list of known Certification Authorities (CAs). Each client has installed, usually in libraries, the public root certificates of the CAs.

Now what happens if the root certificate of a CA expires?

This is not a problem for servers: their certificates are renewed periodically, and more recent server certificates are signed under the new root CA certificate.

Clients have some time, typically a few years, to acquire the new CA root certificates, because root certificates last many years, usually 20. But what if IoT or embedded devices never get an update? Think about cameras, smart televisions, refrigerators, light bulbs and any other kind of gadget which connects to an Internet service. When the old root CA certificate expires, they cannot connect to the server anymore and may just stop working. The only possible way out is to manually update the software, if a software update is available and if an update procedure is possible. Otherwise, the only option left is to buy a new device!
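A toy model of the timeline (all names and dates are made up) makes the asymmetry clear: the server side escapes by renewing under the new root, while a client whose trust store is frozen is stuck the moment its only root expires.

```python
from datetime import date

def can_validate(trust_store: set, chain: list, today: date) -> bool:
    """Toy check: the chain's root must be in the client's trust store and
    every certificate in the chain must still be within its validity."""
    root_name = chain[-1][0]
    if root_name not in trust_store:
        return False
    return all(expiry >= today for _name, expiry in chain)

# Chains as [(leaf, expiry), (root, expiry)]; dates are invented.
old_chain = [("server-2019", date(2021, 6, 1)), ("OldRoot", date(2020, 5, 30))]
new_chain = [("server-2021", date(2023, 6, 1)), ("NewRoot", date(2040, 1, 1))]

updated_client = {"OldRoot", "NewRoot"}  # received the new root via updates
frozen_iot = {"OldRoot"}                 # embedded device, never updated

today = date(2020, 9, 1)  # just after OldRoot expired
ok_updated = can_validate(updated_client, new_chain, today)  # True
ok_frozen_new = can_validate(frozen_iot, new_chain, today)   # False: unknown root
ok_frozen_old = can_validate(frozen_iot, old_chain, today)   # False: root expired
```

The frozen device fails both ways: it does not know the new root, and its old root no longer validates anything.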

Security and Cryptography do not always go Hand in Hand with Availability

I have been reading this article by Ars Technica on smartphones’ 2FA Authenticator Apps, and put it together with some thoughts about Hardware Security Keys (implementing the U2F [CTAP1] standard by the FIDO Alliance). In both scenarios, the security of the Second Factor Authentication (2FA) is based on the confidentiality of a cryptographic secret key, held securely in the smartphone in the first case and in the USB hardware key in the second. In other words, the secret key cannot be read, copied or transferred to any other device, as is also the case for any smart card (or integrated circuit card, ICC) or any Hardware Security Module (HSM).

Good for Security, but what about Availability?

Consider this very simple (and secure) scenario: my smartphone with a 2FA Authenticator App (or alternatively my U2F USB key) utterly breaks down, and I have no way, at least given the usual economic and time constraints, to recover it.

Now what about access to all those services which I set up in a very secure way with 2FA?

There is no access and no way immediately available to recover it! (But see below for more complete scenarios.)

The only possibility is to get new hardware and perform a usually manual and lengthy process to regain access to each account and configure the new 2FA hardware.

So Availability is lost, and it could take quite some time and effort to regain it.

The article by Ars Technica mentioned above describes how some 2FA Authenticator Apps provide a solution to this problem: the idea is to install the App on multiple devices and securely copy the secret key to each one of them. But now the secret key, even if itself encrypted, is no longer “hardware protected”, since it is possible to copy and transfer it. So Security/Confidentiality goes down but Availability goes up.
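Most of these authenticator apps generate TOTP codes (RFC 6238): the “secret key” is a shared secret fed into an HMAC together with the current time, so any device holding a copy of it produces exactly the same codes, which is precisely what makes the multi-device backup possible. A minimal sketch using only the standard library (the secret below is the RFC 6238 test value):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time=None, step=30, digits=6) -> str:
    """RFC 6238 TOTP with the default HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))          # time steps since epoch
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

# Two "devices" holding a copy of the same secret always agree:
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"  # RFC 6238 test secret, base32
device_a = totp(SECRET, for_time=59, digits=8)
device_b = totp(SECRET, for_time=59, digits=8)
print(device_a)  # "94287082", the RFC 6238 test vector for t=59
```

This also shows the flip side: anyone who manages to copy that secret owns the second factor, which is exactly what the hardware protection was preventing.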

This process is obviously impossible as such with U2F hardware keys. In this case an alternative is to have multiple U2F hardware keys registered with all accounts and services so that if one key breaks one can always use a backup key to access the accounts and services. (For more info, see for example this support page by Google.)

On the Latest Bluetooth Impersonation Attack

Details on a new attack on Bluetooth have just been released (see here for its website). From what I understand it is based on two weaknesses of the protocol itself.

A quick description seems to be the following (correct me if I have misunderstood something).

When two Bluetooth devices (Alice and Bob) pair, they establish a common secret key while mutually authenticating each other. The common secret key is kept by both Alice and Bob to authenticate each other in all future connections. Up to here all is ok.

Now it is important to notice the following points when Alice and Bob establish a new connection after pairing:

  • the connection can be established using a “Legacy Secure Connection” (LSC, less secure) or a “Secure Connection” (SC, secure), and either Alice or Bob can request to use LSC;
  • one of the devices acts as Master and the other as Slave, a connection can be closed and restarted and either Alice or Bob can request to act as Master;
  • in a “Legacy Secure Connection” the Slave must prove to the Master that it has the common secret key, but the Master is not required to prove to the Slave that it also has the common secret key (Authentication weakness);
  • in a “Secure Connection” either Alice or Bob can close the connection and restart it as a “Legacy Secure Connection” (Downgrade weakness).

Now Charlie wants to intercept the Bluetooth connection between Alice and Bob: first he listens to their connection and learns their Bluetooth addresses (which are public). Then Charlie jams the connection between Alice and Bob, connects as a Master to Alice using LSC and Bob’s Bluetooth address, and connects as a Master to Bob using LSC and Alice’s Bluetooth address. Since Charlie is Master with respect to both Alice and Bob, and since he can always downgrade the connection to LSC, he does not have to prove to either Alice or Bob that he knows their common secret key. In this way Charlie is able to perform a MitM attack on the Bluetooth connection between Alice and Bob (obviously this description is very high level; I have sketched just an idea of what is going on).
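The asymmetry in the Legacy Secure Connection authentication can be caricatured in a few lines (a toy state model, not real Bluetooth code): a connecting Master is simply never challenged, so an impersonator who forces LSC and claims the Master role gets in without the link key.

```python
class BluetoothDevice:
    """Toy model of a paired device deciding whether to accept a connection."""

    def __init__(self, name: str, link_key: bytes):
        self.name = name
        self.link_key = link_key  # common secret key established during pairing

    def accept(self, peer_role: str, mode: str, peer_proof=None) -> bool:
        if mode == "SC":
            # Secure Connections: mutual authentication, the peer must
            # always prove knowledge of the link key.
            return peer_proof == self.link_key
        # Legacy Secure Connection: only the Slave proves the key, so a
        # peer claiming the Master role is never challenged
        # (the Authentication weakness).
        if peer_role == "master":
            return True
        return peer_proof == self.link_key

alice = BluetoothDevice("Alice", link_key=b"alice-bob-shared-key")

# Charlie spoofs Bob's address, forces a downgrade to LSC and connects as
# Master: he is accepted without ever touching the link key.
charlie_in = alice.accept(peer_role="master", mode="LSC")

# The same attempt under Secure Connections would fail:
charlie_sc = alice.accept(peer_role="master", mode="SC", peer_proof=b"guess")
```

The downgrade weakness is what makes the first case reachable in practice: even a connection started as SC can be restarted as LSC by either side.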

The bad point about this is that it is a weakness of the protocol, so all existing Bluetooth implementations are subject to it. The good point is that the fix should not be too difficult, except for the fact that many (too many) devices cannot be patched! Fortunately this attack seems not to apply to Bluetooth LE, but still I expect that most Bluetooth devices subject to this attack will never be patched.

But we should also consider the real impact of this attack: to perform it, the attacker Charlie must be physically near enough to the two devices (Alice and Bob) with dedicated hardware (even if not so expensive), and this limits the practical reach of the attack. Moreover, this attack can have important consequences if Bluetooth is used to transfer very sensitive information or for critical applications (for example in the health sector). In other words, I would not worry when using Bluetooth to listen to music from my smartphone, but I would worry if Bluetooth were used in some mission critical application.