Securing the AI software development tools: IDEsaster

I will just quote two statements by MaccariTA (Ari Marzouk):

IDEs were not initially built with AI agents in mind. Adding AI components to existing applications create new attack vectors, change the attack surface and reshape the threat model. This leads to new unpredictable risks.

[…]

AI IDEs effectively ignored the base IDE software as part of the threat model, assuming it’s inherently safe because it existed for years. However, once you add AI agents that can act autonomously, the same legacy features can be weaponized into data exfiltration and RCE primitives.

and, as you can read in the original post, it is not only about “risks” but also a rather long list of vulnerabilities (with 24 CVEs) which affect, in one way or another, almost all AI IDE tools.

The issue is one we have seen many times in the past: features and functionality first, security fixes later. I agree that without features and functionality no software product makes sense, but when security is a post-hoc add-on, there is the well-known risk of having to pay a large security bill for a long time.

AI and Security Bug Bounty

This is not an AI problem, it is a Human problem.

Security Bug Bounty programs reward those who find a security bug in an application. But what if I ask an AI chatbot to produce a report of a “new” vulnerability in an application and then send it to the application maintainer hoping to get the reward?

Actually, it seems that this has been going on for some time (see here for example), and it is starting to overwhelm application maintainers.

AI tools can be very helpful in analyzing and discovering security vulnerabilities in applications, but they must be used as one of the tools in the security practitioner toolbox.

Image Recognition and Advanced Driving Assistance

“Securing the Perception of Advanced Driving Assistance Systems Against Digital Epileptic Seizures Resulting from Emergency Vehicle Lighting” is an interesting research study on the current status of image recognition for advanced driving assistance and autonomous vehicle systems. The study found that some standard Driving Assistance Systems can be completely confused by emergency vehicle flashers, with the risk of becoming the cause of serious incidents. Machine Learning models can be part of the cause of this vulnerability, as well as part of the solution proposed by the researchers, called “Caracetamol”.

Passwords Requirements in the new NIST SP 800-63 Digital Identity Guidelines

NIST has just opened a Call for Comments on the Second Public Draft of Revision 4 of NIST SP 800-63 “Digital Identity Guidelines”. It is quite interesting to read the proposed changes to password requirements in section 3.1.1 and Appendix A, such as

  • Verifiers and CSPs SHALL NOT impose other composition rules (e.g., requiring mixtures of different character types) for passwords.
  • Verifiers and CSPs SHALL NOT require users to change passwords periodically. However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.
  • When processing a request to establish or change a password, verifiers SHALL compare the prospective secret against a blocklist that contains known commonly used, expected, or compromised passwords.
  • Verifiers SHALL allow the use of password managers. Verifiers SHOULD permit claimants to use the “paste” functionality when entering a password to facilitate their use.
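The quoted verifier requirements translate almost directly into code. Below is a minimal sketch of a password check following those rules; the 8-character minimum and 64-character maximum come from the draft guidelines, while the tiny blocklist is an illustrative assumption (a real verifier would use a large list of known compromised passwords).

```python
def validate_password(password: str, blocklist: set[str]) -> tuple[bool, str]:
    """Accept or reject a prospective password per the draft SP 800-63B rules."""
    # SHALL require at least 8 characters; SHOULD allow at least 64.
    if len(password) < 8:
        return False, "too short (minimum 8 characters)"
    if len(password) > 64:
        return False, "too long (maximum 64 characters)"
    # SHALL compare the prospective secret against a blocklist of
    # commonly used, expected, or compromised passwords.
    if password.lower() in blocklist:
        return False, "found in blocklist of known compromised passwords"
    # Deliberately NO composition rules ("must contain a digit" etc.)
    # and NO periodic-expiry logic: both are forbidden by the draft.
    return True, "ok"

blocklist = {"password", "123456", "qwerty", "letmein"}
print(validate_password("letmein", blocklist))    # rejected: blocklisted
print(validate_password("correct horse battery staple", blocklist))  # accepted
```

Note what is absent: no character-class requirements and no expiry timer, exactly as the draft mandates, since both are known to push users toward weaker, predictable passwords.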

Appendix A makes it clear that the purpose of the new requirements is twofold: make it easier for users to manage passwords and at the same time have users create reasonably secure passwords against relevant attacks.

With the adoption of Single Sign On, Federation, Security Keys etc., the scenario concerning password management (and the eventual dismissal of passwords) is rapidly changing. However, passwords are still a key security risk today, so any change that makes users’ password management easier and safer is very welcome.

A Roadmap to Enhancing Internet Routing Security

The White House just published a roadmap to improving Internet routing security (here the announcement and the document, here a news comment).

The US government is pushing for the adoption of the Resource Public Key Infrastructure (RPKI) protocol. It is interesting to note that Europe is currently ahead in its adoption: approximately 70% of BGP routes are protected by RPKI in Europe, compared with 39% in the US.
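To make concrete what RPKI protection means for a BGP route, here is a minimal sketch of Route Origin Validation as defined in RFC 6811: an announcement is “valid” if some ROA covers its prefix with a matching origin ASN and an acceptable prefix length, “invalid” if it is covered but mismatched, and “not-found” otherwise. The ROAs and routes below are made-up documentation examples, not real Internet data.

```python
from ipaddress import ip_network

# Each ROA authorizes an ASN to originate a prefix, up to a maximum length.
ROAS = [
    (ip_network("192.0.2.0/24"), 24, 64500),
    (ip_network("198.51.100.0/22"), 24, 64501),
]

def validate_route(prefix: str, origin_asn: int) -> str:
    """Return 'valid', 'invalid', or 'not-found' for a BGP announcement."""
    net = ip_network(prefix)
    covered = False
    for roa_prefix, max_len, asn in ROAS:
        # Does this ROA cover the announced prefix?
        if net.subnet_of(roa_prefix):
            covered = True
            # Valid only if the origin matches AND the announced prefix
            # is not more specific than the ROA's maxLength allows.
            if asn == origin_asn and net.prefixlen <= max_len:
                return "valid"
    # Covered by a ROA but no match: likely a hijack or misconfiguration.
    return "invalid" if covered else "not-found"

print(validate_route("192.0.2.0/24", 64500))    # valid
print(validate_route("192.0.2.0/24", 64666))    # invalid (wrong origin ASN)
print(validate_route("203.0.113.0/24", 64500))  # not-found (no covering ROA)
```

The “invalid” state is the whole point of the roadmap: without ROAs, a hijacked route simply lands in “not-found” and routers have nothing to reject.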

A Quantum Computers’ Status Update

On IEEE Spectrum I found this article by IBM researchers about the current status and possible future developments of Quantum Computers quite interesting. Even if there is no direct mention of “breaking RSA” (though Shor’s algorithm is mentioned), it is worth reading alongside the recent NIST announcement of the first 3 Post Quantum Encryption Standards (here and here).

The first World Quantum Readiness Day, September 26, 2024

Somehow I missed the announcement of DigiCert organizing the first “World Quantum Readiness Day”, see here and here. The purpose of this initiative is to help organizations prepare for the (future) arrival of Quantum Computers: to evaluate the risks and the opportunities, and to adopt measures that mitigate the former and take advantage of the latter.

Cryptanalysis, Hard Lattice Problems and Post Quantum Cryptography

Cryptanalysis is the study of the weaknesses of cryptographic algorithms, and it is essential for proving that a cryptographic algorithm is safe. Assuming that Quantum Computers will arrive in the near or distant future, cryptographic algorithms will need to be safe against both classical and quantum cryptanalysis.

Quantum cryptanalysis is the study of weaknesses of cryptographic algorithms which can be exploited only by algorithms running on Quantum Computers. Shor’s algorithm is the most important quantum algorithm because it can break algorithms such as RSA, on which most of our current IT security depends.

Post Quantum algorithms are mostly based on hard lattice problems, which are safe against Shor’s algorithm and thus remain safe even if a full Quantum Computer becomes available. But research keeps advancing in the study of quantum cryptanalysis, as shown by this recent paper which, luckily for us, contained an error that invalidated the proof. As commented by Bruce Schneier here, quantum cryptanalysis still has a long way to go before becoming a real threat: not only must a proof be formally correct and applicable to the real post-quantum algorithms instead of reduced models, it must also actually be able to run on a Quantum Computer, whenever one becomes available.
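To make concrete why Shor’s algorithm matters for RSA, here is a classical sketch of its core reduction: factoring N reduces to finding the period r of a^x mod N, after which a factor falls out via a gcd computation. The period finding below is brute force; the Quantum Computer’s only job is to perform exactly that step exponentially faster. The numbers are toy values for illustration.

```python
from math import gcd

def find_period(a: int, n: int) -> int:
    """Brute-force the order r of a mod n: the smallest r with a^r = 1 (mod n).
    This is the one step a Quantum Computer would do exponentially faster."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n: int, a: int) -> tuple[int, int]:
    """Classical part of Shor's algorithm: turn a period into factors of n.
    For some choices of a the period is odd and one must retry with another a."""
    assert gcd(a, n) == 1, "a must be coprime with n"
    r = find_period(a, n)
    assert r % 2 == 0, "odd period; retry with a different a"
    # a^(r/2) is a square root of 1 mod n; if it is nontrivial,
    # gcd(a^(r/2) +/- 1, n) reveals the factors.
    half = pow(a, r // 2, n)
    return gcd(half - 1, n), gcd(half + 1, n)

print(shor_factor(15, 7))  # (3, 5): the factors of 15
```

RSA’s security rests entirely on factoring being infeasible, so speeding up `find_period` breaks it; lattice-based post-quantum schemes are built so that no analogous period structure is known to exist.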

Latest AI Models can Autonomously Hack Websites

This research article is quite interesting and at the same time scary. It shows how the latest Large Language Models (LLMs) could be used to autonomously attack and hack Internet websites without human feedback or support.

The study shows that an AI model which

  1. can reach websites on the Internet through tools and/or APIs
  2. can use the responses of the websites as input to itself to plan further actions
  3. can read documents provided a priori by humans as a support library for possible use

has in principle (and, for GPT-4, in practice) the capability to interact with the target website, identify vulnerabilities like SQL Injection, XSS, etc., and build and perform a successful attack.
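The three capabilities above amount to a standard tool-using agent loop: the model plans an action, a tool executes it against the target, and the observation is fed back for further planning. Below is a deliberately harmless, abstract skeleton of that loop; `call_llm` and `run_tool` are placeholders I invented for illustration, not a real model API or attack tooling.

```python
def call_llm(history: list[str]) -> str:
    """Placeholder for the LLM: returns the next planned action."""
    return "done" if len(history) > 3 else f"probe step {len(history)}"

def run_tool(action: str) -> str:
    """Placeholder for tools such as an HTTP client (capability 1)."""
    return f"response to '{action}'"

def agent_loop(goal: str, documents: list[str], max_steps: int = 10) -> list[str]:
    # Capability 3: documents provided a priori seed the context.
    history = [goal] + documents
    for _ in range(max_steps):
        action = call_llm(history)      # the model plans the next action
        if action == "done":
            break
        observation = run_tool(action)  # capability 1: act on the target
        # Capability 2: the response becomes input for further planning.
        history += [action, observation]
    return history

trace = agent_loop("assess target", ["doc: common web weaknesses"])
print(trace)
```

The point of the paper is that once all three placeholders are real (a capable model, live tools, and exploit documentation), no human needs to sit inside this loop.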

The study also shows that, as of today, almost all AI models lack these three features at the required maturity level. Nonetheless, with the current speed of development of AI models, these features will become standard in very little time.

Due to the (future) ease and low cost of employing an AI model to hack a website, AI service providers face the critical task of preventing this type of abuse of their services, but website owners will in any case need to improve their security, since sooner or later “AI hacking as a service” offerings will appear.