This article studies the reliability of increasingly large LLMs (such as GPT, LLaMA, etc.) with respect to their correctness and their ability to solve more complex problems. A priori, one would expect that more powerful, larger, and “better” trained models would improve and become more reliable. The study instead shows that this does not really seem to be the case: even if the models become better at solving more complex problems as they grow, they also become less reliable, that is, they make more mistakes.
Author Archives: Andrea Pasquinucci
Passwords Requirements in the new NIST SP 800-63 Digital Identity Guidelines
NIST has just opened a Call for Comments on the Second Public Draft of Revision 4 of NIST SP 800-63 “Digital Identity Guidelines”. It is quite interesting to read the proposed changes to password requirements in section 3.1.1 and Appendix A, such as
- Verifiers and CSPs SHALL NOT impose other composition rules (e.g., requiring mixtures of different character types) for passwords.
- Verifiers and CSPs SHALL NOT require users to change passwords periodically. However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.
- When processing a request to establish or change a password, verifiers SHALL compare the prospective secret against a blocklist that contains known commonly used, expected, or compromised passwords.
- Verifiers SHALL allow the use of password managers. Verifiers SHOULD permit claimants to use the “paste” functionality when entering a password to facilitate their use.
Appendix A makes it clear that the purpose of the new requirements is twofold: to make it easier for users to manage passwords and, at the same time, to have users create passwords that are reasonably secure against relevant attacks.
With the adoption of Single Sign On, Federation, Security Keys etc., the scenario concerning password management (and the eventual dismissal of passwords altogether) is rapidly changing. However, passwords remain a key security risk today, and any change that goes in the direction of easier and safer password management is very welcome.
A Roadmap to Enhancing Internet Routing Security
The White House has just published a roadmap for improving Internet routing security (here the announcement and the document, here a news comment).
The US government is pushing for the adoption of the Resource Public Key Infrastructure (RPKI). It is interesting to note that Europe is currently ahead in its adoption: approximately 70% of BGP routes are protected by RPKI in Europe, compared to 39% in the US.
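For context, RPKI route origin validation (the logic of RFC 6811, much simplified) boils down to comparing a BGP announcement against the published ROAs. A toy sketch, with entirely hypothetical ROA data and prefixes from documentation ranges:

```python
# Toy sketch of RPKI Route Origin Validation (RFC 6811 logic, simplified).
# A ROA authorizes an origin AS to announce a prefix up to a max length.

import ipaddress

# Hypothetical ROAs: (covered prefix, max length, authorized origin AS)
ROAS = [
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
]

def validate(prefix: str, origin_as: int) -> str:
    """Classify an announcement as valid / invalid / not-found."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_as in ROAS:
        if net.subnet_of(roa_net):
            covered = True  # some ROA covers this prefix
            if origin_as == roa_as and net.prefixlen <= max_len:
                return "valid"
    # covered by a ROA but wrong origin or too-long prefix -> invalid
    return "invalid" if covered else "not-found"

print(validate("192.0.2.0/24", 64500))      # authorized origin
print(validate("192.0.2.0/24", 64501))      # wrong origin AS
print(validate("198.51.100.0/24", 64500))   # no covering ROA
```

The "not-found" outcome is why partial deployment still helps but does not fully protect: routes without a covering ROA cannot be judged either way.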
A Quantum Computers’ Status Update
On IEEE Spectrum I found quite interesting this article by IBM researchers about the current status and possible future developments of Quantum Computers. Even if there is no direct mention of “breaking RSA” (Shor’s algorithm is mentioned, though), it is worth considering alongside the recent NIST announcement of the first three Post-Quantum Encryption Standards (here and here).
The first World Quantum Readiness Day, September 26, 2024
Somehow I missed the announcement of DigiCert organizing the first “World Quantum Readiness Day”, see here and here. The purpose of this initiative is to help organizations prepare for the (future) arrival of Quantum Computers: to evaluate the risks and the opportunities, and to adopt measures to mitigate the former and take advantage of the latter.
Cryptanalysis, Hard Lattice Problems and Post Quantum Cryptography
Cryptanalysis is the study of weaknesses in cryptographic algorithms, and it is essential for proving that a cryptographic algorithm is safe. Assuming that Quantum Computers will arrive in the near or distant future, cryptographic algorithms will need to be safe against both classical and quantum cryptanalysis.
Quantum cryptanalysis is the study of weaknesses in cryptographic algorithms that can be exploited only by algorithms running on Quantum Computers. Shor’s algorithm is the most important quantum algorithm because it will break algorithms such as RSA, and with them most of our current IT security.
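To see why Shor’s algorithm matters, here is a toy sketch of its classical skeleton: factoring a modulus reduces to finding the multiplicative order of a random base, and it is exactly this order-finding step (done below by brute force) that a Quantum Computer performs efficiently. All names and the tiny modulus are illustrative only.

```python
# Toy illustration of Shor's reduction: factor N via the order of a
# random base a modulo N. The order-finding step below is brute force;
# a Quantum Computer would do it efficiently, which is the whole point.

import math
import random

def find_order(a: int, n: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod n) -- exponential classically."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n: int) -> int:
    """Return a nontrivial factor of composite odd n (toy sizes only)."""
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g  # lucky: a already shares a factor with n
        r = find_order(a, n)
        if r % 2 == 0:
            y = pow(a, r // 2, n)
            f = math.gcd(y - 1, n)
            if 1 < f < n:
                return f  # a^(r/2) - 1 shares a factor with n

n = 15  # tiny RSA-style modulus: 3 * 5
f = shor_classical(n)
print(f, n // f)  # the two prime factors, in some order
```

For a real RSA modulus the brute-force `find_order` loop is hopeless, while the quantum version runs in polynomial time — that asymmetry is the entire threat.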
Post-Quantum algorithms are mostly based on hard lattice problems, which are safe against Shor’s algorithm and thus remain safe even if a full Quantum Computer becomes available. But research in quantum cryptanalysis keeps advancing, as shown by this recent paper which, luckily for us, contained an error that invalidated the proof. As Bruce Schneier comments here, quantum cryptanalysis still has a long way to go before becoming a real threat: not only must a proof be formally correct and apply to the real post-quantum algorithms rather than to reduced models, it must also actually run on a Quantum Computer, whenever one becomes available.
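A concrete example of such a hard lattice problem is Learning With Errors (LWE), variants of which underlie the lattice-based NIST standards:

```latex
% Learning With Errors (LWE): for a secret s in Z_q^n, the attacker
% sees many pairs (a_i, b_i) with random a_i in Z_q^n and small
% random errors e_i, where
\[
  b_i = \langle a_i, s \rangle + e_i \pmod{q},
\]
% and must recover s. Without the errors e_i this is plain linear
% algebra; with them, no efficient classical or quantum algorithm
% is known.
```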
What can be Done with the Quantum Computers we can Build
This is an interesting article about how it will be possible to use the Quantum Computers that realistically will be built in the next decade. The main areas seem to be solving quantum problems in chemistry, material science, and pharma. Google and XPrize also offer prizes of up to US $5 million to those who can find more practical applications of the Quantum Computers that will be available in the near future.
A “Morris” Worm for Generative Artificial Intelligence
Almost every day there is a new announcement about Artificial Intelligence and Security, and not all of them look good. The latest (here) describes how it is possible to create a worm that propagates between Generative Artificial Intelligence models. For (understandable) historical reasons, it has been named “Morris II”.
The approach seems simple: by abusing the Retrieval-Augmented Generation (RAG) capabilities of these models (that is, the capability of retrieving data from external authoritative, pre-determined knowledge sources), it is possible to propagate adversarial self-replicating prompts between different Gen-AI models. In other words, through external shared sources such as email, a Gen-AI model can propagate the worm to another model. The key point is that the adversarial input (prompt) induces the Gen-AI model to replicate that prompt in its output, so that it can be picked up by another Gen-AI model.
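The propagation mechanism can be illustrated with a toy simulation. The “models” and “stores” below are trivial stand-ins that only mimic the replicate-on-retrieval behavior; nothing here resembles a real attack payload.

```python
# Toy simulation of the propagation mechanism described above: a
# "self-replicating" prompt that, once in a model's retrieval store,
# gets copied into the model's output and hence into the next store.

WORM = "<<replicate-me: adversarial instructions>>"

def toy_rag_model(query: str, store: list[str]) -> str:
    """Stand-in Gen-AI model: answers using retrieved context; a
    retrieved adversarial prompt makes it echo the prompt verbatim."""
    context = " ".join(store)
    reply = f"answer to {query!r}"
    if WORM in context:
        reply += " " + WORM  # the adversarial prompt replicates itself
    return reply

# Agent A's store is poisoned (e.g. via a crafted email it ingested)
store_a = ["normal doc", WORM]
store_b = ["normal doc"]

out_a = toy_rag_model("summarize my inbox", store_a)
store_b.append(out_a)  # A's output lands in B's knowledge source
out_b = toy_rag_model("summarize my inbox", store_b)
print(WORM in out_b)  # the worm has propagated from A to B
```

The defense suggested by this picture is equally simple to state: sanitize or isolate what flows from model outputs into retrieval stores.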
This is only a research study and the authors intend to raise this issue in order to prevent the real appearance of Morris II-type worms.
But all this only means that we still have a lot to learn, and a lot to do, before we can create and use Artificial Intelligence securely.
Latest AI Models can Autonomously Hack Websites
This research article is quite interesting and at the same time scary. It shows how the latest Large Language Models (LLMs) could be used to autonomously attack and hack websites on the Internet without human feedback or support.
The study shows that an AI model which
- can reach websites on the Internet through tools and/or APIs
- can use the websites’ responses as input to itself to plan further actions
- can read documents provided in advance by humans as a support library
has in principle (and for GPT-4, in practice) the capability to interact with the target website, identify vulnerabilities such as SQL Injection, XSS, etc., and build and execute a successful attack.
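The three capabilities combine into a simple agent loop. Here is a toy sketch with stand-in functions — no real model or target is involved, the “planner” is a hard-coded two-step script, and the probes are purely illustrative:

```python
# Minimal sketch of the agent loop the three capabilities enable:
# call tools (HTTP), feed responses back into the model as context,
# consult support documents. All functions are hypothetical stand-ins.

def llm_plan(history: list[str], docs: list[str]) -> str:
    """Stand-in for the LLM choosing the next action from feedback."""
    if not history:
        return "GET /?id=1'"  # probe: does a quote break the query?
    if len(history) == 1 and "error in your SQL syntax" in history[-1]:
        return "GET /?id=1' OR '1'='1 --"  # classic SQLi follow-up
    return "stop"

def fake_http(request: str) -> str:
    """Stand-in target site that leaks a SQL error on a stray quote."""
    if "OR '1'='1" in request:
        return "dump: all rows"
    if "'" in request:
        return "error in your SQL syntax"
    return "200 OK"

docs = ["SQL injection primer", "XSS primer"]  # support library
history: list[str] = []
while True:
    action = llm_plan(history, docs)
    if action == "stop":
        break
    history.append(fake_http(action))  # response fed back to the model

print(history)
```

The loop structure — act, observe, re-plan — is the whole point: none of the three capabilities is dangerous alone, but closing the feedback loop is what turns a chat model into an autonomous agent.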
The study also shows that, as of today, almost all AI models lack these three features at the required maturity level. Nonetheless, given the current speed of development of AI models, these features will likely become standard in very little time.
Given the (future) ease and low cost of employing an AI model to hack a website, AI service providers face the critical task of preventing this type of abuse of their services; website owners will in any case need to improve their security, since sooner or later “AI hacking as a service” offerings will appear.
Hardware Based Fully Homomorphic Encryption
Slowly but steadily, Fully Homomorphic Encryption (FHE) improves. It is in fact a dream for everyone: service providers (SaaS) would not need to worry about the confidentiality of their clients’ information, and clients would not face the risk of having confidential information processed by a third party.
In a few words, FHE allows computing directly on encrypted data, so that decrypting the output yields the result of the computation as if it had been performed on the plaintext. This was mostly an idea until 2009, when Craig Gentry described the first plausible construction of a fully homomorphic encryption scheme. The major problem of all the proposed FHE schemes is that they are extremely slow and resource intensive. This year, however (see here for example), new chips should arrive on the market which implement in hardware the critical operations of FHE computations, speeding them up many times. This is just another step towards a practical FHE, and there is still a long way to go, but we are getting closer.
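To make “computing on encrypted data” concrete, here is a toy example using unpadded RSA, which is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product. Real FHE schemes support arbitrary computations and are built on lattices, not on RSA; this is only a few-line illustration with insecure toy parameters.

```python
# Toy demonstration of a homomorphic property, using unpadded RSA
# (multiplicatively homomorphic): Enc(a) * Enc(b) = Enc(a * b) mod n.
# Tiny insecure parameters, for illustration only.

p, q = 61, 53
n = p * q                          # modulus: 3233
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
c = (enc(a) * enc(b)) % n  # multiply the ciphertexts only
print(dec(c), a * b)       # the product was computed while encrypted
```

A *fully* homomorphic scheme extends this idea to both addition and multiplication (and hence arbitrary circuits), which is exactly what makes it so expensive and why hardware acceleration matters.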