Cryptanalysis, Hard Lattice Problems and Post Quantum Cryptography

Cryptanalysis is the study of weaknesses of cryptographic algorithms and it is essential to prove that a cryptographic algorithm is safe. Assuming that in the near or distant future, Quantum Computers will arrive, cryptographic algorithms will need to be safe against standard and quantum cryptanalysis.

Quantum cryptanalysis is the study of weaknesses of cryptographic algorithms which can be exploited only by algorithms running on Quantum Computers. Shore’s algorithm is the most important quantum algorithm because it will break algorithms such as RSA and most of our current IT security.

Post Quantum algorithms are mostly based on hard lattice problems which are safe against the Shor algorithm and thus are safe also in case a full Quantum Computer will be available. But research keeps advancing in the study of quantum cryptanalysis, as shown by this recent paper which, luckily for us, contained an error that invalidated the proof. As commented by Bruce Schneier here, quantum cryptanalysis has still a long way ahead to become a real threat since not only the proof should be formally correct and applicable to the real post-quantum algorithms instead of reduced models, but it should actually be able to run on a Quantum Computer, whenever it will be available.

What can be Done with the Quantum Computers we can Build

This is an interesting article about how it will be possible to use Quantum Computers that realistically will be built in the next decade. The main areas seem to be solving quantum problems in chemistry, material science, and pharma. And there are prizes offered by Google and XPrize up to US $5 million to those who can find more practical applications of the Quantum Computers which will be available in the near future.

A “Morris” Worm for Generative Artificial Intelligence

Almost every day there is a new announcement about Artificial Intelligence and Security, and not all of them look good. The latest (here) describes how it is possible to create a worm that propagates between Generative Artificial Intelligence models. For (understandable) historical reasons, it has been named “Morris II”.

The approach seems simple: abusing the Retrieval-Augmented Generation (RAG) capabilities of these models (that is the capability of retrieving data from external authoritative, pre-determined knowledge sources) it is possible to propagate adversarial self-replicating prompts between different Gen-AI models. In other words, through external shared sources such as email, a Gen-AI model can propagate the worm to another model. Notice that the effect of the input data (prompt) to a Gen-AI model is to replicate that prompt in output so that it can be picked up by another Gen-AI model.

This is only a research study and the authors intend to raise this issue in order to prevent the real appearance of Morris II-type worms.

But all this only means that we have still a lot to learn and a lot to do to be able to create and use Artificial Intelligence securely.

Latest AI Models can Autonomously Hack Websites

This research article is quite interesting and at the same time scary. It shows how the latest Large Language Models (LLMs) could be used to autonomously attack and hack Internet websites without human feedback or support.

The study shows that an AI model which

  1. can reach websites in Internet through tools and/or API
  2. can use the response of the websites as an input to itself to plan further actions
  3. can read documents provided a priori by humans as a support library of possible use

has in principle (and for GPT4, in practice) the capability to interact with the target website, identify vulnerabilities like SQL Injection, XSS, etc., and build and perform a successful attack.

The study also shows that, as of today, almost all AI models lack the three features to the maturity level required. Nonetheless, with the current speed of development of AI models, these features will become standard in very little time.

Due to the (future) ease and low cost of employing an AI model to hack a website, AI service providers face the critical task of preventing this type of abuse of their services, but owners of websites will need anyway to improve their security since sooner or later “AI hacking as a service” offerings will appear.

Hardware Based Fully Homomorphic Encryption

Slowly but Fully Homomorphic Encryption (FHE) improves. Actually this is a dream for all: service providers (SaaS) would not need to worry about the confidentiality of their clients’ information, and clients about the risk of having confidential information processed by a third party.

In a few words, FHE provides computing on encrypted data so that the result of the computation is obtained once data is decrypted. This was mostly an idea until 2009 when Craig Gentry described the first plausible construction for a fully homomorphic encryption scheme. But the major problem of all the proposed FHE schemes is that they are extremely slow and resource intensive. But this year (see here for example) new chips should arrive on the market which implement in hardware the critical operations of FHE computations, speeding them up many times. Still, this is just another step forward to a practical FHE, there is still a long way to go, but we are getting closer.

A New Open Source Competitor in the Large Language AI Models Arena

This Chinese Startup Is Winning the Open Source AI Race” is an interesting article from Wired on Yi-34B, from Chinese AI Startup 01.AI, which is currently leading many leaderboards comparing the power of AI models. Moreover, together with Meta’s Llama 2 from which it borrows part of its architecture, Yi-34B is one of the few top LLM to be Open Source. Yi-34B adopts a new approach to model training which seems better than what used by many competitors and possibly part of the reason of its current success.

A lot has changed in the AI arena in the last couple of years, and one notable fact is that most of the leading models now are Closed Source. Possible advantages of being Open Source are that it is easier to make external contributions to the model’s development (mostly from university researchers), and that there should be a lower barrier to build an “app” ecosystem around it.

On Deceptive Large Language AI Models

Interesting research article about how to remove backdoors and deceptive behaviour from Large Language AI models (LLM). The simplest example is a model trained to write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. The result of the study is that it can be very difficult to remove such behaviour using current standard safety training techniques. Deceptive behaviour can be introduced intentionally in the LLM during the training but it could also happen by poor training. Applying current techniques to identify and remove backdoors, such as Adversarial training, can actually fail and end up providing a false sense of security. Another result of the study is that the larger LLM seem more prone to be “Deceptive”.

Writing (in-) Secure Code with AI Assistance

This is an interesting research article on the security of code written with AI Assistance; the large-scale user study shows that code written with an AI Assistant is usually less secure, that is contains more vulnerabilities, than code written without AI support.

Thus, at least as of today, relying on an AI Assistant to write better and more secure code could work out badly. But AI is changing very rapidly, soon it could learn math and to write secure and super efficient code. We’ll see…

Is Quantum Computing Harder than Expected?

This is a quite interesting article on Quantum Computing and how hard it really is.

It is well known that Quantum Computers are prone to Quantum Errors, and this issue grows with the number of Qubits. The typical estimate is that an useful Quantum Computer would need approx. 1.000 physical Qubits to correct the Quantum Errors of a single “logical” Qubit. Even if there are advancements in this topic (see for example this post), this is still a problem to be solved in practice.

Another potential issue is that Quantum Computers have been proposed to efficiently solve many problems including optimization, fluid dynamics etc. besides those problems for which a Quantum Computer would provide exponential speed-up, such as factoring large numbers and simulating quantum systems. But if a Quantum Computer does not provide an exponential speed-up in solving a problem, there is the possibility that actually it would be slower than a current “classical” computer.

But the big question remains: will a real useful Quantum Computer arrive soon? If yes, how soon?

Is the “Turing Test” Dead?

This is a very good question in these times of Generative and Large Language Artificial Intelligence models, which some researchers answered in the affirmative, see here and here for their proposals to replace the Turing Test.

But… other researchers still believe in the Turing Test and applied it with somehow surprising results: Humans 63%, GPT-4 41%, ELIZA 27% and GPT-3.5  14%. We, humans, are still better than GPT-4, but the surprise is the third position by ELIZA, a chatbot from the ’60s, ahead of GPT-3.5 (see here and here).