A “Morris” Worm for Generative Artificial Intelligence

Almost every day there is a new announcement about Artificial Intelligence and Security, and not all of them look good. The latest (here) describes how it is possible to create a worm that propagates between Generative Artificial Intelligence models. For (understandable) historical reasons, it has been named “Morris II”.

The approach seems simple: abusing the Retrieval-Augmented Generation (RAG) capabilities of these models (that is the capability of retrieving data from external authoritative, pre-determined knowledge sources) it is possible to propagate adversarial self-replicating prompts between different Gen-AI models. In other words, through external shared sources such as email, a Gen-AI model can propagate the worm to another model. Notice that the effect of the input data (prompt) to a Gen-AI model is to replicate that prompt in output so that it can be picked up by another Gen-AI model.

This is only a research study and the authors intend to raise this issue in order to prevent the real appearance of Morris II-type worms.

But all this only means that we have still a lot to learn and a lot to do to be able to create and use Artificial Intelligence securely.

Is the “Turing Test” Dead?

This is a very good question in these times of Generative and Large Language Artificial Intelligence models, which some researchers answered in the affirmative, see here and here for their proposals to replace the Turing Test.

But… other researchers still believe in the Turing Test and applied it with somehow surprising results: Humans 63%, GPT-4 41%, ELIZA 27% and GPT-3.5  14%. We, humans, are still better than GPT-4, but the surprise is the third position by ELIZA, a chatbot from the ’60s, ahead of GPT-3.5 (see here and here).