“Securing the Perception of Advanced Driving Assistance Systems Against Digital Epileptic Seizures Resulting from Emergency Vehicle Lighting” is an interesting research study on the current state of image recognition in advanced driving assistance and autonomous vehicle systems. The study found that some standard Driving Assistance Systems can be thoroughly confused by emergency vehicle flashers, with the risk of causing serious incidents. Machine Learning models are part of the cause of this vulnerability, as well as part of the solution proposed by the researchers, called “Caracetamol”.
Is the “Turing Test” Dead?
This is a very good question in these times of Generative and Large Language Artificial Intelligence models, and some researchers have answered it in the affirmative; see here and here for their proposals to replace the Turing Test.
But… other researchers still believe in the Turing Test and applied it, with somewhat surprising results: Humans 63%, GPT-4 41%, ELIZA 27% and GPT-3.5 14%. We humans are still better than GPT-4, but the surprise is the third place taken by ELIZA, a chatbot from the ’60s, ahead of GPT-3.5 (see here and here).
AI Transparency not doing so well
Stanford University researchers have just released a report presenting a “Foundation Model Transparency Index” (here). The first evaluation did not go well: the highest score is 54 out of 100. Reviewers and experts in the field point out that “transparency is on the decline while capability is going through the roof”, as Stanford CRFM Director Percy Liang told Reuters in an interview (see also here).
Cheating People vs. Cheating AI Models
It seems that, in at least one respect, current AI models, or rather ML Foundation/Generative models, are quite similar to humans: fraudsters can always find ways of cheating them. One of the latest examples is described here.
AI and the Extinction of the Human Race
The Center for AI Safety (here) has just published the following “Statement on AI Risk” (here):
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
The list of signatories is impressive (just look at the first 4) and should make us think more deeply about ourselves and AI & ML.
On Large Language Models (and AI Models) Explainability
Researchers at OpenAI have recently released a scientific paper (here) entitled “Language models can explain neurons in language models”. The paper is quite technical, but it is interesting to quote from the Introduction:
Language models have become more capable and more widely deployed, but we do not understand how they work. Recent work has made progress on understanding a small number of circuits and narrow behaviors, but to fully understand a language model, we’ll need to analyze millions of neurons. This paper applies automation to the problem of scaling an interpretability technique to all the neurons in a large language model. Our hope is that building on this approach of automating interpretability will enable us to comprehensively audit the safety of models before deployment.
and to read the concluding Discussion section.
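As a very rough illustration of the “automating interpretability” loop described in the quote, here is a toy, self-contained sketch in which the explainer and simulator models are replaced by trivial keyword stand-ins. It shows only the structure of the explain / simulate / score loop, not the paper’s actual code or API.

```python
# Toy illustration of the "explain, then simulate, then score" loop.
# In the real work a strong LM (GPT-4) plays the explainer and the simulator;
# here both are replaced by trivial keyword stand-ins so the loop runs on its own.
import numpy as np

texts = ["the cat sat", "dogs bark loudly", "a cat purrs", "rainy weather today"]

def neuron_activation(text):
    # Stand-in for a real neuron: fires on mentions of "cat".
    return 1.0 if "cat" in text else 0.0

def explainer(texts, activations):
    # Stand-in for the explainer model: produces a short natural-language
    # explanation of what makes the neuron fire.
    return "fires on the token 'cat'"

def simulator(explanation, text):
    # Stand-in for the simulator model: predicts the activation from the
    # explanation alone, without access to the real neuron.
    return 1.0 if "cat" in text else 0.0

real = np.array([neuron_activation(t) for t in texts])
explanation = explainer(texts, real)
simulated = np.array([simulator(explanation, t) for t in texts])

# Score the explanation by how well the simulated activations track the real ones.
score = float(np.corrcoef(real, simulated)[0, 1])
print(explanation, score)
```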
On AI/ML Failures
An interesting article on “7 Revealing Ways AIs Fail”.
Brief on AI/ML
Mixed Results on AI/ML from Google
Artificial Intelligence, or rather Machine Learning, is increasingly becoming part of everyday IT, but it is still unclear (at least to me) what its real potential, limits, risks etc. are.
For example, very recently there have been two somewhat contradictory pieces of news from Google/Alphabet-funded research in AI/ML:
- the paper “Underspecification Presents Challenges for Credibility in Modern Machine Learning” (the full paper is here) studies some possible reasons why “ML models often exhibit unexpectedly poor behaviour when they are deployed in real-world domains”, but it is unclear (at least to me) to what extent these challenges can be overcome (a toy illustration of the underspecification idea is sketched after this list);
- CASP has announced (the announcement is here) that DeepMind AlphaFold has practically solved the problem of predicting how proteins fold, which is fundamental for finding cures for a great many diseases, including cancer, dementia and even infectious diseases such as COVID-19.
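The following is a minimal numpy sketch (mine, not taken from the paper) of the underspecification idea mentioned in the first bullet: two predictors that the training distribution cannot tell apart behave identically in-distribution but diverge once a spurious correlation in the data is broken.

```python
# Toy illustration of underspecification with hand-specified predictors.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Training / i.i.d. test distribution: feature b is a near-copy of feature a,
# and the label depends only on a.
a = rng.normal(size=n)
b = a + 0.01 * rng.normal(size=n)
y = (a > 0).astype(int)

def predict_on_a(a, b):
    return (a > 0).astype(int)   # uses the causal feature

def predict_on_b(a, b):
    return (b > 0).astype(int)   # uses the spurious proxy

print("i.i.d. accuracy:",
      (predict_on_a(a, b) == y).mean(),
      (predict_on_b(a, b) == y).mean())   # both ~1.0: the data cannot distinguish them

# Deployment-like shift: the proxy feature no longer tracks a.
a2 = rng.normal(size=n)
b2 = rng.normal(size=n)
y2 = (a2 > 0).astype(int)

print("shifted accuracy:",
      (predict_on_a(a2, b2) == y2).mean(),   # still ~1.0
      (predict_on_b(a2, b2) == y2).mean())   # ~0.5: no better than chance
```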
A New Theoretical Result on the Learnability of Machine Learning (AI)
Theoretical mathematical results often have little immediate practical application and in some cases can initially seem obvious. Still, they usually are not: it is quite different to suspect that a result holds true and to prove it mathematically in a rigorous way. Moreover, such a proof often helps explain the reasons behind the result and its possible applications.
Very recently a theoretical (mathematical) result in Machine Learning (the current main incarnation of Artificial Intelligence) has been announced: the paper can be found in Nature here and a comment here.
Learnability can be defined as the ability to make predictions about a large data set by sampling a small number of data points, which is essentially what Machine Learning does. The mathematical result is that, in general, this problem is ‘undecidable’: it can neither be proved nor disproved that there always exists a limited sampling set which allows one to ‘learn’ (for example, to always recognise a cat in an image from a limited sample of cat images). The authors have proven that Learnability is related to fundamental mathematical questions going back to Cantor’s set theory and the work of Gödel and Alan Turing, and to the theory of the compressibility of information.
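For readers who want the “learning from a small sample” idea stated precisely, here is the standard PAC-learnability formulation (the Nature paper works in a related but more specialised learning setting, where the existence of such a finite sample bound turns out to be independent of the standard axioms of set theory):

```latex
% Standard PAC learnability, stated for orientation.
% A hypothesis class H is PAC learnable if there exist a sample-complexity
% function m_H and a learning algorithm A such that:
\[
\forall\, \varepsilon,\delta\in(0,1),\ \forall\, \mathcal{D},\quad
m \ge m_{\mathcal{H}}(\varepsilon,\delta)\ \Longrightarrow\
\Pr_{S\sim\mathcal{D}^{m}}\!\Big[\,
  L_{\mathcal{D}}\big(A(S)\big)\ \le\ \inf_{h\in\mathcal{H}} L_{\mathcal{D}}(h)+\varepsilon
\Big]\ \ge\ 1-\delta,
\]
\[
\text{where}\quad L_{\mathcal{D}}(h)=\Pr_{(x,y)\sim\mathcal{D}}\big[h(x)\neq y\big].
\]
```

The undecidability result says that, in the particular setting studied in the paper, the question of whether such a finite sample bound exists cannot be settled one way or the other within standard set theory.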
This result poses some theoretical limits on what Machine Learning can ever achieve, even if it does not seem to have any immediate practical consequence.