# Tag Archives: Machine Learning

# Brief on AI/ML

Interesting article on “7 Revealing Ways AIs Fail”.

# Mixed Results on AI/ML from Google

Artificial Intelligence, or more precisely Machine Learning, is increasingly becoming part of everyday IT, but it is still unclear (at least to me) what its real potential, limits and risks are.

For example, two somewhat contradictory pieces of news have very recently come from Google/Alphabet-funded research in AI/ML:

- the paper “*Underspecification Presents Challenges for Credibility in Modern Machine Learning*” (here the full paper) studies some possible reasons why “*ML models often exhibit unexpectedly poor behaviour when they are deployed in real-world domains*”, although it is unclear (at least to me) how far these *challenges* can be overcome;
- CASP has announced (here the announcement) that DeepMind’s AlphaFold has practically solved the problem of predicting how proteins fold, which is fundamental to finding cures for almost all diseases, including cancer, dementia and even infectious diseases such as COVID-19.
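The underspecification issue can be illustrated with a toy example (my own construction, not the paper's experiments): two classifiers that fit the same training data equally well can still disagree on inputs the training set never covered, so which one the training procedure happens to pick determines the deployed behaviour.

```python
# Toy illustration of underspecification (my own sketch, not the paper's
# setup): two rules indistinguishable on the training data that behave
# differently once deployed.

# Hypothetical training set: (feature, label) pairs with a coverage gap
# between 0.5 and 0.8.
train = [(0.1, 0), (0.3, 0), (0.5, 0), (0.8, 1), (0.9, 1)]

def rule_a(x):          # threshold placed just above the last negative example
    return x > 0.55

def rule_b(x):          # threshold placed just below the first positive example
    return x > 0.75

def train_accuracy(rule):
    return sum(rule(x) == bool(y) for x, y in train) / len(train)

print(train_accuracy(rule_a), train_accuracy(rule_b))  # both 1.0
print(rule_a(0.65), rule_b(0.65))                      # True False: they disagree
```

Nothing in the training data distinguishes the two rules, so a deployed system could end up with either one; the disagreement only surfaces on real-world inputs from the uncovered region.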

# A New Theoretical Result on the Learnability of Machine Learning (AI)

Theoretical mathematical results often have little immediate practical application, and in some cases can initially seem obvious. Yet they are usually not obvious as such: it is quite different to suspect that a result holds true and to prove it mathematically in a rigorous way. Moreover, such a proof often helps explain the reasons behind the result and its possible applications.

Very recently a theoretical (mathematical) result in Machine Learning (the current main form of Artificial Intelligence) was announced: the paper can be found in Nature here and a comment here.

Learnability can be defined as the ability to make predictions about a large data set by sampling a small number of data points; this is what Machine Learning usually does. The mathematical result is that, in general, this problem is ‘undecidable’: it is impossible to prove that there always exists a limited sampling set that allows one to ‘learn’ (for example, to always recognise a cat in an image after training on a limited number of cat images). The mathematicians have shown that learnability is linked to fundamental mathematical problems going back to Cantor’s set theory and the work of Gödel and Alan Turing, and that it is related to the theory of compressibility of information.
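As an entirely informal illustration of the sampling idea (a sketch of my own, saying nothing about the undecidability result itself), here is a tiny Python example of learning a threshold rule from a small sample and checking how well it generalises to a larger population; the ‘true’ threshold of 0.6 and all names are arbitrary assumptions.

```python
import random

random.seed(0)

# Hypothetical population: 10,000 points in [0, 1], labelled by an unknown
# "true" threshold (0.6 is an arbitrary choice for this sketch).
TRUE_THRESHOLD = 0.6
population = [random.random() for _ in range(10_000)]

def true_label(x):
    return x > TRUE_THRESHOLD

def learn_threshold(sample):
    """Learn a threshold rule from labelled sample points: place the learned
    threshold midway between the largest negative and smallest positive
    example seen in the sample."""
    pos = [x for x in sample if true_label(x)]
    neg = [x for x in sample if not true_label(x)]
    if not pos or not neg:
        return 0.5  # degenerate sample: fall back to an arbitrary guess
    return (max(neg) + min(pos)) / 2

sample = random.sample(population, 30)  # a small sample of the population...
learned = learn_threshold(sample)

# ...already yields a rule that predicts well on the whole population.
accuracy = sum((x > learned) == true_label(x)
               for x in population) / len(population)
```

For this simple class of rules a small sample reliably suffices; the undecidability result concerns whether such a guarantee can be proved to exist in general, not whether it holds in easy cases like this one.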

This result poses some theoretical limits on what Machine Learning can ever achieve, even if it does not seem to have any immediate practical consequence.