Intelligenza Artificiale – Un approccio alla gestione dei rischi per le aziende

A contribution written with Tamara Devalle has just been published in the Clusit 2023 report, released today (14 March), under the title

Intelligenza Artificiale – Un approccio alla gestione dei rischi per le aziende (“Artificial Intelligence – An approach to risk management for companies”)

The Clusit 2023 Report can be downloaded as a PDF here

Mixed Results on AI/ML from Google

Artificial Intelligence, or rather Machine Learning, is increasingly becoming part of everyday IT, but it is still unclear (at least to me) what its real potential, limits and risks are.

For example, very recently there have been two somewhat contradictory pieces of news from Google/Alphabet-funded research in AI/ML:

  • the paper “Underspecification Presents Challenges for Credibility in Modern Machine Learning” (here the full paper) studies some possible reasons why “ML models often exhibit unexpectedly poor behaviour when they are deployed in real-world domains”, but it is unclear (at least to me) to what extent these challenges can be overcome (a small illustrative sketch follows this list);
  • CASP has announced (here the announcement) that DeepMind AlphaFold has practically solved the problem of predicting how proteins fold, which is fundamental to finding cures for almost all diseases, including cancer, dementia and even infectious diseases such as COVID-19.
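
To make the underspecification issue concrete, here is a minimal sketch: two models with the same architecture, data and validation accuracy, differing only in their random seed, can still disagree once the input distribution shifts. The dataset, model and shift below are illustrative assumptions of mine, not the experiments from the paper.

```python
# A minimal sketch of underspecification: two models with (near-)identical
# validation accuracy can still disagree once the data distribution shifts.
# Dataset, model choice and shift are illustrative assumptions, not the
# experiments from the Google paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Same architecture and data; only the random seed differs.
models = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                        random_state=seed).fit(X_train, y_train)
          for seed in (1, 2)]

for i, m in enumerate(models):
    print(f"model {i} validation accuracy: {m.score(X_val, y_val):.3f}")

# A crude distribution shift: add noise the training data never saw.
X_shift = X_val + np.random.default_rng(0).normal(0, 2.0, X_val.shape)
disagree = np.mean(models[0].predict(X_shift) != models[1].predict(X_shift))
print(f"disagreement rate under shift: {disagree:.3f}")
```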

Subject Matter Experts on Artificial Intelligence and IT Crime

Artificial Intelligence (AI), across its different fields from Machine Learning to Generative Adversarial Networks, has been the subject of a study (here the link to the paper), or perhaps better an evaluation, by a group of Subject Matter Experts (SMEs) aimed at identifying the riskiest scenarios in which attackers could use, abuse or defeat it. The scenarios include cases in which AI is used for security purposes and an attacker manages to defeat it, cases in which AI is used for other purposes and an attacker manages to abuse it to commit a crime, and cases in which an attacker uses AI to build a tool to commit a crime.

Overall, the SMEs identified 20 high-level scenarios and ranked them by multiple criteria, including the harm or profit the crime would produce and how difficult this type of crime would be to stop or defeat.
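
As an aside, a toy sketch of what this kind of multi-criteria ranking could look like follows; the criteria, the 1–5 scale and every score are made-up placeholders of mine, not the SMEs’ actual ratings, which came from expert judgement rather than a fixed formula.

```python
# A toy sketch of multi-criteria risk ranking. The criteria and all scores
# below are made-up placeholders, NOT the SMEs' actual ratings: the real
# study relied on expert judgement, not on a fixed aggregation formula.
scenarios = {
    # scenario: (harm, profit, difficulty_to_defeat), each rated 1-5
    "Audio/video impersonation":      (5, 5, 5),
    "Tailored phishing":              (4, 5, 4),
    "Driverless vehicles as weapons": (5, 2, 4),
}

def risk_score(ratings: tuple[int, ...]) -> float:
    # Unweighted average; any aggregation rule is itself a modelling choice.
    return sum(ratings) / len(ratings)

# Rank scenarios from highest to lowest composite risk.
for name, ratings in sorted(scenarios.items(),
                            key=lambda kv: risk_score(kv[1]),
                            reverse=True):
    print(f"{risk_score(ratings):.2f}  {name}")
```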

It is very interesting to see the six scenarios considered to carry the highest risk:

  • Audio/video impersonation
  • Driverless vehicles as weapons
  • Tailored phishing
  • Disrupting AI-controlled systems
  • Large-scale blackmail
  • AI-authored fake news.

More details can be found in the above-mentioned paper.

A New Theoretical Result on the Learnability of Machine Learning (AI)

Theoretical mathematical results often have little immediate practical application and in some cases can initially seem obvious. Still, they are usually not obvious as such, since imagining that a result holds true is quite different from proving it mathematically in a rigorous way. Moreover, such a proof often helps explain the reasons behind the result and its possible applications.

Very recently a theoretical (mathematical) result in Machine Learning (the current main incarnation of Artificial Intelligence) has been announced: the paper can be found in Nature here and a comment here.

Learnability can be defined as the ability to make predictions about a large data set by sampling a small number of data points, which is essentially what Machine Learning does. The mathematical result is that, in general, this problem is ‘undecidable’: it is impossible to prove or disprove, within the standard axioms of set theory, that there always exists a limited sampling set which allows one to ‘learn’ (for example, to always recognise a cat in an image after seeing only a limited number of cat images). Mathematicians have proven that Learnability is related to fundamental mathematical problems going back to Cantor’s set theory and the work of Gödel and Alan Turing, and that it is connected to the theory of compressibility of information.
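
For readers who want a slightly more formal picture, the sketch below states the “estimating the maximum” (EMX) learning problem that the paper studies; the quantifiers and constants are simplified here, so treat it as a rough paraphrase rather than the paper’s exact statement.

```latex
% A rough formal sketch of the "estimating the maximum" (EMX) problem;
% quantifiers and constants are simplified relative to the paper.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

Let $\mathcal{F}$ be a family of subsets of a domain $X$ and let $P$ be an
unknown probability distribution over $X$. A learner $G$ \emph{EMX-learns}
$\mathcal{F}$ if, given enough i.i.d.\ samples $S \sim P^m$, it outputs a
set $G(S) \in \mathcal{F}$ whose mass is nearly maximal:
\[
  P\bigl(G(S)\bigr) \;\ge\; \sup_{F \in \mathcal{F}} P(F) - \varepsilon
\]
with probability at least $1-\delta$. The paper shows that for $X = [0,1]$
and $\mathcal{F}$ the family of \emph{finite} subsets of $[0,1]$, whether
such a learner exists depends on the continuum hypothesis, and is therefore
independent of the standard (ZFC) axioms of set theory.

\end{document}
```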

This result poses some theoretical limits on what Machine Learning can ever achieve, even if it does not seem to have any immediate practical consequence.