Theoretical mathematical results often have little immediate practical application, and in some cases they can initially seem obvious. Still, they usually are not obvious as such: it is quite different to imagine that a result holds true and to prove it mathematically in a rigorous way. Moreover, such a proof often helps explain the reasons behind the result and its possible applications.
Very recently a theoretical (mathematical) result in Machine Learning (the current main version of Artificial Intelligence) has been announced: the paper can be found in Nature here and a comment here.
Learnability can be defined as the ability to make predictions about a large data set by sampling a small number of data points. This is what Machine Learning usually does. The mathematical result is that, in general, this problem is ‘undecidable’, that is, it is impossible to prove that there always exists a limited sampling set which allows one to ‘learn’ (for example, to always recognise a cat in an image from a sample of a limited number of cat images). Mathematicians have proven that learnability is related to fundamental mathematical problems going back to Cantor’s set theory and the work of Gödel and Alan Turing, and that it is connected to the theory of compressibility of information.
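The core idea of learning from a sample can be illustrated with a small sketch (not taken from the paper): given a large synthetic data set, a random sample of modest size is usually enough to estimate a global property of the whole set, such as the fraction of images that contain a cat. The data and names here are hypothetical, purely for illustration.

```python
import random

def sample_estimate(population, k, seed=0):
    """Estimate the fraction of positives in a large population
    from a random sample of only k data points."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    sample = rng.sample(population, k)
    return sum(sample) / k

# A large synthetic 'data set': 1 means "cat", 0 means "not a cat".
population = [1] * 700_000 + [0] * 300_000  # true fraction = 0.7

# A sample of 1,000 points already gives an estimate close to 0.7.
print(sample_estimate(population, 1_000))
```

What the undecidability result says is that, for certain abstract learning problems, one cannot prove within standard mathematics whether such a bounded sample size always exists; this toy example only shows the happy case where it clearly does.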
This result places some theoretical limits on what Machine Learning can ever achieve, even if it does not seem to have any immediate practical consequences.