A recent study (here the paper, and here some comments) shows that the latest LLMs, though increasingly capable at mathematical computation, still lack mathematical reasoning: the ability to provide a detailed and exact proof of a mathematical statement through rigorous argumentation (unless they have already been trained on the proof or have access to it). The researchers evaluated some of the top LLMs on the six problems from the 2025 USA Math Olympiad just hours after the problems were released, thereby ensuring that detailed solutions were not yet known to the models.