It seems that in at least one respect current AI models, or rather ML foundation/generative models, are quite similar to humans: fraudsters can always find ways of cheating them. One of the latest examples is described here.
Compliance of Foundation AI Model Providers with Draft EU AI Act
Interesting study by the Center for Research on Foundation Models at Stanford University's Institute for Human-Centered Artificial Intelligence on the compliance of foundation model providers, such as OpenAI, Google and Meta, with the draft EU AI Act. Here is the link to the study; the results indicate that the 10 providers analysed “largely do not” comply with the draft requirements of the EU AI Act.
AI and the Extinction of the Human Race
The Center for AI Safety (here) has just published the following “Statement on AI Risk” (here):
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
The list of signatories is impressive (just look at the first four) and should make us think more deeply about ourselves and AI & ML.
On Large Language Models (and AI Models) Explainability
Researchers at OpenAI have recently released a scientific paper (here) entitled “Language models can explain neurons in language models”. The paper is quite technical, but it is interesting to quote from the Introduction:
Language models have become more capable and more widely deployed, but we do not understand how they work. Recent work has made progress on understanding a small number of circuits and narrow behaviors, but to fully understand a language model, we’ll need to analyze millions of neurons. This paper applies automation to the problem of scaling an interpretability technique to all the neurons in a large language model. Our hope is that building on this approach of automating interpretability will enable us to comprehensively audit the safety of models before deployment.
and to read the concluding Discussion section.
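The core of the paper is a three-step loop: an explainer model (GPT-4 in the paper) is shown a neuron's activations over text and writes a short natural-language explanation; it then simulates, from the explanation alone, how the neuron would activate; and the explanation is scored by how well the simulated activations match the real ones. The sketch below illustrates that loop under stated assumptions: ask_llm is a hypothetical placeholder for an explainer-model API call, and the prompts and 0-10 activation scale are illustrative, not the paper's exact ones.

```python
import numpy as np

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to an explainer model (e.g. GPT-4)."""
    raise NotImplementedError("plug in an actual LLM API call here")

def explain_neuron(tokens: list[str], activations: list[float]) -> str:
    """Step 1 (explain): show the explainer model token/activation pairs
    and ask for a short description of what the neuron responds to."""
    table = "\n".join(f"{t}\t{a:.2f}" for t, a in zip(tokens, activations))
    return ask_llm(
        "Here are tokens with a neuron's activations:\n" + table +
        "\nIn one sentence, what does this neuron respond to?"
    )

def simulate(explanation: str, tokens: list[str]) -> list[float]:
    """Step 2 (simulate): ask the model to predict, from the explanation
    alone, how strongly the neuron fires on each token (scale 0-10)."""
    reply = ask_llm(
        "A neuron is described as: " + explanation +
        "\nFor each token below, output one number 0-10 per line "
        "predicting how strongly the neuron fires on it:\n" + "\n".join(tokens)
    )
    return [float(x) for x in reply.splitlines()]

def score(real: list[float], simulated: list[float]) -> float:
    """Step 3 (score): correlate simulated with real activations; higher
    correlation means the explanation captures more of the behaviour."""
    return float(np.corrcoef(real, simulated)[0, 1])
```

Scoring by correlation is what lets the procedure run unsupervised at scale: no human has to judge each explanation, so the same loop can, as the authors hope, be applied to millions of neurons.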