Stanford University researchers have just released a report presenting a “Foundation Model Transparency Index” (here). The first evaluation did not go well: the highest score is 54 out of 100. Reviewers and experts in the field point out that “transparency is on the decline while capability is going through the roof”, as Stanford CRFM Director Percy Liang told Reuters in an interview (see also here).
Cheating People vs. Cheating AI Models
It seems that in at least one respect current AI models, or better, ML Foundation/Generative models, are quite similar to humans: fraudsters can always find ways of cheating them. One of the latest examples is described here.
Quantum Computers are getting Smaller
Quantum Computers are developing fast, but up to now they have been quite bulky, to say the least.
But now Quantum Computers are appearing on the market that fit in standard server cabinets and can be deployed in typical datacenter rooms; see for example here.
So the rush to Quantum Computing is still going on…
Post Quantum Cryptography on the Rise
Recently there have been quite a few announcements about the adoption of Post Quantum Cryptography algorithms to supplement, not yet substitute, current algorithms, so that encryption will be able to withstand attacks even when (and if) Quantum Computers arrive. The latest is the announcement by Signal, as reported for example here.
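The “supplement, not substitute” approach is usually called hybrid key establishment: a classical key exchange (e.g. X25519) and a post-quantum one (e.g. CRYSTALS-Kyber) are both run, and the session key is derived from both shared secrets, so it stays safe as long as either primitive remains unbroken. A minimal sketch of that data flow, with a non-cryptographic stand-in for the KDF (a real implementation would use something like HKDF-SHA-256 over real shared secrets):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hybrid key derivation sketch: mix a classical shared secret with a
// post-quantum shared secret into one session key. An attacker must break
// BOTH key exchanges to recover the key.
// NOTE: DefaultHasher is NOT cryptographic; it stands in for a real KDF
// (e.g. HKDF-SHA-256) purely to illustrate the structure.
fn hybrid_session_key(classical_secret: &[u8], pq_secret: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    classical_secret.hash(&mut h); // e.g. X25519 output (placeholder)
    pq_secret.hash(&mut h);        // e.g. Kyber output (placeholder)
    h.finish()
}

fn main() {
    let classical = [0x11u8; 32]; // placeholder classical shared secret
    let pq = [0x22u8; 32];        // placeholder post-quantum shared secret
    let key = hybrid_session_key(&classical, &pq);
    println!("derived (toy) session key: {:016x}", key);
}
```

The secret values and the 32-byte sizes here are illustrative placeholders, not Signal's actual protocol parameters.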
Compliance of Foundation AI Model Providers with Draft EU AI Act
An interesting study by the Center for Research on Foundation Models, Stanford University Human-Centered Artificial Intelligence, on the compliance of Foundation Model Providers, such as OpenAI, Google and Meta, with the Draft EU AI Act. Here is the link to the study; the results indicate that the 10 providers analysed “largely do not” comply with the draft requirements of the EU AI Act.
AI and the Extinction of the Human Race
The Center for AI Safety (here) has just published the following “Statement on AI Risk” (here):
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
The list of signatories is impressive (just look at the first 4) and should make us think more deeply about ourselves and AI & ML.
On Large Language Models (and AI Models) Explainability
Researchers at OpenAI have recently released a scientific paper (here) entitled “Language models can explain neurons in language models“. The paper is quite technical, but it is interesting to quote from the Introduction:
Language models have become more capable and more widely deployed, but we do not understand how they work. Recent work has made progress on understanding a small number of circuits and narrow behaviors, but to fully understand a language model, we’ll need to analyze millions of neurons. This paper applies automation to the problem of scaling an interpretability technique to all the neurons in a large language model. Our hope is that building on this approach of automating interpretability will enable us to comprehensively audit the safety of models before deployment.
and to read the concluding Discussion section.
Windows code and Rust
Quite interesting news (here, for example): Microsoft is rewriting core Windows libraries in the Rust programming language instead of C/C++. Rust is a “memory safe” programming language, which means it prevents entire classes of bugs that can lead to vulnerabilities and exploits.
Do not expect all of Windows to be rewritten in Rust: it would be an enormous task and would probably make little practical or technical sense. But rewriting key components of the operating system in a security-minded programming language is for sure a great security step forward.
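To make the “memory safe” claim concrete: Rust's ownership and borrowing rules turn bugs that in C/C++ become runtime vulnerabilities (use-after-free, pointer invalidation) into compile-time errors. A small illustrative sketch, with the rejected patterns left as comments:

```rust
// Rust's borrow checker rejects, at compile time, patterns that in C/C++
// would compile and then corrupt memory at runtime.
fn main() {
    let buffer = vec![1u8, 2, 3];
    let view = &buffer;          // immutable borrow of `buffer`
    // buffer.push(4);           // COMPILE ERROR: cannot mutate `buffer`
    //                           // while `view` is alive -- this is the
    //                           // pointer/iterator invalidation bug class
    println!("first byte: {}", view[0]);

    let moved = buffer;          // ownership of the data moves to `moved`
    // println!("{:?}", buffer); // COMPILE ERROR: use of moved value --
    //                           // what would be a use-after-free in C/C++
    println!("moved buffer: {:?}", moved);
}
```

This is a generic illustration of the language guarantees, not code from the Windows libraries mentioned above.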
Intelligenza Artificiale – Un approccio alla gestione dei rischi per le aziende
A contribution written with Tamara Devalle has just been published in the Clusit 2023 report, released today (14 March), entitled
“Intelligenza Artificiale – Un approccio alla gestione dei rischi per le aziende“
The Clusit 2023 Report can be downloaded in pdf here
Sicurezza Informatica – Spunti ed Approfondimenti
I have just published a collection of my articles on IT Security entitled
“Sicurezza Informatica – Spunti ed Approfondimenti“
freely downloadable in pdf from this page, while the ebook and paperback versions are available on Amazon.it