It seems that in at least one respect current AI models, or rather ML foundation/generative models, are quite similar to humans: fraudsters can always find ways of cheating them. One of the latest examples is described here.
Author Archives: Andrea Pasquinucci
Quantum Computers are getting Smaller
Quantum Computers are developing fast, but up to now they have been quite bulky, to say the least.
But now Quantum Computers that fit in standard server cabinets and can be deployed in typical datacenter rooms are appearing on the market; see for example here.
So the rush to Quantum Computing is still going on…
Post Quantum Cryptography on the Rise
Recently there have been quite a few announcements about the adoption of Post Quantum Cryptography algorithms to supplement, not yet substitute, current algorithms, so that encryption will remain secure even when (and if) Quantum Computers arrive. The latest is the announcement by Signal, as reported for example here.
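The "supplement, not substitute" approach is usually realized as a hybrid scheme: a classical key exchange and a post-quantum one are run side by side, and the two shared secrets are combined so that an attacker must break both. A minimal sketch of the combination step, assuming placeholder byte strings stand in for the classical and post-quantum shared secrets (this is an illustration of the idea, not any specific protocol's key schedule):

```python
import hashlib
import hmac

def combine_shared_secrets(classical_ss: bytes, pq_ss: bytes, context: bytes) -> bytes:
    """Derive one session key from both shared secrets (HKDF-like, SHA-256).

    The session key depends on BOTH inputs, so breaking the classical
    exchange alone (e.g. with a future quantum computer) is not enough.
    The salt and context labels below are illustrative, not standardized.
    """
    # Extract: concatenate both secrets into a single pseudorandom key.
    prk = hmac.new(b"hybrid-kdf-salt", classical_ss + pq_ss, hashlib.sha256).digest()
    # Expand: bind the derived key to a protocol/context label.
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()

# Placeholder secrets standing in for e.g. an elliptic-curve and a
# lattice-based (ML-KEM style) key exchange output.
classical = b"\x11" * 32
post_quantum = b"\x22" * 32
key = combine_shared_secrets(classical, post_quantum, b"demo-handshake-v1")
```

Note that if either input secret changes, the derived key changes, which is exactly the property that makes the hybrid construction at least as strong as the stronger of its two components.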
Compliance of Foundation AI Model Providers with Draft EU AI Act
Interesting study by the Center for Research on Foundation Models (Stanford University, Human-Centered Artificial Intelligence) on the compliance of Foundation Model providers, such as OpenAI, Google and Meta, with the Draft EU AI Act. Here is the link to the study; the results indicate that the 10 providers analysed “largely do not” comply with the draft requirements of the EU AI Act.
AI and the Extinction of the Human Race
The Center for AI Safety (here) has just published the following “Statement on AI Risk” (here):
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
The list of signatories is impressive (just look at the first 4) and should make us think more deeply about ourselves and AI & ML.
On Large Language Models (and AI Models) Explainability
Researchers at OpenAI have recently released a scientific paper (here) entitled “Language models can explain neurons in language models”. The paper is quite technical, but it is interesting to quote from the Introduction:
Language models have become more capable and more widely deployed, but we do not understand how they work. Recent work has made progress on understanding a small number of circuits and narrow behaviors, but to fully understand a language model, we’ll need to analyze millions of neurons. This paper applies automation to the problem of scaling an interpretability technique to all the neurons in a large language model. Our hope is that building on this approach of automating interpretability will enable us to comprehensively audit the safety of models before deployment.
and to read the concluding Discussion section.
Windows code and Rust
Quite interesting news (here for example): Microsoft is rewriting core Windows libraries in the Rust programming language instead of C/C++. Rust is a “memory safe” programming language, which means it prevents entire classes of bugs that can lead to vulnerabilities and exploits.
Do not expect all of Windows to be rewritten in Rust: it would be an enormous task and would probably make little practical or technical sense. But rewriting key components of the operating system in a security-minded programming language is certainly a great security step forward.
Intelligenza Artificiale – Un approccio alla gestione dei rischi per le aziende
Just published in the Clusit 2023 report, released today (14 March), is a contribution written with Tamara Devalle entitled
“Intelligenza Artificiale – Un approccio alla gestione dei rischi per le aziende“
The Clusit 2023 report can be downloaded as a PDF here
Sicurezza Informatica – Spunti ed Approfondimenti
I have just published a collection of my articles on IT security entitled
“Sicurezza Informatica – Spunti ed Approfondimenti“
freely downloadable as a PDF from this page, while the ebook and paperback versions are available on Amazon.it
NSA and Post Quantum Cryptography
The National Security Agency (NSA, USA) has announced the “Commercial National Security Algorithm Suite 2.0” (CNSA 2.0, you can find the announcement here and some FAQ here).
There are a few points of interest related to this announcement:
- first of all, NIST has not yet completed the selection and standardization of all the Post Quantum Cryptography algorithms (expected by 2024), but the NSA has nonetheless decided to require the implementation of the algorithms already standardized by NIST (see NIST SP 800-208) and to suggest getting ready to implement the others that will be standardized in the coming years; this can be interpreted as meaning that the NSA feels some urgency in introducing the new algorithms and foresees that Quantum Computers able to break current cryptographic algorithms like RSA will arrive in a not-too-distant future;
- the already standardized new PQC algorithms are to be used only for software and firmware signing, and the transition to them must begin immediately;
- the timelines are quite short considering the time it takes to implement all the new algorithms, including in hardware. In summary: the already standardized new PQC algorithms must be implemented by 2025 and used exclusively by 2030; all other new PQC algorithms should be supported or implemented by 2025 and used exclusively by 2033;
- the above-mentioned timelines suggest that the NSA believes a Quantum Computer able to break current cryptographic algorithms like RSA could be available by around 2035.