Hardware Based Fully Homomorphic Encryption

Slowly but Fully Homomorphic Encryption (FHE) improves. Actually this is a dream for all: service providers (SaaS) would not need to worry about the confidentiality of their clients’ information, and clients about the risk of having confidential information processed by a third party.

In a few words, FHE provides computing on encrypted data so that the result of the computation is obtained once data is decrypted. This was mostly an idea until 2009 when Craig Gentry described the first plausible construction for a fully homomorphic encryption scheme. But the major problem of all the proposed FHE schemes is that they are extremely slow and resource intensive. But this year (see here for example) new chips should arrive on the market which implement in hardware the critical operations of FHE computations, speeding them up many times. Still, this is just another step forward to a practical FHE, there is still a long way to go, but we are getting closer.

On Deceptive Large Language AI Models

Interesting research article about how to remove backdoors and deceptive behaviour from Large Language AI models (LLM). The simplest example is a model trained to write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. The result of the study is that it can be very difficult to remove such behaviour using current standard safety training techniques. Deceptive behaviour can be introduced intentionally in the LLM during the training but it could also happen by poor training. Applying current techniques to identify and remove backdoors, such as Adversarial training, can actually fail and end up providing a false sense of security. Another result of the study is that the larger LLM seem more prone to be “Deceptive”.

Writing (in-) Secure Code with AI Assistance

This is an interesting research article on the security of code written with AI Assistance; the large-scale user study shows that code written with an AI Assistant is usually less secure, that is contains more vulnerabilities, than code written without AI support.

Thus, at least as of today, relying on an AI Assistant to write better and more secure code could work out badly. But AI is changing very rapidly, soon it could learn math and to write secure and super efficient code. We’ll see…

Post Quantum Cryptography on the Rise

Recently there have been quite a few announcements about the adoption of Post Quantum Cryptography algorithms to supplement, not yet substitute, current algorithms so that encryption will be able to withstand also when (and if) Quantum Computers will arrive. Latest is the announcement by Signal, as it has been reported for example here.

AI and the Extinction of the Human Race

The Center for AI Safety (here) has just published the following “Statement on AI Risk” (here):

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The list of signatures is impressive (just look at the first 4) and should make us think more deeply about us and AI & ML.