Artificial Intelligence (AI), across its many fields from Machine Learning to Generative Adversarial Networks, has been the subject of a study (here the link to the paper), or perhaps better an evaluation, by a group of Subject Matter Experts (SMEs) aimed at identifying the riskiest scenarios in which attackers could use it, abuse it, or defeat it. The scenarios include cases in which AI is used for security purposes and an attacker is able to defeat it, cases in which AI is used for other purposes and an attacker is able to abuse it to commit a crime, and cases in which an attacker uses AI to build a tool to commit a crime.
Overall, the SMEs identified 20 high-level scenarios and ranked them by multiple criteria, including the harm or profit of the crime and how difficult that type of crime would be to stop or defeat.
It is very interesting to see which six scenarios were considered to pose the highest risk:
- Audio/video impersonation
- Driverless vehicles as weapons
- Tailored phishing
- Disrupting AI-controlled systems
- Large-scale blackmail
- AI-authored fake news
More details can be found in the above-mentioned paper.