AI is a branch of computer science concerned with building systems that can be trained to perform tasks that would normally require human intelligence, such as facial recognition. It is closely associated with machine learning (ML), in which systems learn from data rather than following explicit instructions from humans.
In its 80 pages, the report concentrates on the negative possibilities of AI, and the malicious uses to which it might be put.
“While AI and ML algorithms can bring enormous benefits to society, these technologies can also enable a range of digital, physical, and political threats,” the report reads. “Just as the World Wide Web brought a plethora of new types of crime to the fore and facilitated a range of more non-traditional ones, AI stands poised to do the same. In the continuous shift from analogue to digital, the potential for the malicious use of new technologies is also exposed.”
Among the possibilities suggested are:
• convincing social engineering attacks on a large scale;
• document-scraping malware to make attacks more efficient;
• evasion of image recognition and voice biometrics;
• ransomware attacks, through intelligent targeting and evasion; and
• data pollution, by identifying blind spots in detection rules.
“As AI applications start to make a major real-world impact, it’s becoming clear that this will be a fundamental technology for our future,” said Irakli Beridze, head of the Centre for AI and Robotics at UNICRI. “However, just as the benefits to society of AI are very real, so is the threat of malicious use.”
AI can be used to make malware more effective and, at the same time, to circumvent the security measures designed to protect against it.
“Cybercriminals have always been early adopters of the latest technology and AI is no different,” said Martin Roesler, head of forward-looking threat research at Trend Micro. “As this report reveals, it is already being used for password guessing, CAPTCHA-breaking and voice cloning, and there are many more malicious innovations in the works.”
The report concludes with recommendations for ways to combat the development of malicious AI. Those include:
• harnessing the potential of AI as a crime-fighting tool to arm the cybersecurity industry and policing;
• continuing research to stimulate the development of defensive technology;
• promoting and developing secure AI design frameworks;
• de-escalating politically loaded rhetoric on the use of AI for cybersecurity purposes; and
• leveraging public-private partnerships and establishing multidisciplinary expert groups.