The European police agency, Europol, warned on Monday against the possible abuse of chatbots such as ChatGPT. Criminals appear able to use these programs for phishing, a technique designed to steal personal and sensitive data, as well as for cybercrime, propaganda and disinformation, the agency said.
The warning is primarily intended to raise awareness, open a dialogue with companies active in the field of artificial intelligence (AI) and encourage them to put precautionary measures in place.
Europol is stressing the importance of developing reliable artificial intelligence systems.
The ChatGPT programme can generate responses to user prompts on its own, drawing on the large amounts of data from books, articles and websites on which it was trained.
The AI system was introduced to the general public last November and quickly became an internet sensation. The uninitiated can easily be misled into assuming that its texts and answers were written by a human rather than a machine.
Its ability to draft high-quality responses to a prompt makes it a useful phishing tool, according to the agency. Known as large language models (LLMs), this form of AI can reproduce language patterns, making it possible to impersonate the style of a specific person or group, which makes it easier for criminals to win victims' trust.
It can also be a valuable tool for criminals with little technical knowledge. To counter such misuse, investigators should keep a close eye on these technological developments.