The European Parliament will debate the Artificial Intelligence Act on Tuesday. The law aims to make artificial intelligence more secure and transparent by imposing several obligations on developers of AI applications, from chatbots to smart household appliances and self-driving cars.
As AI rapidly enters people's daily lives, concerns about its security and transparency are growing. With the Artificial Intelligence Act, the EU wants to promote innovation while also imposing ethically responsible rules.
"The aim is to create a clear framework within which developers of AI applications can work, but at the same time develop a kind of AI that protects citizens," MEP Tom Vandenkendelaere explained on Flemish radio on Tuesday.
The term 'artificial intelligence' is very broad and can already be found in a wide range of applications, he said. "With the law, we want to divide those applications into a number of categories, each associated with a certain level of risk. As an application's risks increase, the manufacturer's obligations also increase."
Risks ranging from minimal to unacceptable
Take AI in robotic surgery, for example. As a very high-risk application, its developer will have to take additional measures to protect citizens, such as risk analyses or human supervision of the application.
"In contrast, there are simple chatbots on the websites of banks or airline companies for which there are very few risks," Vandenkendelaere said. "In fact, users only need to be told that they are in contact with a computer and not a person, so they can make an informed choice."
The Artificial Intelligence Act distinguishes four risk categories:
- minimal risk AI (spam filters, AI video games, etc.) which will be allowed freely
- limited risk AI (chatbots on banking or airline websites, etc.) which will have to comply with transparency obligations
- high-risk AI (self-driving cars, robotic surgery, etc.) which will be subject to strict control obligations
- unacceptable AI (for social scoring, etc.) which will be banned
The AI applications that have generated the most attention in recent months are the so-called 'chatbots', such as ChatGPT, My AI from the social media platform Snapchat, or the chat function of the search engine Bing. While the bots offer many advantages, they also pose dangers, as they often give incorrect information and sometimes even dangerous advice.
Tweet translation: "The European AI Act is aiming to strike a balance between giving innovation a chance and protecting citizens. From AI in robotic surgery to the spam filter in your inbox."
"The chatbots are a special case, because when the European Commission worked out the bill in 2021, they did not yet exist," said Vandenkendelaere. "We now want to include an extra category in the legislation to ensure that those generative AI applications are also subject to conditions, which gives us a better understanding of how they function."
This means that there will be a screening of possible risks, based on transparency about how those applications are put together. "If we then determine that it poses a great risk to people, it will of course fall into the high-risk category. If we find that there is no risk, it will end up in a lower category."
The question, however, is whether the legislation will come too late: experts from the sector, at Apple and Google for example, have already expressed their concerns about AI in recent months, predicting an explosion of fake news and scams.
Starting an evolution
"I do not think temporarily stopping the development of AI applications is a solution," said Vandenkendelaere. "We have to lay down clear rules, as we already have done for many other products in Europe, such as cars or washing machines. The rules will help us learn to deal with what is new and to control the risk."
Europe is lagging behind at the international level, as the leading countries in AI development are mainly the United States, home to tech giants such as Google and Microsoft, and China. "If we do not want to fall hopelessly behind, we really need to take a step forward in terms of legislation in Europe today."
However, critics fear that the new regulations might discourage entrepreneurs from investing in AI applications in Europe, as the law will only apply to European companies. "But the aim is to start an evolution about how we should deal with AI, just like with GDPR legislation at the time," he said.
"The United States has opted for innovation, the Chinese have chosen to control people with AI. With this legislation, we are trying to take a step towards a worldwide standard," said Vandenkendelaere. "Today, we are on the verge of an AI breakthrough, just as we were on the verge of an internet breakthrough in the 1990s or the major digital platforms of the 2000s."
The law will be discussed in the European Parliament on Tuesday, followed by a vote on Wednesday. Final negotiations at the European Council, in which all Member States are represented, will then take place.
"The hope is that the dossier will be finalised before next year's elections, perhaps under the Belgian presidency," Vandenkendelaere said. In that case, the new rules would come into effect from 2025.