EU politicians split between innovation and human rights in AI regulation bill

AI systems could lead to mass surveillance and discrimination if MEPs do not include the right safeguards, warns civil society.


Artificial intelligence (AI) has the potential to bring huge benefits to European society, but the technology also carries grave risks for society and individuals. To mitigate these risks, EU politicians are working on a bill to regulate AI through a so-called 'risk-based' approach.

AI's potential for harm is considerable. Civil rights groups stress that expanding facial recognition systems in public spaces would harm the rights to privacy and non-discrimination.

"If the EU doesn't take a stance on key topics, we risk spiralling into a mass surveillance society," said Sarah Chander of European Digital Rights (EDRi).

"Certain types of technology are taking us there, like the use of facial recognition and biometric technology in public... These systems are deployed in public spaces, even if public authorities say it is used in a limited scenario, there is mass application."

En route towards mass surveillance

To lessen the danger, the EU Commission's proposal for a risk-based approach would ban AI systems classified as carrying 'unacceptable' risks – such as those used for social scoring.

Yet some AI systems bypass the high-risk category. Facial recognition technologies (FRTs) are becoming ever more common in policing and security, but given their intrusiveness, they raise major concerns about discrimination and the rights to data protection and privacy.

Although the Parliament's AI draft would ban real-time FRTs, one of the most controversial points of the bill, the text in its current form would authorise Member States to use them for specific security purposes.

This could then allow Member States to start using FRTs broadly in the name of security, which is precisely where the rights concerns lie.

Previously, Member States have repeatedly defied EU rules on user data privacy by retaining, for law enforcement purposes, personal data acquired from telecommunication companies, giving law enforcement access to information such as user location data.

While the EU allows limited data retention for security reasons, Member States have retained people's data en masse, breaching the right to privacy.

Under the AI regulation in its current form, EU Member States may be able to do the same with FRTs. This time, however, the consequences for citizens and for ownership of the public domain would be far more wide-ranging.

"Civil society would say there should be an outright ban on facial recognition systems, they cannot be justified due to the risk of mass surveillance and the infringement of freedom to be in public society," said EDRi's Sarah Chander.

Dutch welfare scandal

The increased use of AI in decision-making by public authorities or large companies can have vast implications, especially for marginalised groups, Chander pointed out.

In 2019, the Netherlands was rocked by a benefits scandal in which the Dutch tax authorities used an algorithm to create "risk" profiles of people in order to find childcare benefits fraud.

Authorities targeted families with fraud suspicions on the basis of the algorithm's risk assessments. In the end, thousands of families, mostly from ethnic minorities or lower-income groups, were denied legitimate claims for assistance, while thousands of children were even put into foster care.

Such risks have led NGOs such as EDRi to call for amendments to the AI Act that would give individuals a right to redress if they are affected by AI systems, a right that was not in the Commission's original proposal. It is unclear whether their suggestions will be included in the final text.

Do regulations hinder innovation?

Even as civil society groups call for more safeguards in the EU, business actors believe that if the proposal becomes too restrictive, investors may shy away from the European market, opting instead for the US or China.

"AI is a booster of digitalisation. If we aren't doing it right, I can't see Europe taking part in digital development any longer," said Axel Voss, a German MEP and shadow negotiator from the Parliament's conservative EPP group.

To support innovation, the Commission proposed the development of so-called sandboxes, which would let Member States create controlled environments in which AI systems can be developed and tested before they are put on the market.

Even as the bill aims to boost innovation, the AI Act alone cannot help the EU catch up with other major players, according to Ioan-Dragoș Tudorache of the liberal Renew group, one of the Parliament's two lead negotiators.

"We have fallen behind in this competition way before we started thinking about the text. The bill won't make it better, but can make it worse if we don't find the right balance."

Finding that balance is easier said than done, as Parliament is torn between making the EU an investment-friendly place and putting regulations in place to protect citizens from potential abuses of the technology.

The bill still has to be negotiated with the other EU institutions, with talks expected in 2023. Despite the delays and internal divisions, balancing innovation without compromising fundamental rights remains crucial for Tudorache.

"We don't have to use our imagination to see the potential for tech to encroach on rights, it's enough to look at regimes such as China. More governments are tempted to use technology in ways that are at odds with how we regard democracy and our values."
