Dublin riots reveal EU blind spot in online content moderation

Investigations into the role of disinformation in the explosive Dublin riots on 23 November reveal a potential deficiency in EU legislation designed to hold Big Tech accountable for illegal content online.

Anti-immigration riots in reaction to the stabbing of three children and two adults outside a school in the Irish capital have left the country shaken but determined to find a suitable response to its small but potent far-right faction.

Far-right circles used X (formerly Twitter) and other social media platforms to propagate misinformation and whip up anti-immigrant sentiment. One widely circulated WhatsApp voice note in the lead-up to the riots incited people to go into the city centre and "kill" foreigners.

Ireland's resolve to grapple with its far-right problem has thus naturally turned to the digital sphere. As part of its response to this unprecedented level of violence, the Irish government activated an incident protocol that called on EU officials to instigate talks with major social media platforms and remind them of their obligation to remove illegal content under the Digital Services Act (DSA), whose obligations for the largest platforms took effect in August.

There were initial concerns that the Irish language had been used to dodge content moderation, but these proved unfounded.

"There is currently no evidence that the lack of Irish content moderators has been used as a guise for evading moderation," EU Commission spokesperson for the digital economy Johannes Bahrke told The Brussels Times. "But platforms that do not have sufficient capacity to moderate content in Irish are vulnerable on that point."

Blind spot

The DSA designates platforms with more than 45 million EU users as ‘Very Large Online Platforms’ (VLOPs) and requires them to remove illegal content, including hate speech and incitement to terrorism, and to mitigate the spread of disinformation.

The protocol triggered three-way conversations between Ireland, the EU and the VLOPs. Although Irish did not serve as a vehicle for disinformation, the linguistic blind spot raises alarm about a severe shortage of content moderators for languages other than English across Europe.

A study by Global Witness indicates that over-reliance on English-speaking content moderators leaves gaps through which harmful content can slip past the ground-breaking DSA.

Under the legislation, VLOPs must submit transparency reports every six months detailing the human resources dedicated to content moderation, broken down by each of the EU’s official languages.

The first reports, published in November, show that while Meta employs 42 Irish-speaking content moderators across Facebook and Instagram, X employs none.

Irish not an outlier

In addition, X does not employ a single content moderator who can speak Estonian, Greek, Hungarian, Lithuanian, Maltese, Slovak, Slovenian, Czech, Danish, Finnish, Romanian, or Swedish. All in all, only 8% of the content moderators working for X are proficient in an official EU language that is not English.

UNESCO and Ipsos recently conducted a survey in 16 countries where general elections will be held in 2024. It found that 87% of internet users are worried about the impact of disinformation on the upcoming elections in their country, with 47% being "very concerned".

Social media proved fertile ground for disinformation and incitement of hatred in the run-up to violence in Dublin two weeks ago. The absence of non-English-speaking content moderators should sound the alarm ahead of 2024, when elections in Ireland, Belgium, the EU and elsewhere will take place.

Bahrke says the figures from the first reports confirm an "issue" with X’s content moderation. He highlights the importance of having content moderators who possess not only language skills but also cultural context.
