New EU study shows increase in online antisemitism during the coronavirus crisis

Demonstration against antisemitism in France

The global outbreak of the Covid-19 pandemic in 2020 was accompanied by a plethora of conspiracy theories, disinformation and hate speech, often targeting already marginalised groups.

Antisemitic content on social media has increased manyfold during the crisis, according to a new study published by the European Commission last week. The study was conducted by the London-based Institute for Strategic Dialogue (ISD), which specialises in analysing and responding to extremism in all its forms.

Covering the period from January 2020 until March 2021, the findings show a worrying rise in antisemitic content in French and German on Twitter, Facebook and Telegram during the pandemic. The research identified 272 French-language and 276 German-language accounts and channels spreading antisemitic messages related to Covid-19.

Antisemitic content in French on Twitter, Facebook and Telegram increased seven-fold, while content in German increased thirteen-fold. Telegram was the most significant platform for the proliferation of antisemitism in German, while in French, Twitter was the most prominent. Facebook was the second most popular platform for antisemitism in both languages.

According to the study, the French and German antisemitic accounts had a combined audience of almost 5.6 million followers.

“During the pandemic, Jews have been baselessly blamed for creating and spreading the virus on purpose,” commented European Commission Vice-President Margaritis Schinas at a meeting of the working group on the implementation of the Council declaration on the fight against antisemitism (3 June).

“We have seen people trivialising the Holocaust, by comparing democratically established public health measures to save lives and protect our societies against the virus, to the dehumanising treatment and extermination of millions under the Nazi regime.

“Even in the context of the broader “infodemic” that ensued after the coronavirus outbreak, these figures are shocking – and a clear call to action. Antisemitic crimes and hate speech must be unequivocally condemned and brought to justice.”

Anti-Jewish agitation during major health crises is nothing new. Many commentators have pointed towards the pogroms during the Black Death in the Middle Ages, when Jewish people were killed after they were falsely accused of poisoning drinking water. This old antisemitic trope resurfaced in 2020, often linked to the far-right QAnon conspiracy theory which originated in the US.

Jewish communities were not the only scapegoats. There have also been repeated reports of abuse and discrimination against other minority groups, for example people of Asian and Chinese descent, who were blamed either for spreading the virus or for undermining public efforts to combat the pandemic.

The most dominant antisemitic narratives were conspiracy theories about Jews ruling international financial, political and media institutions, which comprised 89% of German antisemitic content and 55% of French, according to a manually coded sample of posts. Examples of overt Holocaust denial can still be found in French and German channels despite it being a criminal offence in both countries.

Most antisemitic content that crossed the threshold of the International Holocaust Remembrance Alliance (IHRA) Working Definition of Antisemitism was non-violent and not obviously illegal under German and French law. The definition is not legally binding and, depending on context, includes extreme anti-Israeli incitement among its examples of antisemitism.

Addressing the proliferation of such “legal but harmful” antisemitic content poses a considerable challenge for tech companies and governments alike, according to the report.

In May 2016, the European Commission agreed with Facebook, Microsoft, Twitter and YouTube on a “Code of conduct on countering illegal hate speech online”. In the following years, Instagram, Snapchat, Dailymotion and TikTok joined the Code of Conduct.

While there are differences between the internet operators, regular evaluations of the Code of Conduct show that it delivers positive results. On average, the operators review 90% of notifications of illegal hate speech from a network of trusted flaggers within 24 hours and remove 71% of the flagged content.

However, there are ways of circumventing the Code of Conduct, which is currently limited to online hate speech that is publicly available. Private messaging platforms such as WhatsApp and Telegram are not covered by the Code. The German Network Enforcement Act (NetzDG), which seeks to enforce laws against incitement to hatred and violence, does not cover Telegram.

Can the Codes be strengthened?

The Code of Conduct is, as the study points out, a self-regulatory tool and appears not to have been very effective in removing antisemitic content during Covid-19. Does it need to become more enforceable?

“Our report reveals the limitations of self-regulatory approaches to tackling harmful antisemitic content online,” Milo Comerford, Senior Policy Manager at ISD and co-author of the report, told The Brussels Times.

“Previous ISD research has demonstrated the limitations of such platform-led approaches, showing that self-regulatory efforts such as the 2018 EU Code of Practice on Disinformation were fundamentally challenged by a lack of enforcement for non-compliance – pointing in particular to a clear lack of transparency around tech platform responses to antisemitism.”

“The report calls for a comprehensive digital regulatory regime to counter antisemitism and other online harms at a European level, recognising that antisemitism is related to a wide spectrum of threats, some legal and some illegal – from disinformation to conspiracy theories, illegal hate speech to terrorist content.”

He added that the publication of blatantly illegal antisemitic content that violates the EU Framework Decision on Racism and Xenophobia indicates that there are still major gaps in the enforcement of platforms’ terms of service designed to counter hate speech, and antisemitism in particular.

“Social media companies have faced particular challenges with moderation capacity during the pandemic, relying more on automated approaches which constitute more of a blunt tool for detecting and responding to harmful content. This appears to be allowing more antisemitic content to slip through the net.”

The research has shown that Telegram in particular has served as one of the major vectors for the spread of extremism, hate speech and conspiracy theories during the pandemic. Is it possible to include Telegram and WhatsApp under the Commission’s Code of Conduct (against online hate speech) and Code of Practice (against online disinformation)?

“Encrypted messenger apps sit in a grey area for regulation, on the one hand serving as peer-to-peer communication services with a reasonable expectation of privacy, but on the other hosting larger groups (with memberships in the tens of thousands) which behave much more like traditional social media platforms,” he replied.

Yet because of its size and nature, national regulation such as the German NetzDG legislation does not apply to Telegram. “It is essential that policy approaches also focus on addressing antisemitism across smaller platforms, including those in the ‘alt-tech’ domain, such as Telegram, where extremists and conspiracy actors are increasingly migrating as they face increasing pressure on major platforms.”

“Online antisemites seem to be well aware of the different standards applied to Telegram vs other social media companies. For example, researchers observed one German conspiracy theory group who posted relatively veiled antisemitic content on their Facebook page, but shared an explicitly antisemitic video of an anti-Jewish influencer on Telegram.”

EU measures against hate speech and disinformation

A Commission spokesperson explained that under the Code of Conduct, IT companies have committed to removing content that is illegal under national law and violates the Framework Decision on Racism and Xenophobia, including Holocaust denial and trivialisation.

The Framework Decision bans public incitement to violence or hatred directed against a group of persons, or a member of such a group, defined by reference to race, colour, religion, descent or national or ethnic origin. The Commission has recently launched infringement procedures against a number of member states to ensure the proper implementation of the Decision.

For content that is legal and may be protected by the right to freedom of expression, measures other than removal, such as the promotion of positive counter-narratives, should be considered, he added.

As regards disinformation that is harmful but not illegal, the Commission has also been working since 2018 with major online platforms, as well as the advertising sector, to curb the spread of viral conspiracy narratives through the self-regulatory EU Code of Practice on Disinformation.

The Commission has recently (26 May) published a Guidance setting out its views on how signatories should strengthen the Code of Practice and create a more transparent, safe and trustworthy online environment.

“We want the signatories to become more effective at demonetising purveyors of disinformation. They should also put in place safeguards for all forms of manipulative behaviour on their platforms, give additional transparency about their algorithmic recommendations and provide researchers with sufficient access to data. For the revised Code, we will devise clear performance indicators that will underpin a comprehensive monitoring framework.”

In December 2020, the Commission also adopted a proposal for the Digital Services Act (DSA), which includes a series of measures to reduce the prevalence of illegal content, including illegal hate speech, and safeguards protecting users’ rights online.

According to the proposal, the DSA will introduce a horizontal framework setting out obligations for online platforms to act diligently and to ensure safety and respect for fundamental rights in the online space. The DSA will not touch upon national or EU laws that specify what is illegal, but it will include harmonised measures on how to treat content that is illegal under EU or national law.

M. Apelblat

The Brussels Times
