Will a new code of conduct stop online hate speech in the EU?
Tuesday, 31 May 2016
photo credit: Tirza van Dijk/unsplash.com
The European Commission, together with social media companies, has agreed on a code of conduct to combat the spread of illegal hate speech online in Europe.
The agreement, which was announced today (31 May), is supported by Facebook, Twitter, YouTube and Microsoft (“the IT companies”). Together with other platforms and social media companies, they share a collective responsibility and pride in promoting and facilitating freedom of expression throughout the online world.
However, the Commission and the IT companies recognize that the spread of illegal hate speech online not only negatively affects the groups or individuals it targets; it also harms those who speak out for freedom, tolerance and non-discrimination in our open societies, and has a chilling effect on democratic discourse on online platforms.
The Commission states that in order to prevent the spread of illegal hate speech, it is essential to ensure that relevant national laws transposing the Council Framework Decision on combating racism and xenophobia are fully enforced by Member States in the online as well as the offline environment.
“The recent terror attacks have reminded us of the urgent need to address illegal online hate speech. Social media is unfortunately one of the tools that terrorist groups use to radicalise young people and racists use to spread violence and hatred,” said Věra Jourová, EU Commissioner for Justice, Consumers and Gender Equality.
Twitter’s Head of Public Policy for Europe, Karen White, commented: “Hateful conduct has no place on Twitter and we will continue to tackle this issue head on alongside our partners in industry and civil society. We remain committed to letting the Tweets flow. However, there is a clear distinction between freedom of expression and conduct that incites violence and hate”.
The code of conduct includes a number of public commitments, such as putting in place clear and effective processes to review notifications of illegal hate speech so that such content can be removed or access to it disabled.
The IT companies commit to reviewing the majority of valid notifications for removal of illegal hate speech in less than 24 hours.
A “notice-and-action” procedure begins when someone notifies a hosting service provider – for instance a social network, an e-commerce platform or a company that hosts websites – about illegal content on the internet (for example, racist content, child abuse content or spam) and is concluded when a hosting service provider acts against the illegal content.
The IT companies and the European Commission agreed to assess the public commitments in the code of conduct on a regular basis, including their impact. To this end, regular meetings will take place and a preliminary assessment will be reported to the High Level Group on combating racism, xenophobia and other forms of intolerance by the end of 2016.