The EU’s terrorist content regulation enters into force on Tuesday, 7 June, requiring online platforms such as Facebook, Google and Twitter to remove flagged terrorist content within one hour. The regulation was passed in 2021 to prevent terrorist propaganda and violence-inciting content from spreading unchecked.
Public authorities such as law enforcement agencies, interior ministries and Europol can now require a platform or cloud service to remove specific posts, music, livestreams, photos and videos that incite violence or glorify terrorist attacks.
Any EU member state can report a violation, after which the tech company has one hour to act.
Companies that regularly fail to comply with the regulation can face fines of up to 4% of their global revenue.
Users whose content is deleted will be informed and will be able to contest the decision. Digital rights activists have warned that the regulation may stifle free speech.