Child sexual abuse content generated by artificial intelligence is on the rise and increasingly realistic, warn experts at the Internet Watch Foundation (IWF). The organisation is calling on EU lawmakers to criminalise the practice.
Between 1 January and 30 June, IWF confirmed reports of AI child sexual abuse (CSA) imagery on 210 web pages. This is a 400% rise compared to the same period in 2024.
Within those same six months, the number of AI videos of child sexual abuse "rocketed", with 1,286 individual videos identified compared to just two in the same period last year.
Additionally, just over 1,000 of these videos depicted the most extreme category of abuse, including rape, sexual torture or bestiality, according to the IWF report published on Friday.
'Indistinguishable'
To add to the concerning trends, IWF noted that AI developments have made these videos increasingly realistic.
"The first fully synthetic child sexual abuse video we saw at the beginning of last year was just a series of jerky images put together, nothing convincing," said an IWF senior analyst. "Now they have really turned a corner. The quality is alarmingly high, and the categories of offence depicted are becoming more extreme."
IWF Interim Chief Executive, Derek Ray-Hill, underscored that the AI videos have become "indistinguishable" from genuine images. "The harm this material does is real, and the threat it poses threatens to escalate even further," he added.
Legal loopholes?
For the IWF, more needs to be done to criminalise these practices at a European level. Last month, the European Parliament approved draft legislation to explicitly criminalise the use of AI in generating CSA material. The IWF is calling on the Council of the EU to follow suit and ensure robust criminalisation of AI CSA manuals and models across the EU.
"The EU is making great strides on this vital issue and it must not falter. We urge the Council to rethink its position and align with the EU Parliament to pass legislation that is fit for purpose and ensures that survivors are at the heart of all decisions made," said Ray-Hill.
The organisation worries that the Council's current position does not close legal "loopholes" which could allow the possession of AI-generated CSA for "personal use".
"We must do all we can to prevent a flood of synthetic and partially synthetic content joining the already record quantities of child sexual abuse we are battling online," added Ray-Hill.
The Internet Watch Foundation (IWF) is a not-for-profit organisation that identifies and removes online images and videos of child sexual abuse.