New EU rules to require disclosure of AI interactions from August 2026

People in the EU will have to be told when they are interacting with artificial intelligence or seeing certain AI-generated or manipulated content from 2 August 2026, under new rules the European Commission has set out in draft guidance.

Draft guidelines on the transparency obligations were published for feedback ahead of their adoption, the Commission said in a statement on Friday, with stakeholders invited to respond by 3 June 2026.

Under the EU’s AI Act, providers of AI systems will have to inform people when they are interacting with an AI system and add machine-readable marks so AI-generated or manipulated content can be detected.

Deployers, meaning organisations that use AI systems, will also have to tell people when they are exposed to deepfakes, AI-generated publications on matters of public interest, and emotion recognition or biometric categorisation systems.

Code of practice due in June 2026

The draft guidelines reflect input from previous consultations and are intended to clarify the scope of the transparency obligations and help providers and deployers comply, the Commission said.

A separate code of practice on AI-generated content, drafted by independent experts, will complement the guidelines.

The final code is expected in June 2026. It will be voluntary, serving as a tool to help providers and deployers demonstrate compliance.

Stakeholders including AI providers and developers, businesses and public authorities, academia and research institutions, and citizens have been invited to share their views.


Copyright © 2026 The Brussels Times. All Rights Reserved.