The use of artificial intelligence (AI) in healthcare poses risks to patients due to a lack of robust legal protections, according to a new report from the National High Council for Persons with Disabilities (CSNPH).
The report, published on Thursday, emphasises that the doctor-patient relationship should remain “human-to-human.” While AI decisions are currently overseen by physicians, the Council warns that reliance on AI could become the norm.
To preserve trust between patients and healthcare providers, the CSNPH advocates for centralised validation systems for medical AI and mandatory bias testing.
These measures aim to ensure AI programming does not discriminate against certain patient groups.
The Council also suggests creating an ethical charter.
On the technical front, the report underscores the importance of rigorous methodology in AI development. It calls for diverse data samples to train systems and for transparency about the methods used.
Training healthcare providers is deemed essential for the proper use of AI technologies.
The report also raises concerns about patient data management. It criticises the European Health Data Space initiative for enabling unrestricted data sharing across Europe, potentially undermining patient consent.
The CSNPH concludes that the European Union’s AI Act requires health-specific regulations to address these concerns.