At the beginning of this month, admission exams for medicine, dentistry, and veterinary studies were held in Flanders. Although cheating is somewhat expected every year, this year, three participants were caught red-handed using ChatGPT. According to a report from the Flemish examination commission, this marks the first such instance.
Out of a total of 6,211 registered candidates, the three managed to access ChatGPT despite the rule that no internet access is allowed on the secure exam computers. A technical loophole, it appears, made such access possible and was swiftly exploited.
The students caught in the act were expelled, and an official report was filed. "This is the first time we've caught students using ChatGPT," said Jan Eggermont, president of the examination commission, to VRT NWS. "But cheating has always existed," he added.

Credit: Belga/James Arthur Gerkiere
ChatGPT wasn’t the only tool used. Some candidates tried more old-fashioned tricks: one was caught wearing earphones, two pairs attempted to exchange signals through hand gestures, and one student even tried to photograph the exam. All were expelled as well. The message was quite clear: AI has no place in the exam room.
Not so for Dr Laurent Alexandre, a urologist, entrepreneur and transhumanist provocateur: for him, the opposite is true.
To him, not using AI is the problem. "If I were young today, I would not study medicine," he tells The Brussels Times. Coming from a qualified doctor and one of Europe's most prominent voices on AI, it is at best an intriguing statement, and at worst a disturbing one.
The case against medical studies
Medicine, Alexandre argues, is dying – or at least the version of it we know today. In an exclusive preview of his upcoming book Ne faites plus d'études (Don't pursue further studies), set to be released this October, Alexandre lays out why he believes studying medicine is a thing of the past: "Ten years to become a GP – a waste of a digital decade – and the curriculum is totally unadapted to the AI era," he writes.
ChatGPT, he argues, is four times better than doctors at diagnosis. Alexandre fears that we are churning out too many medics and that at some point they will be downgraded and underpaid. The money is elsewhere: the true value in healthcare lies in mastering medical AI, not stethoscopes. The future, in his view, belongs to the top minds who build startups, not those who see patients, and by 2035 regulation will require medical diagnoses to be validated by AI. This isn't just a critique. It reads like an epitaph.

Laurent Alexandre's forthcoming book Ne faites plus d'études (Don't pursue further studies)
Back in Flanders, the incident now reads less like a case of cheating and more like a symptom of deeper institutional panic – like something from the TV show "Black Mirror". The students weren't caught copying off one another, but consulting something far smarter. They were punished, yes, but their instincts, one might argue, were simply ahead of their time. If ChatGPT-4 is now outperforming trained doctors on clinical reasoning tasks, as Alexandre claims, what does "cheating" even mean? Were the students violating the rules, or exposing their irrelevance?
Egged on by figures like him, a growing number of students are beginning to question the very premise of medical education itself.
This coming Saturday, the European Commission continues rolling out the AI Act, as the governance rules and obligations for General Purpose AI (GPAI) enter into force. The AI Act is the world's first comprehensive legal framework for AI: it classifies systems according to risk and mandates transparency, documentation, and safety.
Systems like ChatGPT fall under the GPAI category, and with them come a set of obligations: informing users that they are interacting with AI, detailing training data and safety limitations, and restricting use in high-risk domains – including healthcare – without appropriate oversight. This might be a buzzkill for 'techno-optimists', and surely for Alexandre, but policymakers argue it is paramount that AI tools are rigorously tested before they are made available to the public, and remain subject to scrutiny afterwards. For medical schools and hospitals, it is a moment of realisation: AI will no longer be a hidden assistant; it will be regulated and embedded in the workflow of diagnosis, prescription, and care.
Alexandre's vision is playing out like Huxley's Brave New World. Medical journals are already publishing AI-assisted studies. Startups are developing diagnostic large language models (LLMs) trained on millions of anonymised patient cases. Health insurance firms in the US are deploying AI to detect fraud – and, quietly, to second-guess human doctors.

Picture shows ChatGPT. Credit: Belga/Nicolas Maeterlinck
In Belgium and across Europe, medical students still burn the midnight oil, some of them fuelled by ADHD drugs, energy drinks, and caffeine. They slog through anatomical charts and chemistry equations. In parallel, neural networks are learning faster – not biology, but patterns. They don't care how the human body works; they care how it behaves, and what comes next. Which, at its core, is what medicine is about: the ability to predict and intervene in human suffering. AI, in this scenario, won't just be a tool. It will be the doctor. The rest will follow instructions.
Some may argue, though, that there is something inherently human in medicine that goes beyond the technicalities, however scary, however sophisticated. It is about presence: a good diagnosis doesn't end with data, it begins with trust.
Should you still study medicine?
Alexandre would say no. He would tell you to launch a startup, study machine learning, move fast. The future, he warns, will not wait for ten years of lectures and lab coats – and it will unfortunately be reserved for a handful. He pictures a society based on meritocracy and 'natural' selection, in which middle skills go down the drain.
For now, studying remains for many young people a passport to dignity, service and understanding. So when those students in Flanders asked ChatGPT for help during the exam, were they undermining medicine, or responding to it? Not to the practice of medicine, but to the bureaucracy, the lagging pedagogy, and the slow failure to acknowledge that intelligence, today, is no longer exclusively human – and that it is time to adapt.

