Like a lot of people, I think about the trustworthiness of AI.
Do users corroborate the information chatbots generate? Who ends up forming close friendships with those chatbots? And the big one: if AIs become smarter than us, should we entrust them with our best interests?
But now the question of trust is widening. Rather than just focusing on the performance of algorithms and models, we are being asked to look more closely at the people who actually own and control those tools.
A highly critical profile of Sam Altman in a recent edition of The New Yorker makes plain the importance of the question.
Altman is the chief executive of OpenAI, which developed ChatGPT but which started out as a non-profit science and research venture. The aim was to make safe artificial general intelligence that mimics human cognitive abilities.
Early on Altman was accused by then-collaborator Elon Musk of tricking him into helping bankroll the venture — Altman's "long con" as Musk calls it.
Later on, as OpenAI's technology advanced, colleagues accused Altman of bypassing safety protocols and putting commercial considerations first.
That's when OpenAI tried to fire him. Altman retreated to his $27-million home from where he reportedly warned his doubters of efforts to damage their reputations. With support from a phalanx of lawyers and advisers, and from Microsoft, Altman got himself reinstated in under a week.
His doubters were mostly purged and questions about his integrity apparently swept under the rug.
Altman is portrayed throughout the New Yorker article as someone who tells people what they want to hear, often at the expense of the truth. He appears to flip-flop between being an AI doomer, who regards AI as an existential danger, and a techno-optimist, who is prepared to play fast-and-loose with humanity's future as his strategic interests dictate.
Since his short-lived firing in November 2023, Altman has continued pushing OpenAI further from its founding principles. That includes rapid commercialisation and collaborations with the US military.
The New Yorker article runs to a whopping 15,000 words, and it does little to dispel the AI industry's reputation for hubristic self-regard.
Hunger for electricity to power vast data centres? Disrespect for copyrights on books and music? Abysmal products like nudification apps? In my experience AI lobbyists acknowledge such problems but they make a show of studied aloofness to convey that such misfortunes are mere speed-bumps on the highway to progress.
It's a stance that companies too cosy with government leaders and big media can afford to take.
The evidence of influence is plain to see, not only within the Trump Administration, or in the Gulf states, but in Europe too — from the way that Italian Prime Minister Giorgia Meloni has been trying to give Elon Musk a contract to run her country's sensitive communications systems, to the way Germany's Federal Minister for Digital Transformation, Karsten Wildberger, gave the laudatory speech at an awards ceremony for Altman that was organised by publisher Axel Springer.
Government and technology are increasingly intertwined even as some AI models such as Anthropic's Claude are seemingly capable of outwitting national cybersecurity authorities.
The boundaries between state and non-state tech actors are blurring. But at what cost to the public good?
Take for example the provision of social assistance, which has long been a prerogative of states that can provide the stability that competitive markets lack.
Here too there are concerns about the way tech bosses like Altman see the future.
Six years ago, Altman backed large-scale tests for a Universal Basic Income, or UBI. One thousand people in the US states of Illinois and Texas were given $1,000 a month obligation-free between 2020 and 2023.
The idea was to test how a regular cash income paid on an individual basis without means testing or work requirements would function.
The experiment yielded some positive results. The spectre of mass unemployment from AI and automation also made Altman’s initiative extremely timely, and the experiment provided a fillip to those who have long favoured basic income on the grounds that it frees individuals to make better choices.
Altman's thinking has since evolved. He now advocates what he calls Universal Basic Wealth, based not on cash but on "quintillions" of "tokens" generated by AI and distributed between the owners of the technology and all the rest of us.
"Instead of just like getting a cheque … everybody on Earth is getting like a slice of the world's AI capacity," Altman told podcaster Theo Von. "Then we're letting the like massively distributed human ingenuity and creativity and economic engine do its thing."
But Altman's vision for the private sector taking the lead undermines the rationale for a basic income in the first place.
Altman's wealth plan almost certainly means "more insecurity, because some of these companies will go bust," the celebrated philosopher Philippe Van Parijs, author of a landmark book on basic income, told me. A basic income requires "some sort of public authority, whether it's at the local level or at the national level or the global level," Van Parijs said.
In other words, the turn from a Universal Basic Income to Universal Basic Wealth looks like another Altmanesque zigzag. But it’s also another sign of a tech boss putting their own interests above ours, even as their algorithms and models lay claim to custody of our future.
The spread of AI may be inevitable. What must not be inevitable, as the accumulating warnings show, is who controls it.
Listen to Philippe Van Parijs speak about Freedom in the Age of the Algorithm in a live show for EU Scream recorded at the Flagey theatre in Brussels.