Google, Microsoft, Anthropic and OpenAI, four companies at the forefront of the latest-generation artificial intelligence race, announced on Wednesday the creation of a new professional organisation to combat the risks associated with AI technology.
The new “Frontier Model Forum” is tasked with promoting the “responsible development” of the most sophisticated artificial intelligence models and with “minimising future risks,” according to a press release.
The members are committed to working with each other and with policymakers, academics and others to share knowledge about AI safety risks, according to a blog on Google's site.
The rapid deployment of generative AI, through interfaces such as ChatGPT (OpenAI), Bing (Microsoft) or Bard (Google), has been causing a great deal of concern among authorities and civil society.
The European Union (EU) is finalising draft regulation on AI that should impose obligations on companies in the sector, such as transparency with users and human control over machines.
In the United States, political tensions in Congress are preventing any efforts in this direction. The White House is therefore encouraging the groups concerned to ensure the safety of their products themselves, in the name of their “moral duty,” as US Vice-President Kamala Harris said in early May.
Last week, Joe Biden’s administration obtained “commitments” from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI to respect “three principles” in the development of AI – safety, security and trust.
In particular, they are expected to test their programmes before release, combat cyber-attacks and fraud, and find a way to mark AI-generated content so that it can be clearly identified as such.
The leaders of these companies are far from denying the risks. In June, Sam Altman, the boss of OpenAI, and Demis Hassabis, the head of DeepMind (Google), notably called for a fight against the “risks of extinction” for humanity “linked to AI.”
At a congressional hearing, Sam Altman supported the fashionable idea of creating an international agency responsible for the governance of artificial intelligence, as exists in other fields.
In the meantime, OpenAI is working towards so-called “general” AI, with cognitive capabilities that would be similar to those of humans.
In a publication dated 6 July, the Californian start-up defined AI “frontier models” as highly sophisticated foundational programmes that could include dangerous capabilities of such a scale as to pose serious risks to public safety.
Dangerous capabilities can emerge unexpectedly, OpenAI further warns, noting that it is difficult to really prevent a deployed model from being abused.