OpenAI, Microsoft, Google, and Anthropic Launch New AI Safety Group


OpenAI, Microsoft, Google, and Anthropic have jointly announced the launch of the Frontier Model Forum, an AI safety research group dedicated to ensuring “safe and responsible development of frontier AI models.” The new industry body will work on identifying and sharing best practices and advancing AI research.

“The Forum defines frontier models as large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks,” the four companies said in their joint press release.


The new Frontier Model Forum will be open to all organizations developing these “frontier models” in a safe and responsible way. It will also collaborate with governments, policymakers, academics, and civil society on AI safety efforts.

“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Brad Smith, Vice Chair & President of Microsoft. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

Today’s announcement comes just a couple of days after seven top AI companies (including the four involved in the creation of this new Frontier Model Forum) made a commitment to the White House to develop AI in a responsible way. Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI all promised to have their AI systems tested by independent experts and collaborate on a watermarking system for AI-generated content.

We’re at a strange moment where generative AI technology has become a key element of the competition among the leading tech companies, including Microsoft, Google, and Amazon. Yet these companies also acknowledge that they need to cooperate to make this groundbreaking technology safe.

Last week, OpenAI unexpectedly gave us a good example of why these tech giants need to cooperate on AI safety: the company discontinued its AI Classifier, a tool designed to distinguish AI-generated text from human-written text. OpenAI said from the beginning that the classifier was not fully reliable, but it ultimately decided to end the experimental project due to its “low rate of accuracy.”
