Facing calls to put guardrails on artificial intelligence (AI) development, a group of tech companies including Alphabet’s Google and OpenAI are creating an industry body to ensure that AI models are safe.
The effort, also backed by AI startup Anthropic and Microsoft, aims to consolidate the expertise of member companies and create benchmarks for the industry, according to a statement Wednesday. The group, known as the Frontier Model Forum, said it welcomed participation from other organisations working on large-scale machine-learning platforms.
The fast proliferation of generative AI tools such as OpenAI’s ChatGPT, which can create text, photos and even video based on simple prompts, has put pressure on tech giants to tread carefully. The companies involved in the Frontier Model Forum have already agreed to put safeguards in place — at the urging of the White House — before Congress potentially passes binding regulations. “This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety,” Anna Makanju, vice president of global affairs at OpenAI, said in the statement.
The forum plans to form an advisory board in the coming months to set priorities, and hopes to establish a charter, governance system and funding to spearhead the effort. The group also hopes to collaborate with existing initiatives, including the Partnership on AI and MLCommons. The most recent generation of AI models has provided a glimpse of the human-like intelligence these systems are approaching.