OpenAI, Anthropic agree to work with US institute on upcoming safety test

Under the agreements, announced on Thursday, the US AI Safety Institute will receive early access to major new AI models from the companies to evaluate capabilities and risks

The US AI Safety Institute was created in 2023. | Photo: Bloomberg
By Shirin Ghaffary | Bloomberg
2 min read | Last Updated: Aug 29 2024, 10:41 PM IST

The US government announced agreements with leading artificial intelligence startups OpenAI and Anthropic to help test and evaluate their upcoming technologies for safety.
Under the agreements, announced on Thursday, the US AI Safety Institute will receive early access to major new AI models from the companies to evaluate capabilities and risks, as well as collaborate on methods to mitigate potential issues. The AI Safety Institute is part of the Commerce Department’s National Institute of Standards and Technology, or NIST. The agreements come at a time when there’s been an increasing push to mitigate potentially catastrophic risks of AI through regulation, such as the controversial California AI safety bill SB 1047, which recently passed the state Assembly.

“Safety is essential to fueling breakthrough technological innovation,” Elizabeth Kelly, director of the AI Safety Institute, said in a statement. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

The group will work in close collaboration with the UK’s AI Safety Institute to provide feedback on potential safety improvements, it said in the statement. Previously, Anthropic tested its Sonnet 3.5 model in coordination with the UK AI Safety Institute ahead of the technology’s release. The US and UK organizations have previously said they will work together to implement standardized testing.

“We strongly support the US AI Safety Institute’s mission and look forward to working together to inform safety best practices and standards for AI models,” OpenAI Chief Strategy Officer Jason Kwon said in a statement. “We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on.”

Anthropic also said it was important to build out the capacity to effectively test AI models. “Safe, trustworthy AI is crucial for the technology’s positive impact,” said Jack Clark, Anthropic co-founder and head of policy. “This strengthens our ability to identify and mitigate risks, advancing responsible AI development. We’re proud to contribute to this vital work, setting new benchmarks for safe and trustworthy AI.”

The US AI Safety Institute was created in 2023 as part of the Biden-Harris administration’s Executive Order on AI, and is tasked with developing the testing, evaluations and guidelines for responsible AI innovation.

Topics: OpenAI, artificial intelligence, AI models, US government

First Published: Aug 29 2024 | 10:41 PM IST