Business Standard

OpenAI claims that tool to detect AI-generated images is 99% accurate

Murati spoke alongside OpenAI Chief Executive Officer Sam Altman, as both executives attended the Wall Street Journal's Tech Live conference in Laguna Beach, California

By Rachel Metz, Bloomberg

OpenAI is building a tool to detect images created by artificial intelligence with a high degree of accuracy.
 
Mira Murati, chief technology officer of the maker of popular chatbot ChatGPT and image generator DALL-E, said on Tuesday that OpenAI’s tool is “99 per cent reliable” at determining if a picture was produced using AI. It’s being tested internally ahead of a planned public release, she said, without specifying a timeline.


A handful of tools already claim to detect images or other content made with AI, but they can be inaccurate. In January, OpenAI released a similar tool intended to determine whether text was AI-generated, but shelved it in July because it was unreliable. The company said it was working to improve that software and was committed to developing ways to identify AI-generated audio and images as well.
 

The need for such detection tools is growing as AI can be used to manipulate or fabricate news reports of global events. Adobe Inc.'s Firefly image generator addresses another aspect of the challenge by promising not to create content that infringes on creators' intellectual property rights.

On Tuesday, the OpenAI executives also gave a hint about the AI model that will follow GPT-4. Though OpenAI hasn’t said publicly what a follow-up model to GPT-4 might be called, the startup filed an application for a “GPT-5” trademark with the US Patent and Trademark Office in July.

Chatbots such as ChatGPT — which uses GPT-4 and a preceding model, GPT-3.5 — are prone to making things up, a behavior known as hallucinating. Asked whether a GPT-5 model would no longer spout falsehoods, Murati said, "Maybe."

“Let’s see. We’ve made a ton of progress on the hallucination issue with GPT-4, but we’re not where we need to be,” she said.

Altman also addressed the possibility that OpenAI could design and manufacture its own computer chips for training and operating its AI models, rather than using those provided by companies such as Nvidia Corp., which is currently seen as the market leader.

“The default path would certainly be not to,” he said, “But I would never rule it out.”


First Published: Oct 18 2023 | 11:31 AM IST
