By Rachel Metz
OpenAI will start paying people as much as $20,000 to help the company find bugs in its artificial intelligence systems, such as the massively popular ChatGPT chatbot.
The AI company wrote in a blog post on Tuesday that it has rolled out a bug bounty program through which people can report weaknesses, bugs or security problems they find while using its AI products. Such programs, which are common in the tech industry, entail companies paying users for reporting bugs or other security flaws. OpenAI said it’s rolling out the program in partnership with Bugcrowd Inc., a bug bounty platform.
The company will pay cash rewards depending on the severity of the bugs uncovered, ranging from $200 for what it calls “low-severity findings” to $20,000 for “exceptional discoveries.”
The company said it’s rolling out the program in part because it believes “transparency and collaboration” are key to finding vulnerabilities in its technology.
“This initiative is an essential part of our commitment to developing safe and advanced AI,” said the blog post, written by Matthew Knight, OpenAI’s head of security. “As we create technology and services that are secure, reliable and trustworthy, we would like your help.”
The announcement doesn’t come as a complete surprise. Greg Brockman, president and co-founder of the San Francisco-based company, recently mentioned on Twitter that OpenAI had been “considering starting a bounty program” or network of “red-teamers” to detect weak spots.
He made the comment in response to a post written by Alex Albert, a 22-year-old jailbreak prompt enthusiast whose website compiles written prompts intended to get around the safeguards chatbots like ChatGPT have in place.
“Democratized red teaming is one reason we deploy these models,” Brockman wrote.