
OpenAI shuts down influence networks in Russia, China that used its tools

The company said that ultimately, in its assessment, these campaigns failed to significantly increase their reach as a result of using OpenAI's services

By Shirin Ghaffary | Bloomberg

OpenAI said it has cut off five covert influence operations in the past three months, including networks in Russia, China, Iran and Israel that accessed the ChatGPT-maker’s artificial intelligence products to try to manipulate public opinion or shape political outcomes while obscuring their true identity. 
 
The report comes at a time of widespread concern about the role of AI in elections around the world this year. In its findings, OpenAI listed the ways influence networks have used its tools to deceive people more efficiently, including generating text and images in larger volumes and with fewer language errors than human writers could have managed alone. But the company said that ultimately, in its assessment, these campaigns failed to significantly increase their reach as a result of using OpenAI’s services.

“Over the last year and a half there have been a lot of questions around what might happen if influence operations use generative AI,” said Ben Nimmo, principal investigator on OpenAI’s Intelligence and Investigations team, in a press briefing Wednesday. “With this report, we really want to start filling in some of the blanks.”

The company said it defines the covert “influence operations” it targets as “deceptive attempts to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.” These groups differ from disinformation networks, Nimmo said, in that they often promote factually accurate information, but in a deceptive manner.

While propaganda networks have long used social media platforms, their use of generative AI tools is relatively new. OpenAI said that in all of the operations it identified, AI-generated material was used alongside more traditional formats, such as manually written texts or memes on major social media sites. In addition to using AI for generating images, text and social media bios, some influence networks also used OpenAI’s products to increase their productivity by summarizing articles or debugging code for bots. 

The five networks identified by OpenAI included groups such as the pro-Russian “Doppelganger,” the pro-Chinese network “Spamouflage” and an Iranian operation known as the International Union of Virtual Media, or IUVM. OpenAI also flagged previously unknown networks that the startup says it identified for the first time coming from Russia and Israel.

The new Russian group, which OpenAI dubbed “Bad Grammar,” used the startup’s AI models as well as the messaging app Telegram to set up a content-spamming pipeline, the company said. First, the covert group used OpenAI’s models to debug code that can automate posting on Telegram, then generated comments in Russian and English to reply to those Telegram posts using dozens of accounts. An account cited by OpenAI posted comments arguing that the United States should not support Ukraine. “I’m sick of and tired of these brain damaged fools playing games while Americans suffer,” it read. “Washington needs to get its priorities straight or they’ll feel the full force of Texas!” 

OpenAI identified some of the AI-generated content by noting that the comments included common AI error messages like, “As an AI language model, I am here to assist.” The company also said it’s using its own AI tools to identify and defend against such influence operations. 
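
OpenAI has not published its detection code, but the telltale phrase it cited suggests one simple heuristic: flagging posts that contain boilerplate strings language models commonly emit. The sketch below is purely illustrative; the phrase list and function name are assumptions, not a description of OpenAI’s actual method.

```python
# Illustrative sketch only: a naive filter for AI boilerplate phrases.
# The phrase list and logic are assumptions, not OpenAI's method.

AI_BOILERPLATE = (
    "as an ai language model",
    "i am here to assist",
    "i cannot fulfill this request",
)

def looks_ai_generated(comment: str) -> bool:
    """Return True if the comment contains a known AI boilerplate phrase."""
    text = comment.lower()
    return any(phrase in text for phrase in AI_BOILERPLATE)

comments = [
    "Washington needs to get its priorities straight!",
    "As an AI language model, I am here to assist.",
]
flagged = [c for c in comments if looks_ai_generated(c)]
print(flagged)  # ['As an AI language model, I am here to assist.']
```

A filter this naive would catch only the clumsiest output, such as the stray refusal message quoted above; a real system would presumably combine many weaker signals, including the cross-account behavioral patterns the report describes.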

In most cases, the networks’ messaging didn’t appear to get wide traction, or human users identified the posted content as generated by AI. Despite its limited reach, “this is not the time for complacency,” Nimmo said. “History shows that influence operations which spent years failing to get anywhere can suddenly break out if nobody’s looking for them.”

Nimmo also acknowledged that there were likely groups using AI tools that the company isn’t aware of. “I don’t know how many operations there are still out there,” Nimmo said. “But I know that there are a lot of people looking for them, including our team.”

Other companies such as Meta Platforms Inc. have regularly made similar disclosures about influence operations in the past. OpenAI said it’s sharing threat indicators with industry peers, and part of the purpose of its report is to help others do this kind of detection work. The company said it plans to share more reports in the future.

First Published: May 30 2024 | 11:43 PM IST
