OpenAI Disrupts Five Covert Influence Operations That Misused Its AI Models

OpenAI announced that it shut down five covert influence operations that exploited its AI models for deceptive activity on the internet. The operations, which OpenAI halted between 2023 and 2024, originated in Russia, China, Iran, and Israel, and aimed to manipulate public opinion and influence political outcomes while concealing their operators’ identities and intentions. “As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services,” OpenAI reported. The company said it collaborated with the tech industry, civil society, and governments to counter these actors.

Addressing Election Concerns

The report comes at a critical time, with multiple elections scheduled worldwide in 2024, including in the US, and growing concern about the potential impact of generative AI on those contests. OpenAI found that influence operations used generative AI to produce text and images in large volumes and to manufacture fake social media engagement through AI-generated comments.

Detailed Findings

In a press briefing, Ben Nimmo, principal investigator on OpenAI’s Intelligence and Investigations team, highlighted the significance of the report. “Over the last year and a half there have been a lot of questions around what might happen if influence operations use generative AI,” Nimmo said. “With this report, we really want to start filling in some of the blanks.”

Russia: Operation Doppelganger

  • Activities: Generated headlines, converted news articles to Facebook posts, and created comments in multiple languages to undermine support for Ukraine.
  • Additional Use: Debugged code for a Telegram bot that posted short political comments targeting Ukraine, Moldova, the US, and the Baltic States (a minimal sketch of the kind of code involved follows this list).
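
The report does not reproduce the bot’s code, but for readers wondering what “code for a Telegram bot” means in practice, here is a minimal, hypothetical sketch of the single Bot API call such a bot is built around. It uses Telegram’s public sendMessage endpoint; the token, chat ID, and the post_comment helper name are illustrative placeholders, not details from OpenAI’s report.

```python
import os

import requests

# Placeholders: a real bot token is issued by Telegram's @BotFather,
# and the chat ID identifies the target channel or group.
BOT_TOKEN = os.environ["TELEGRAM_BOT_TOKEN"]
CHAT_ID = os.environ["TELEGRAM_CHAT_ID"]

def post_comment(text: str) -> None:
    """Post one short text message via Telegram's public sendMessage endpoint."""
    resp = requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        json={"chat_id": CHAT_ID, "text": text},
        timeout=10,
    )
    resp.raise_for_status()  # surface HTTP errors rather than failing silently

post_comment("example message")
```

Everything beyond this call, such as composing the comment text and scheduling posts, lives in ordinary application code, which is why an operation might turn to a general-purpose model simply to debug it.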

China: Spamouflage Network

  • Activities: Researched social media activity and generated text-based content in multiple languages across various platforms.

Iran: International Union of Virtual Media

  • Activities: Generated content in multiple languages to support its influence operations.

Industry-Wide Efforts

OpenAI’s disclosure is part of a broader trend of tech companies addressing covert influence operations. For instance, Meta released a report on Wednesday detailing how an Israeli marketing firm used fake Facebook accounts to run an influence campaign targeting users in the US and Canada.

Conclusion

OpenAI’s proactive measures and transparency are an important check on the misuse of generative AI in political influence operations. By working with industry, civil society, and government partners, OpenAI aims to safeguard the integrity of public discourse and political processes worldwide.