Artificial intelligence research lab OpenAI has identified and terminated accounts linked to covert influence operations that used its AI technologies. The operations were run by both state actors and private companies in Russia, China, Iran and Israel.
This was disclosed in a report published by OpenAI.
In these influence campaigns, OpenAI's technology was used to generate social media posts, translate and edit articles, write headlines, and debug computer programs.
Russia
OpenAI dubbed the Russian influence campaign Bad Grammar. It operated mainly on Telegram and targeted Ukraine, Moldova, the Baltic states and the United States. The people behind Bad Grammar used OpenAI's models to generate code for running a Telegram bot and to write short political comments in Russian and English, which were then posted on Telegram. The comments were often riddled with grammatical errors, which is why OpenAI named the campaign Bad Grammar.
Another Russian influence operation is known as Doppelganger. Its participants used AI to generate anti-Ukrainian comments in English, French, German, Italian and Polish, which were posted on X and 9GAG; to translate and edit articles in English and French; and to turn anti-Ukrainian news articles into Facebook posts.
China
A Chinese network known as Spamouflage used OpenAI's models to research public social media activity and to generate texts in several languages, including Chinese, English, Japanese and Korean, which were then posted on X, Medium and Blogspot.
Iran
An Iranian campaign linked to a group known as the International Union of Virtual Media used OpenAI's tools to generate and translate long-form articles spreading pro-Iranian, anti-Israel and anti-American sentiment on websites.
Israel
The Israeli campaign, which OpenAI called Zero Zeno, was run by a company that ordinarily manages political campaigns. Zero Zeno used AI to create fictional personas and biographies intended to stand in for real people on social media.
OpenAI says that none of these influence campaigns gained significant reach or engagement through the use of artificial intelligence.
According to social media researchers cited by The New York Times, this is the first time a major artificial intelligence company has disclosed how its tools were used for this kind of online deception.