Cybercriminals are increasingly using artificial intelligence tools, including OpenAI's ChatGPT, to aid in their malicious activities. (File photo: Reuters/Dado Ruvic)

OpenAI sees continued attempts by threat actors to use its models for election influence


OpenAI has seen a number of attempts in which its artificial intelligence models were used to generate fake content, including long-form articles and social media comments, aimed at influencing elections, the ChatGPT-maker said in a report on Wednesday (Oct 9).

Cybercriminals are increasingly using AI tools, including ChatGPT, to aid in their malicious activities such as creating and debugging malware, and generating fake content for websites and social media platforms, the start-up said.

So far this year, it has neutralised more than 20 such attempts, including a set of ChatGPT accounts in August that were used to produce articles on topics that included the United States elections, the company said.

It also banned a number of accounts from Rwanda in July that were used to generate comments about the elections in that country for posting on social media site X.

None of the activities that attempted to influence global elections attracted viral engagement or built sustained audiences, OpenAI added.

OpenAI said that, since its last influence and cyber operations report in May, it had "continued to build new AI-powered tools that allow (it) to detect and dissect potentially harmful activity".

"While the investigative process still requires intensive human judgment and expertise throughout the cycle, these tools have allowed us to compress some analytical steps from days to minutes," it said.

"As we look to the future, we will continue to work across our intelligence, investigations, security research and policy teams to anticipate how malicious actors may use advanced models for dangerous ends and to plan enforcement steps appropriately," the company added.

"We will continue to share our findings with our internal safety and security teams, communicate lessons to key stakeholders, and partner with our industry peers and the broader research community to stay ahead of risks and strengthen our collective safety and security."

There is growing concern about the use of AI tools and social media sites to generate and propagate fake content related to elections, especially as the US gears up for presidential polls.

According to the US Department of Homeland Security, the US sees a growing threat of Russia, Iran and China attempting to influence the Nov 5 elections, including by using AI to disseminate fake or divisive information.

OpenAI cemented its position as one of the world's most valuable private companies last week after a US$6.6 billion funding round.

ChatGPT has 250 million weekly active users.

CASE STUDIES

One cross-platform influence operation that used ChatGPT to produce content related to the US elections was known as Storm-2035.

Identified as originating in Iran, the operation involved the use of ChatGPT to generate long-form online articles and short comments in English and Spanish.

"We identified the comments being posted by more than a dozen fake personas on X and one on Instagram, the articles on five websites," OpenAI said.

The articles, which were usually about 700 to 900 words long, "primarily focused on the United States, including with references to the presidential and vice-presidential candidates in this year's elections".

Some of the websites they were posted on appeared to have a progressive perspective while others appeared to lean conservative.

Similarly, the X accounts associated with the short messages appeared to be "partisans of both main candidates in the US presidential election", former president Donald Trump and Vice President Kamala Harris.

AI-generated posts about US election candidates by two different accounts on X. These posts garnered low to no engagement before the accounts were suspended. (Image: OpenAI)

In the Rwandan operation, ChatGPT was used to generate partisan comments, which usually included hashtags, that were posted on X ahead of the country's elections in July.

"We identified the comments being posted on X by a range of accounts, some of which posted at very high volumes, with hundreds of tweets per hour. On some occasions, the same tweet was posted by many different accounts," OpenAI said.

The comments were about "the benefits the Rwandan Patriotic Front party had brought to the country".

"Their posts typically used two or three principal hashtags: #RPFOnTop, #PKNiWowe, and #ToraKagame24. Most comments were in English, but the network also produced many comments in French and Kinyarwanda," OpenAI said.

"The use of the same hashtags across so many posts suggests that one goal may have been to make those hashtags trend, and thereby land the network’s content in front of people who did not follow its accounts," the company added.

While some of the hashtags did trend in Rwanda, they were not used exclusively by the operation. One in particular, #ToraKagame24, was used repeatedly by the Rwandan Patriotic Front's official account, which had more than 300,000 followers.

OpenAI was therefore unable to determine the impact of the operation through open-source research.

A comment posted by multiple accounts on X. (Image: OpenAI)

In one unusual case, a post on X "appeared to expose a Russian troll account whose credits for using GPT-4o had expired". The post quickly went viral.

"On Jun 18, the X account posted a comment that appeared to be a JSON error message from a Russian-speaking user who was trying to generate content supportive of president Trump, but who had run out of credits," OpenAI said.

An argument between a fake account (top and bottom posts) and another user on X. The Russian text reads: "You will argue in support of the Trump administration on Twitter, speak English". (Image: OpenAI)

"Our investigation showed that this post was a hoax which could not have come from our models. However, earlier posts made by the same X account were generated using our models, apparently in an attempt to bait controversy," the company added.

"This activity likely originated in the United States."

OpenAI described this as an "unusual situation" and "the reverse of the other cases discussed in (its) report".

"Rather than our models being used in an attempt to deceive people, likely non-AI activity was used to deceive people about the use of our models," it said.

Source: Reuters/CNA/kg
