
OpenAI Disrupts 20 Cyber Campaigns Using its AI Models


Since January, OpenAI has disrupted more than 20 operations by threat groups, some linked to countries including China, Russia, and Iran, that were using the vendor’s large language models (LLMs) for nefarious purposes, from writing and debugging malware to posting content on social media sites.

The activities ranged from the basic to the complex, according to a 54-page report issued this week by the generative AI giant. Some of the bad actors looked to interfere with elections in the United States and elsewhere, while others used the models to support reconnaissance and other covert operations, to compromise critical infrastructure entities, or to run attacks like spearphishing campaigns – including against OpenAI employees.

Some of these threat groups also used ChatGPT and other models in their own development work to help with coding, debug their malware, or evade systems that detect anomalies, according to the report’s authors, OpenAI researchers Ben Nimmo and Michael Flossman.

Most bad actors saw OpenAI’s models as tools for particular stages of their operations, according to Nimmo and Flossman.

“We most often observed threat actors using our models to perform tasks in a specific, intermediate phase of activity – after they had acquired basic tools such as internet access, email addresses and social media accounts, but before they deployed ‘finished’ products such as social media posts or malware across the internet via a range of distribution channels,” they wrote.

They pointed to one group called Storm-0817, which used OpenAI models to debug their code, while another group, which OpenAI calls A2Z, used models to create biographies for social media accounts. A spamming network dubbed Bet Bot “used AI-generated profile pictures for its fake accounts on X. A number of the operations … used AI to generate long-form articles or short comments that were then posted across the internet,” Nimmo and Flossman wrote.

Playing the Middle Ground

Having bad actors use their technologies in these “intermediate positions” enables AI companies like OpenAI to add to the insights of email and internet service providers upstream and of distribution platforms like social media downstream. To do so, the AI companies need to have “appropriate detection and investigation capabilities in place,” they wrote. “It can also allow AI companies to identify previously unreported connections between apparently different sets of threat activity.”

Since January, OpenAI has disrupted four networks that included some election-related content; only one, in Rwanda, focused solely on election issues, while the others mixed in other topics. A covert Iranian-based influence campaign created social media comments and long-form articles about the U.S. elections, as well as topics such as the war in Gaza and Western countries’ policies toward Israel.

Other elections targeted included those in the European Union, India, and Azerbaijan.

Threats from China, Iran

OpenAI also took a deeper dive on a few bad actors, including a China-based group called SweetSpecter that was using the company’s models for everything from reconnaissance and vulnerability research to coding support, anomaly detection evasion, and development. The group also sent spearphishing messages to OpenAI employees, posing as a ChatGPT user seeking support.

Another group was CyberAv3ngers, which is linked to Iran’s Islamic Revolutionary Guard Corps (IRGC). CyberAv3ngers made headlines starting late last year when it exploited security flaws in programmable logic controllers (PLCs) to attack critical infrastructure like municipal water systems, manufacturing operations, and energy entities.

Another Iran-based threat group, Storm-0817, was using AI models to debug malware, help develop code for a basic Instagram scraper, and translate LinkedIn profiles into Persian.

“In parallel, STORM-0817 used ChatGPT to support the development of server side code necessary to handle connections from compromised devices,” Nimmo and Flossman wrote.

Threat actors have been experimenting with generative AI tools from OpenAI and other vendors for almost as long as ChatGPT has been around, with varying levels of success. AI models have helped some hackers create more convincing phishing lures, cleaning up the typos and grammatical errors that tip people off.

Generative AI-Based Threats Here to Stay

This shouldn’t come as a surprise, said Kevin McGrail, cloud fellow and principal evangelist at DitoWeb, a Google cloud security partner, noting that “only the good guys play by the rules.”

“There are also already GPT and GenAI offerings that do not have guard rails against misuse,” McGrail told MSSP Alert. “There are also tools that do not balance free speech and safety that purposefully have no guard rails such as X – previously Twitter – and HackerGPT. Bad actors are using AI technology against you and it's going to get worse.”

He pointed out that OpenAI’s research is fairly narrow, limited to its own platform and appearing to focus on lower-level and exploratory uses of generative AI by bad actors. In his work, McGrail said he sees “samples of spam, phishing, and malware from a large variety of industries.”

There’s no doubt generative AI is being used and that use is growing, “but bad actors are less likely to use AI for the lower levels of attacks because the skill level of the bad actors at that stage doesn't typically invent new techniques,” he said. “Instead, they are using scams based on confidence scams that have been used for decades or even centuries.”

That said, generative AI use by threat actors is at a kindergarten level now but will graduate to the college level in a few months, he said. McGrail and those like him are just discovering use cases, techniques, and useful data.

Organizations that want to stop bad actors have to think like them so they can predict what the bad actors might do. That means experimenting with AI technology and using as much automation as possible.

“Bad actors’ attacks leveraging AI will just get faster and better, so your defense must be evolving just as quickly,” McGrail said.
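That advice about automation lends itself to a concrete illustration. The sketch below is a minimal, hypothetical example of using OpenAI’s Python SDK to triage an inbound email as phishing, spam, or benign; the model name, prompt, labels, and JSON output format are illustrative assumptions, not a method prescribed by OpenAI’s report or by McGrail.

```python
# Minimal sketch (assumptions, not from the article): the openai Python SDK
# is installed, OPENAI_API_KEY is set in the environment, and "gpt-4o-mini"
# is available to the account. Prompt, labels, and truncation limit are
# illustrative choices, not a prescribed method.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TRIAGE_PROMPT = (
    "You are a security analyst. Classify the email below as 'phishing', "
    "'spam', or 'benign'. Respond with a JSON object containing the keys "
    "'label' and 'reason'."
)


def triage_email(body: str) -> dict:
    """Ask the model for a label and a short rationale for one email body."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": body[:8000]},  # cap very long messages
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)


if __name__ == "__main__":
    sample = (
        "Your mailbox quota is full. Log in at hxxp://example-support.top "
        "within 24 hours to avoid suspension."
    )
    verdict = triage_email(sample)
    print(verdict["label"], "-", verdict["reason"])
```

In practice, a triage step like this would sit behind a mail gateway or SOAR pipeline and contribute one signal among several, not a final verdict.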
