Using ChatGPT to make fake social media posts backfires on bad actors
Ars Technica
April 09, 2025
Summary
OpenAI claims cyber threats are easier to detect when attackers use ChatGPT.
Using ChatGPT to research cyber threats has backfired on bad actors, OpenAI revealed in a report analyzing emerging trends in how AI is currently amplifying online security risks.
Not only do ChatGPT prompts expose which platforms bad actors are targeting (in at least one case, they enabled OpenAI to link a covert influence campaign across X and Instagram for the first time), but they can also reveal new tools that threat actors are testing to evolve their deceptive activity online, OpenAI claimed.
OpenAI's report comes amid heightened scrutiny of its tools during a major election year in which officials globally fear AI could be used to boost disinformation and propaganda like never before. The report detailed 20 cases in which OpenAI disrupted covert influence operations and deceptive networks attempting to use AI to sow discord or breach vulnerable systems.
"These cases allow us to begin identifying the most common ways in which threat actors use AI to attempt to increase their efficiency or productivity," OpenAI explained.
One case involved a "suspected China-based adversary" called SweetSpecter, which used ChatGPT prompts to attempt to engage both government and OpenAI employees with an unsuccessful spear phishing campaign.
In the email to OpenAI employees, SweetSpecter posed as a ChatGPT user troubleshooting an issue with the platform, detailed in an attachment. Clicking on that attachment would have launched "Windows malware known as SugarGh0st RAT," OpenAI said, giving SweetSpecter "control over the compromised machine" and allowing them "to do things like execute arbitrary commands, take screenshots, and exfiltrate data." Fortunately for OpenAI, the company's spam filter deterred the threat before any employees received the emails.
OpenAI believes it uncovered SweetSpecter's first known attack on a US-based AI company after monitoring the group's ChatGPT prompts, which openly asked for help with the attack.
Prompts included asking for "themes that government department employees would find interesting" or "good names for attachments to avoid being blocked." SweetSpecter also asked ChatGPT about "vulnerabilities" in various apps and "for help finding ways to exploit infrastructure belonging to a prominent car manufacturer," OpenAI said.