AI Security

Using ChatGPT to make fake social media posts backfires on bad actors

Ars Technica, April 09, 2025
Using ChatGPT to research cyber threats has backfired on bad actors, OpenAI revealed in a report analyzing emerging trends in how AI is currently amplifying online security risks. Not only do ChatGPT prompts expose which platforms bad actors are targeting (in at least one case, prompts enabled OpenAI to link a covert influence campaign on X and Instagram for the first time), but they can also reveal new tools that threat actors are testing to evolve their deceptive activity online, OpenAI claimed.

OpenAI's report comes amid heightened scrutiny of its tools during a major election year in which officials globally fear AI might be used to boost disinformation and propaganda like never before. The report detailed 20 cases in which OpenAI disrupted covert influence operations and deceptive networks attempting to use AI to sow discord or breach vulnerable systems. "These cases allow us to begin identifying the most common ways in which threat actors use AI to attempt to increase their efficiency or productivity," OpenAI explained.

One case involved a "suspected China-based adversary" called SweetSpecter, which used ChatGPT prompts while mounting an unsuccessful spear-phishing campaign against both government and OpenAI employees. In the email to OpenAI employees, SweetSpecter posed as a ChatGPT user troubleshooting an issue with the platform, supposedly detailed in an attachment. Clicking on that attachment would have launched "Windows malware known as SugarGh0st RAT," OpenAI said, giving SweetSpecter "control over the compromised machine" and allowing the group "to do things like execute arbitrary commands, take screenshots, and exfiltrate data." Fortunately for OpenAI, the company's spam filter deterred the threat before any employees received the emails.

OpenAI believes it uncovered SweetSpecter's first known attack on a US-based AI company after monitoring the group's ChatGPT prompts, which boldly asked for help with the attack. Prompts included requests for "themes that government department employees would find interesting" and "good names for attachments to avoid being blocked." SweetSpecter also asked ChatGPT about "vulnerabilities" in various apps and "for help finding ways to exploit infrastructure belonging to a prominent car manufacturer," OpenAI said.
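The report does not describe how OpenAI's spam filter actually works. Purely as an illustration of the kind of attachment screening that can stop a lure like the one described, here is a minimal Python sketch; the extension list, function name, and sample message are hypothetical, not OpenAI's system.

```python
# Illustrative only: a naive attachment-screening heuristic, NOT a
# description of OpenAI's actual spam filter (the report gives no details).
from email.message import EmailMessage

# Extensions often abused to deliver Windows malware such as remote access
# trojans; hypothetical, non-exhaustive list chosen for this sketch.
SUSPICIOUS_EXTENSIONS = (".exe", ".scr", ".vbs", ".js", ".lnk")

def has_risky_attachment(msg: EmailMessage) -> bool:
    """Return True if any attachment filename ends in a risky extension."""
    for part in msg.iter_attachments():
        name = (part.get_filename() or "").lower()
        if name.endswith(SUSPICIOUS_EXTENSIONS):
            return True
    return False

if __name__ == "__main__":
    # Build a toy message resembling the reported lure: a "troubleshooting
    # log" attachment that is actually a Windows executable.
    msg = EmailMessage()
    msg["Subject"] = "ChatGPT error report"
    msg.set_content("Please see the attached troubleshooting log.")
    msg.add_attachment(b"MZ\x90\x00", maintype="application",
                       subtype="octet-stream", filename="error_log.pdf.exe")
    print(has_risky_attachment(msg))  # True: double extension ends in .exe
```

Real mail defenses layer many signals (sender reputation, content analysis, sandboxed detonation); this single heuristic is only meant to make the reported mechanism concrete.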
Related Articles
Critical Vulnerability Discovered in Popular AI Development Framework

A critical vulnerability in DeepLearn AI framework could allow attackers to...

October 24, 2025
3 takeaways from red teaming 100 generative AI products | Microsoft Security Blog

The growing sophistication of AI systems and Microsoft’s increasing...

April 11, 2025
New hack uses prompt injection to corrupt Gemini’s long-term memory

There’s yet another way to inject malicious prompts into chatbots.

April 10, 2025
New Defense Against Adversarial Attacks Demonstrates 90% Effectiveness

A new defense against adversarial attacks on computer vision systems shows...

April 10, 2025
AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt

Attackers explain how an anti-spam defense became an AI weapon.

April 07, 2025