AI Security

AI making up cases can get lawyers fired, scandalized law firm warns

Ars Technica, March 30, 2025
Morgan & Morgan, which bills itself as "America's largest injury law firm" that fights "for the people," learned the hard way this month that even one lawyer blindly citing AI-hallucinated case law can risk sullying the reputation of an entire nationwide firm. In a letter shared in a court filing, Morgan & Morgan's chief transformation officer, Yath Ithayakumar, warned the firm's more than 1,000 attorneys that citing fake AI-generated cases in court filings could be cause for disciplinary action, including "termination." "This is a serious issue," Ithayakumar wrote. "The integrity of your legal work and reputation depend on it."

Morgan & Morgan's AI troubles were sparked by a lawsuit claiming that Walmart was involved in designing a supposedly defective hoverboard toy that allegedly caused a family's house fire. Despite being an experienced litigator, Rudwin Ayala, the firm's lead attorney on the case, cited eight cases in a court filing that Walmart's lawyers could not find anywhere except on ChatGPT. These "cited cases seemingly do not exist anywhere other than in the world of Artificial Intelligence," Walmart's lawyers said, urging the court to consider sanctions.

So far, the court has not ruled on possible sanctions. But Ayala was immediately dropped from the case and replaced by his direct supervisor, T. Michael Morgan, Esq. Expressing "great embarrassment" over Ayala's fake citations, which wasted the court's time, Morgan struck a deal with Walmart's attorneys to pay all fees and expenses associated with replying to the errant court filing, which Morgan told the court should serve as a "cautionary tale" for both his firm and "all firms."

Reuters found that lawyers improperly citing AI-hallucinated cases have scrambled litigation in at least seven cases in the past two years. Some lawyers have been sanctioned, including in an early case last June that fined lawyers $5,000 for citing chatbot "gibberish" in filings. And in at least one case in Texas, Reuters reported, a lawyer was fined $2,000 and required to attend a course on the responsible use of generative AI in legal applications. But in another high-profile incident, Michael Cohen, Donald Trump's former lawyer, avoided sanctions after he accidentally gave his own attorney three fake case citations to help his defense in his criminal tax and campaign finance litigation.
Related Articles

Critical Vulnerability Discovered in Popular AI Development Framework
A critical vulnerability in DeepLearn AI framework could allow attackers to...
October 27, 2025

3 takeaways from red teaming 100 generative AI products | Microsoft Security Blog
The growing sophistication of AI systems and Microsoft’s increasing...
April 11, 2025

New hack uses prompt injection to corrupt Gemini’s long-term memory
There’s yet another way to inject malicious prompts into chatbots.
April 10, 2025

New Defense Against Adversarial Attacks Demonstrates 90% Effectiveness
A new defense against adversarial attacks on computer vision systems shows...
April 10, 2025

Using ChatGPT to make fake social media posts backfires on bad actors
OpenAI claims cyber threats are easier to detect when attackers use ChatGPT.
April 09, 2025