Jun 27, 2025 / Insight

Pink Citations On Parade

By: Stephen Kelly and Daniel A. Corren, Cypress LLP

As law firms are inundated with marketing pitches for “AI-powered” tools promising to revolutionize their practice, the legal profession continues to grapple with the real-world consequences of unvetted AI use – most notably, fabricated case citations appearing in court filings. These incidents have led to judicial sanctions and professional embarrassment, raising serious questions about attorneys’ duty of candor and the consequences of repeat offenses.  

Two recent California cases highlight these risks.  

Concord Music Group, Inc. et al v. Anthropic PBC, N.D. Cal., 5:24-cv-03811  

Filed 5/23/25, Docket No. 377 

In Concord, on a motion to compel Anthropic to produce certain documents, Anthropic’s attorneys at Latham & Watkins used their client’s AI tool, Claude.ai, to assist them in drafting an expert’s declaration. After that declaration was found to contain a fabricated case citation, the Latham attorneys claimed they had provided the AI tool with a verified case citation and asked it to create a “properly formatted legal citation.” Instead, the tool returned the fabricated citation, an error commonly referred to as an AI “hallucination.” Plaintiffs struck back, arguing that Latham’s admissions “fatally undermine the reliability of the [expert declaration]” and asking the Court to strike the declaration in full.

Latham’s misguided reliance on Claude was particularly notable given the issues in dispute in the case: 1) the lawsuit itself arises from allegations that Claude’s responses to prompts about song lyrics showed that Claude was trained on copyrighted materials; and 2) the declaration was part of an attempt to limit the scope of the sample of Claude’s prompt and output records that Anthropic would produce in discovery.

In its order compelling Anthropic to produce 5 million prompt-output pairs for inspection, the Court expressed skepticism that Latham actually performed a “manual citation check” to screen for hallucinations. And while it stopped short of granting sanctions, the Court struck the part of the expert’s declaration containing the error and noted “that this issue undermines the overall credibility of [the expert’s] declaration, a factor in the Court’s conclusion.”  

Latham’s reliance on AI and failure to catch the mistake highlight a broader issue at play: Can the outputs of AI tools be trusted in legal proceedings, particularly when those tools are alleged to be defective or unlawful? This incident may shape future judicial attitudes toward the admissibility and weight of AI-assisted submissions and could catalyze new professional standards or rules governing attorneys’ use of AI in litigation.

Jacquelyn Lacey v. State Farm General Insurance Co., C.D. Cal., 2:24-cv-05205

Filed 5/6/25, Docket No. 119 

Plaintiff’s attorneys submitted a supplemental brief in which nine of its 27 citations were inaccurate, including two that were completely fabricated. After the special master noticed the errors, the offending attorneys admitted that the attorney who prepared the original outline of the brief had used AI tools, including Thomson Reuters’ CoCounsel and Westlaw Precision. Compounding the error, that attorney failed to disclose his use of generative AI to his colleagues or co-counsel, none of whom checked or verified the citations before filing.

In an order granting sanctions, the special master, Hon. Michael R. Wilner (Ret.), sanctioned Plaintiff’s attorneys a total of $31,100, struck their supplemental brief, and declined to grant any further relief on the discovery issue that had necessitated his involvement in the first place. His reasoning speaks for itself:

Directly put, Plaintiff’s use of AI affirmatively misled me. I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist. That’s scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order. Strong deterrence is needed to make sure that attorneys don’t succumb to this easy shortcut. 

Conclusion 

We are assaulted by emails and pop-ups in productivity software extolling the revolutionary virtues of generative AI. These cases make clear, though, that these tools are assistive, not a surrogate for professional judgment, and that their use demands transparency, oversight, and accountability, especially in contexts where others may rely on their outputs.

They must be used responsibly, with careful supervision, and never without disclosure to others who might otherwise rely, innocently but fatally, on their hallucinatory output.