ChatGPT Hallucinates Six Case Citations Submitted in a Federal Court Brief

What happened
A New York attorney submitted a legal brief in federal court that included six citations to cases fabricated entirely by ChatGPT. The cases had plausible-sounding names and case numbers but did not exist. The judge sanctioned the attorney and his firm after they failed to verify the citations before filing.[1]
What went wrong
The attorney used ChatGPT to research case law and did not independently check whether the cited cases existed. ChatGPT presented the invented citations in the same natural, authoritative tone as real ones, making them indistinguishable without consulting primary sources. The firm also failed to have a second attorney review the brief.[1]
Lesson learned
Large language models state fabricated facts with the same confidence as accurate ones. Legal citations, medical references, and any other verifiable factual claims produced by an LLM must be independently checked against primary sources. The fluency of AI output is not a signal of its accuracy.
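One way to operationalize "verify every citation" is to mechanically extract citation-like strings from a draft and hand each one to a human (or an authoritative database) for confirmation. The sketch below is a hypothetical illustration, not part of the incident: the regex approximates common "volume Reporter page" citation forms (e.g. "550 U.S. 544") and will miss many real formats, so it is a flagging aid, not a validator.

```python
import re

# Illustrative sketch only: find citation-like strings in a draft brief so
# each can be manually verified against a primary source. The pattern covers
# a few common reporters (U.S., F.2d/F.3d, F. Supp. 2d/3d) and is nowhere
# near a complete citation grammar.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|F\.\d[dh]|F\. Supp\. \d[dh])\s+\d{1,4}\b"
)

def citations_to_verify(text: str) -> list[str]:
    """Return every citation-like string found, for independent verification."""
    return CITATION_RE.findall(text)

draft = "See Bell Atl. Corp. v. Twombly, 550 U.S. 544 (2007)."
print(citations_to_verify(draft))  # → ['550 U.S. 544']
```

The point of the sketch is the workflow, not the regex: nothing the model emits is trusted until each extracted citation is confirmed to exist in an official reporter or court database.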
Sources
- [1]
External links can go dark — pages move, paywalls appear, domains expire. Every source above includes a Wayback Machine snapshot link as a fallback. All citations are best-effort research; if a source contradicts our summary, the primary source takes precedence.