AI Hallucination Incident

Tracking legal and regulatory developments involving AI hallucination incidents.

6 entries in Litigator Tracker

Mississippi and ABA AI Ethics Opinions Criticized for Inadequate Verification Guidance

The Mississippi State Bar adopted formal ethics guidance on generative AI use that permits lawyers to reduce verification requirements when using legal-specific tools, provided they have prior experience with the system. Mississippi Ethics Opinion No. 267, adopted verbatim from ABA Formal Opinion 512 issued in July 2024, establishes baseline principles requiring lawyers to protect client confidentiality, use technology competently, verify outputs, bill reasonably, and obtain informed consent. The opinion's core permission—allowing "less independent verification or review" for familiar tools—has drawn sharp criticism for creating standards that contradict the ABA's own cited research.

New Jersey lawyer faces contempt over unpaid AI sanctions in Diddy case

Tyrone Blackburn, the attorney representing Liza Gardner in a sexual assault civil suit against Sean "Diddy" Combs, faces a contempt hearing in New Jersey federal court over unpaid sanctions tied to AI-generated case citations. In December 2025, U.S. District Judge Noel L. Hillman ordered Blackburn to pay $6,000 in sanctions, payable at $500 per month, after finding that a brief he filed contained a fabricated case opinion produced by an artificial intelligence research tool. The case cited did not exist.

Oregon Appellate Court Sanctions Lawyer with $10K Fine for AI-Hallucinated Brief Citations

The Oregon Court of Appeals has sanctioned Salem attorney William Ghiorso with a $10,000 fine for submitting an opening brief containing at least 15 fabricated case citations and 9 nonexistent quotations. The court attributed the errors to AI "hallucinations"—instances where generative AI generated convincing but false legal information. The penalty marks the first time an Oregon appellate court has considered attorney fees as a sanction alternative to fines, though it ultimately imposed the monetary penalty after Ghiorso implemented new safeguards.

Seven Families Sue OpenAI Over Suspect's ChatGPT Use in 2025 FSU Shooting

Seven families of victims from a 2025 Florida State University mass shooting have filed lawsuits against OpenAI, claiming the company negligently failed to alert law enforcement about the suspect's extensive ChatGPT interactions. The suits allege that Phoenix Ikner, the accused gunman now awaiting trial, maintained constant communication with the chatbot and may have received guidance on executing the attack. The families are pursuing negligence claims, arguing OpenAI breached its duty of care by failing to flag foreseeable harm despite the chatbot's design and the nature of the interactions.

LawSnap Briefing Updated May 11, 2026

State of play.

  • Sanctions for AI hallucinations have escalated from monetary fines to contempt proceedings. The New Jersey federal court has issued a show-cause order against attorney Tyrone Blackburn for failing to pay $6,000 in sanctions tied to a fabricated case citation in the Combs litigation — marking the shift from sanction imposition to contempt enforcement (→ New Jersey lawyer faces contempt over unpaid AI sanctions in Diddy case).
  • Case-dispositive consequences are now established at multiple court levels. The Alabama Supreme Court dismissed an appeal outright over AI-hallucinated briefs, and a Quebec court annulled an entire arbitral award after finding the arbitrator built the decision on fabricated citations — moving consequences beyond the attorney and onto the proceeding itself (→ Quebec Court Voids Arbitrator's Award Built on AI-Generated Fake Legal Citations).
  • Government lawyers are not insulated. Two New Orleans government attorneys resigned over fake AI citations, and the 7th Circuit admonished a former immigration judge for citing nonexistent cases in a brief.
  • Supervising attorneys carry personal liability for staff AI use. ABA Formal Opinion 512 and state bar rules — including California's mandatory human-review requirements — place the verification obligation on the supervising lawyer, not the associate or staff member who ran the query (→ Supervising Attorneys Face Sanctions for Failing to Verify AI-Generated Legal Citations).
  • For counsel advising firms on AI governance, the practical baseline is that unpaid sanctions now trigger contempt, citation verification is a non-delegable professional obligation, and the consequences span contempt proceedings, case dismissal, arbitral annulment, six-figure sanctions, and suspension — not just fines.

