AI Hallucination Incident
Tracking legal and regulatory developments involving AI hallucination incidents.

2 entries in In-House Counsel Tracker

Seven Families Sue OpenAI Over Suspect's ChatGPT Use in 2025 FSU Shooting

Seven families of victims from a 2025 Florida State University mass shooting have filed lawsuits against OpenAI, claiming the company negligently failed to alert law enforcement about the suspect's extensive ChatGPT interactions. The suits allege that Phoenix Ikner, the accused gunman now awaiting trial, maintained constant communication with the chatbot and may have received guidance on executing the attack. The families are pursuing negligence claims, arguing OpenAI breached its duty of care by failing to flag foreseeable harm despite the chatbot's design and the nature of the interactions.

New Jersey lawyer faces contempt over unpaid AI sanctions in Diddy case

Tyrone Blackburn, the attorney representing Liza Gardner in a sexual assault civil suit against Sean "Diddy" Combs, faces a contempt hearing in New Jersey federal court over unpaid sanctions tied to AI-generated case citations. U.S. District Judge Noel L. Hillman ordered Blackburn in December 2025 to pay $6,000 in sanctions, at $500 per month, after finding that a brief he filed cited a fabricated case opinion produced by an artificial intelligence research tool; the cited case did not exist.

LawSnap Briefing Updated May 11, 2026

State of play.

  • Sanctions for AI hallucinations have escalated from monetary fines to contempt proceedings. The New Jersey federal court has issued a show-cause order against attorney Tyrone Blackburn for failing to pay $6,000 in sanctions tied to a fabricated case citation in the Combs litigation — marking the shift from sanction imposition to contempt enforcement (→ New Jersey lawyer faces contempt over unpaid AI sanctions in Diddy case).
  • Case-dispositive consequences are now established at multiple court levels. The Alabama Supreme Court dismissed an appeal outright over AI-hallucinated briefs, and a Quebec court annulled an entire arbitral award after finding the arbitrator built the decision on fabricated citations — moving consequences beyond the attorney and onto the proceeding itself (→ Quebec Court Voids Arbitrator's Award Built on AI-Generated Fake Legal Citations).
  • Government lawyers are not insulated. Two New Orleans government attorneys resigned over fake AI citations, and the 7th Circuit admonished a former immigration judge for citing fake cases in a brief.
  • Supervising attorneys carry personal liability for staff AI use. ABA Formal Opinion 512 and state bar rules — including California's mandatory human-review requirements — place the verification obligation on the supervising lawyer, not the associate or staff member who ran the query (→ Supervising Attorneys Face Sanctions for Failing to Verify AI-Generated Legal Citations).
  • For counsel advising firms on AI governance, the practical baseline is that unpaid sanctions now trigger contempt, citation verification is a non-delegable professional obligation, and the consequences span contempt proceedings, case dismissal, arbitral annulment, six-figure sanctions, and suspension — not just fines.

