AI Hallucination Incident

Tracking legal and regulatory developments involving AI hallucination incidents.

2 entries in Legal Intelligence Tracker

Mississippi and ABA AI Ethics Opinions Criticized for Inadequate Verification Guidance

The Mississippi State Bar adopted formal ethics guidance on generative AI use that permits lawyers to reduce verification requirements when using legal-specific tools, provided they have prior experience with the system. Mississippi Ethics Opinion No. 267, adopted verbatim from ABA Formal Opinion 512 issued in July 2024, establishes baseline principles requiring lawyers to protect client confidentiality, use technology competently, verify outputs, bill reasonably, and obtain informed consent. The opinion's core permission—allowing "less independent verification or review" for familiar tools—has drawn sharp criticism for creating standards that contradict the ABA's own cited research.

New Jersey lawyer faces contempt over unpaid AI sanctions in Diddy case

Tyrone Blackburn, the attorney representing Liza Gardner in a sexual assault civil suit against Sean "Diddy" Combs, faces a contempt hearing in New Jersey federal court over unpaid sanctions tied to AI-generated case citations. U.S. District Judge Noel L. Hillman ordered Blackburn to pay $6,000 in December 2025—$500 monthly—after finding that a brief he filed contained a fabricated case opinion produced by an artificial intelligence research tool. The case cited did not exist.

LawSnap Briefing Updated May 11, 2026

State of play.

  • Sanctions for AI hallucinations have escalated from monetary fines to contempt proceedings. The New Jersey federal court has issued a show-cause order against attorney Tyrone Blackburn for failing to pay $6,000 in sanctions tied to a fabricated case citation in the Combs litigation — marking the shift from sanction imposition to contempt enforcement (→ New Jersey lawyer faces contempt over unpaid AI sanctions in Diddy case).
  • Case-dispositive consequences are now established at multiple court levels. The Alabama Supreme Court dismissed an appeal outright over AI-hallucinated briefs, and a Quebec court annulled an entire arbitral award after finding the arbitrator built the decision on fabricated citations — moving consequences beyond the attorney and onto the proceeding itself (→ Quebec Court Voids Arbitrator's Award Built on AI-Generated Fake Legal Citations).
  • Government lawyers are not insulated. Two New Orleans government attorneys resigned over fake AI citations, and the 7th Circuit admonished a former immigration judge for fake cases in a brief.
  • Supervising attorneys carry personal liability for staff AI use. ABA Formal Opinion 512 and state bar rules — including California's mandatory human-review requirements — place the verification obligation on the supervising lawyer, not the associate or staff member who ran the query (→ Supervising Attorneys Face Sanctions for Failing to Verify AI-Generated Legal Citations).
  • For counsel advising firms on AI governance, the practical baseline is that unpaid sanctions now trigger contempt, citation verification is a non-delegable professional obligation, and the consequences span contempt proceedings, case dismissal, arbitral annulment, six-figure sanctions, and suspension — not just fines.

