About
AI Attorney Accountability

Tracking legal and regulatory developments in AI attorney accountability.

4 entries in Tech Counsel Tracker

Mississippi and ABA AI Ethics Opinions Criticized for Inadequate Verification Guidance

The Mississippi State Bar adopted formal ethics guidance on generative AI use that permits lawyers to reduce verification requirements when using legal-specific tools, provided they have prior experience with the system. Mississippi Ethics Opinion No. 267, adopted verbatim from ABA Formal Opinion 512 issued in July 2024, establishes baseline principles requiring lawyers to protect client confidentiality, use technology competently, verify outputs, bill reasonably, and obtain informed consent. The opinion's core permission—allowing "less independent verification or review" for familiar tools—has drawn sharp criticism for creating standards that contradict the ABA's own cited research.

New Jersey lawyer faces contempt over unpaid AI sanctions in Diddy case

Tyrone Blackburn, the attorney representing Liza Gardner in a sexual assault civil suit against Sean "Diddy" Combs, faces a contempt hearing in New Jersey federal court over unpaid sanctions tied to AI-generated case citations. U.S. District Judge Noel L. Hillman ordered Blackburn to pay $6,000 in December 2025—$500 monthly—after finding that a brief he filed contained a fabricated case opinion produced by an artificial intelligence research tool. The case cited did not exist.

LawSnap Briefing Updated May 10, 2026

State of play.

  • Sanctions for AI-generated fake citations have escalated from warnings to career-ending consequences. The Sixth Circuit has imposed six-figure sanctions and removed an attorney for "inexcusable" AI transgressions; a Pennsylvania federal judge imposed a $5,000 fine plus mandatory AI ethics coursework; California's State Bar suspended one attorney and charged two more; and a Florida appeals court has referred a divorce attorney to the Florida Bar.
  • The privilege waiver risk from consumer AI use is now judicially established — with a direct circuit split in the making. Judge Rakoff's ruling in United States v. Heppner (S.D.N.Y.) found that a defendant's unsupervised use of Claude destroyed privilege over 31 strategy documents; a Michigan magistrate reached the opposite conclusion in Warner v. Gilbarco, treating ChatGPT as a neutral tool (→ SDNY Rules AI Tools Waive Privilege in US v. Heppner).
  • California is moving from advisory guidance to binding, disciplinary-enforceable AI rules. COPRAC has proposed amendments to six Rules of Professional Conduct requiring independent verification of every AI output — no exceptions for routine matters — and has directed examination of agentic AI implications.
  • The ABA's own ethics guidance is under fire for internal contradiction. ABA Formal Opinion 512 — adopted verbatim by Mississippi as Ethics Opinion No. 267 — permits reduced verification for "familiar" tools while citing a Stanford study finding hallucination rates of 17–33% in leading legal AI platforms (→ Mississippi and ABA AI Ethics Opinions Criticized for Inadequate Verification Guidance).
  • For counsel advising law firms, in-house legal departments, or individual practitioners, the practical baseline is: verification of AI outputs is a non-delegable professional obligation, consumer AI platforms carry active privilege-waiver risk, California's proposed binding rules are the leading indicator for national standards, and bar discipline is now a live enforcement vector alongside judicial sanctions.

Where things stand.

  • ABA Formal Opinion 512 is the governing ethics framework — and its internal tension is the central advisory problem. Issued July 2024, it requires competence, confidentiality protection, output verification, reasonable billing, and informed consent for AI use, but permits "less independent verification" for familiar tools — a permission critics argue is contradicted by the hallucination data the opinion itself cites (→ Mississippi and ABA AI Ethics Opinions Criticized for Inadequate Verification Guidance).
  • Judicial sanctions for AI hallucinations span multiple circuits and state bars. The Sixth Circuit, multiple Pennsylvania federal judges, the California State Bar, and Florida appellate courts have all moved from cautionary language to enforcement — suspensions, six-figure sanctions, mandatory training, and bar referrals (→ Supervising Attorneys Face Sanctions for Failing to Verify AI-Generated Legal Citations).
  • The privilege-waiver doctrine for consumer AI use is unsettled at the trial court level. Heppner (S.D.N.Y.) treats AI as a non-attorney third party whose permissive terms of service destroy confidentiality; Warner v. Gilbarco (E.D. Mich.) treats AI as a neutral tool like a word processor. No appellate court has resolved the split (→ SDNY Rules AI Tools Waive Privilege in US v. Heppner).
  • Supervising attorney liability under Model Rule 5.3 is an active enforcement theory. Courts and bar authorities are holding supervising lawyers — not just the filing attorney — responsible for AI output that reaches courts without adequate review (→ Supervising Attorneys Face Sanctions for Failing to Verify AI-Generated Legal Citations).
  • Client-side AI use is an established client management and privilege-counseling obligation. The Heppner ruling and advisory guidance from Varnum LLP and the National Law Review establish that clients uploading privileged documents to consumer AI platforms waive privilege — a risk firms must proactively address in client communications (→ Articles Warn Clients Against Feeding Privileged Docs to Consumer AI, SDNY Rules AI Tools Waive Privilege in US v. Heppner).
  • Hallucination rates documented in third-party research remain high. Stanford research found hallucination rates of 17–33% in leading legal AI platforms; a separate Stanford study found rates of 58–88% across state-of-the-art models answering direct legal questions (→ Mississippi and ABA AI Ethics Opinions Criticized for Inadequate Verification Guidance).
  • ALSPs are positioning as lower-risk AI testing environments. The ALSP sector — valued at $28.5 billion with an 18% CAGR — is absorbing AI experimentation that firms cannot safely run client-side, with 16 state bar associations and the EU establishing regulatory sandboxes for controlled testing (→ ALSPs Position Themselves as Controlled Testing Grounds for Legal AI).
  • Competence doctrine is expanding to cover real-time litigation technology. Live transcription, AI-assisted deposition analysis, and remote expert observation are reshaping what courts expect from prepared counsel — adding technological proficiency as a measurable component of Rule 1.1 compliance.

Active questions and open splits.

  • Is consumer AI use a privilege-destroying disclosure? Heppner says yes — Anthropic's permissive privacy policy and AI's non-attorney status destroy confidentiality; Warner v. Gilbarco says no — AI is a neutral tool like a word processor. No appellate court has resolved this, and the question is now live in every matter where clients or counsel use consumer AI platforms (→ SDNY Rules AI Tools Waive Privilege in US v. Heppner).
  • Does the "agent" exception preserve privilege for lawyer-directed client AI use? Heppner left open whether a lawyer directing a client's AI use — analogous to engaging an accountant — could preserve privilege. No court has tested this exception, and its scope across practice areas is undefined .
  • What verification standard satisfies Rule 1.1 competence for AI outputs? ABA Opinion 512 permits reduced verification for familiar tools; California's proposed rules reject that permission entirely, requiring independent review of every output. Courts imposing sanctions have not articulated a positive standard — only that unverified filing is insufficient. The gap between the ABA guidance floor and the California proposed ceiling is now the central advisory question (→ Mississippi and ABA AI Ethics Opinions Criticized for Inadequate Verification Guidance, EDRM Advocates Embedded AI Safeguards in Legal Tools for Competence Under Pressure).
  • What distinguishes sanctionable from non-sanctionable AI citation errors? The Mostafavi no-sanction ruling in California conflicts with escalating penalties elsewhere. Courts have not articulated what good-faith error looks like versus culpable reliance — leaving practitioners without a clear safe harbor.
  • How far does supervising attorney liability extend under Rule 5.3? Courts are holding supervisors accountable for subordinate AI use, but the contours — what oversight is required, at what frequency, for which tools — remain undefined. Firms with tiered associate/partner review structures face the most immediate exposure (→ Supervising Attorneys Face Sanctions for Failing to Verify AI-Generated Legal Citations).
  • Does client-side AI use trigger a proactive counseling obligation? The Heppner ruling and advisory guidance suggest counsel must affirmatively instruct clients not to input privileged materials into consumer AI. Whether failure to do so creates independent malpractice exposure — separate from the privilege waiver itself — is unresolved (→ Articles Warn Clients Against Feeding Privileged Docs to Consumer AI, SDNY Rules AI Tools Waive Privilege in US v. Heppner).
  • Will California's proposed binding rules — and SB 574 — set a mandatory verification standard that other states adopt? The COPRAC proposal would be the first binding state ethics rule codifying AI verification obligations with no reduced-scrutiny exception; SB 574 would add a statutory layer restricting client data in public AI tools. If either or both are finalized, they become the national compliance benchmark.

What to watch.

  • California Supreme Court action on the COPRAC proposed amendments — whether the no-exceptions verification standard survives or is modified, and the timeline to final adoption.
  • Appellate review of the Heppner/Warner split — whether any circuit takes up the question of whether consumer AI use constitutes a privilege-destroying third-party disclosure.
  • California Supreme Court final determinations on Khalifeh and Romeyn — the first state bar disbarment proceedings directly tied to AI hallucination conduct will set the disciplinary severity benchmark.
  • Whether DOJ OPR issues guidance or policy following the Renfer resignation — government-practice AI governance is currently ad hoc, and an OPR opinion would reshape federal practice standards.
  • Additional state bar formal opinions following Mississippi's verbatim adoption of ABA Opinion 512 — whether bars modify the "familiar tool" reduced-verification permission in response to California's stricter proposed standard.
  • EDRM and Thomson Reuters "fiduciary-grade" AI rollout — whether embedded-safeguard tools gain market adoption fast enough to become the de facto competence baseline courts reference in sanctions analysis.
