
AI Professional Ethics

Tracking legal and regulatory developments in AI professional ethics.

6 entries in In-House Counsel Tracker

From Human-in-the-Loop to Human-at-the-Helm: Navigating the Ethics of Agentic AI

The legal profession is shifting from reactive oversight of AI systems to proactive governance designed for autonomous tools. As artificial intelligence has evolved from generative systems that produce text on demand to agentic systems capable of independent action—sending emails, populating filings, modifying records—the traditional model of lawyers reviewing AI output after completion has become inadequate. Legal ethics experts are now calling for "human-at-the-helm" governance that establishes parameters and controls what AI is permitted to do before it acts, rather than inspecting results afterward.

Law Firms Urged to Educate Staff on AI Amid Client Pressures

Law firms are hemorrhaging money on artificial intelligence tools they don't understand and won't use, according to analysis published May 4, 2026, in Above the Law and Tech Law Crossroads. Firms facing client pressure to deploy AI are panic-buying software without first establishing internal competency—resulting in wasted spending, abandoned platforms, and disappointed clients. The core problem: decision-makers lack basic literacy on how AI actually works, what it can and cannot do, and which tools fit specific practice needs. The recommended fix is straightforward: mandatory education on AI fundamentals for lawyers, firm leadership, and business development staff before any vendor selection or client pitch.

Clio Report: 71% of Small Law Firms Use AI, But Revenue Growth Lags Larger Competitors

Clio's 2026 Legal Trends report exposes a widening performance gap between small law firms and their larger competitors despite widespread AI adoption. While 71% of solo practitioners and 75% of small firms now use AI tools, fewer than 33% have increased revenues—a sharp contrast to enterprise firms where nearly 60% report revenue growth tied to AI implementation.

AI Disrupts Law Firm Billable Hour Model, Boosting Efficiency

Legal AI tools are reshaping law firm economics. Document review, drafting, and research are now 60–70% faster, with individual attorneys expected to save 190–240 billable hours annually. Thomson Reuters' 2025 Future of Professionals Report quantifies this as $20–32 billion in time savings across the U.S. market. Major clients—Meta, Zscaler, UBS—are already demanding "AI discounts" and refusing to pay for work automatable by machine. The pressure is immediate and client-driven.

Falcon Rappaport & Berkman Opens Newark AI-Native Law Office

Falcon Rappaport & Berkman has opened a dedicated Newark office at 3 Gateway Center designed as an AI-native incubator for the firm. The office will develop agentic AI tools to enhance client and attorney services across all practice areas and will serve as the operational hub for the firm's artificial intelligence capabilities.

LawSnap Briefing Updated May 10, 2026

State of play.

  • Agentic AI has forced a governance model shift from reactive review to pre-deployment controls. Legal ethics commentary now frames the operative standard as "human-at-the-helm" — establishing parameters before autonomous action, not inspecting outputs after — with the EU AI Act and NIST AI Risk Management Framework increasingly cited as the regulatory backdrop (→ From Human-in-the-Loop to Human-at-the-Helm: Navigating the Ethics of Agentic AI).
  • California is moving to convert advisory AI guidance into disciplinary-enforceable rules. COPRAC has proposed amendments to six Rules of Professional Conduct requiring independent verification of every AI output with no exceptions for routine matters — and has been directed to examine agentic AI implications as a next step.
  • Client-side privilege waiver through consumer AI is now a documented judicial risk. United States v. Heppner (S.D.N.Y.) held that documents a client generated using a public AI platform are not privileged, and advisory guidance is now explicitly warning clients against uploading privileged materials to ChatGPT or Claude (→ Articles Warn Clients Against Feeding Privileged Docs to Consumer AI).
  • ABA Formal Opinion 512 is the governing ethics baseline, but courts are imposing stricter standards through sanctions. The opinion's permission to reduce verification for "familiar" tools contradicts a Stanford study — cited in the opinion itself — finding legal AI hallucination rates of 17 to 33 percent (→ Mississippi and ABA AI Ethics Opinions Criticized for Inadequate Verification Guidance).
  • For counsel advising law firms or managing litigation teams, the practical baseline: California's proposed binding verification rule signals where state bars are heading; ethics opinions set a permissive floor that courts are already exceeding through sanctions; and client-side privilege risk now requires explicit engagement protocols before clients put sensitive materials into any public AI platform.

Where things stand.

  • ABA Formal Opinion 512 (July 2024) is the national ethics baseline. It requires technological competence under Model Rule 1.1, confidentiality protection, output verification, reasonable billing, and informed consent — and extends Rule 5.3 supervisory responsibility to AI-generated work product (→ Mississippi and ABA AI Ethics Opinions Criticized for Inadequate Verification Guidance, Supervising Attorneys Face Sanctions for Failing to Verify AI-Generated Legal Citations).
  • State bars are adopting Opinion 512 verbatim, compounding its gaps. Mississippi Ethics Opinion No. 267 reproduces Opinion 512 wholesale, including the contested permission to reduce verification for familiar tools — a pattern likely to repeat in other jurisdictions (→ Mississippi and ABA AI Ethics Opinions Criticized for Inadequate Verification Guidance).
  • California is the leading jurisdiction converting advisory guidance into binding rules. COPRAC's proposed amendments to six Rules of Professional Conduct would impose enforceable verification obligations with no low-stakes exceptions, transforming the State Bar's November 2023 practical guidance into disciplinary-enforceable standards; the public comment period has closed and the proposal awaits final adoption.
  • Fake-citation sanctions are a documented pattern across government and private practice. A Georgia prosecutor was suspended for AI-generated fake citations in a murder appeal; two New Orleans government attorneys resigned over the same issue; a Massachusetts attorney faced discipline; Flycatcher Corp. v. Affable Avenue produced a default judgment; and the 7th Circuit admonished a former immigration judge for citing fabricated cases (→ Supervising Attorneys Face Sanctions for Failing to Verify AI-Generated Legal Citations).
  • The privilege framework for AI-generated materials is unsettled at the district court level. SDNY's Heppner ruling turns on two factors — absence of attorney direction and public-platform confidentiality gaps — while E.D. Michigan's Warner v. Gilbarco found privilege intact where attorneys directed AI use without adversarial disclosure.
  • Agentic AI governance is emerging as the next compliance frontier. The shift from generative to agentic systems — tools that send emails, populate filings, and modify records autonomously — renders post-hoc review inadequate; tiered risk management with pre-deployment controls is the framework now being advocated, with significant governance gaps remaining around data access sprawl and permission accumulation (→ From Human-in-the-Loop to Human-at-the-Helm: Navigating the Ethics of Agentic AI).
  • Discovery workflows face a judicially imposed human-judgment floor. White v. Walmart (S.D. Ind., April 14, 2026) established that AI cannot satisfy the attorney's independent obligation to review, narrow disputes, and meet and confer in good faith (→ Indiana Judge Rules AI Cannot Substitute for Attorney Review in Discovery).
  • EDRM has published an embedded-safeguards framework as a competence benchmark. The guidance argues that training alone is insufficient — safeguards must be built into tools and must function under high-pressure conditions; Thomson Reuters is marketing "fiduciary-grade" AI as a response to this standard (→ EDRM Advocates Embedded AI Safeguards in Legal Tools for Competence Under Pressure).
  • Judicial AI adoption is broad. A Northwestern study found over 60 percent of surveyed federal judges report using AI in their work, raising questions about disclosure norms and the judiciary's own governance obligations.

What to watch.

  • Whether the California Supreme Court adopts COPRAC's proposed amendments and whether the final rule retains the no-exceptions verification standard — the first binding state ethics rule on AI will set the national benchmark.
  • Whether any circuit court takes up the Heppner/Warner privilege split — the first appellate ruling will set the standard for platform selection and client counseling firm-wide.
  • Whether additional state bars adopt Opinion 512 verbatim or begin diverging with stricter verification requirements in response to both the sanctions pattern and California's rulemaking.
  • Whether bar disciplinary bodies or courts begin specifying what "human-at-the-helm" governance for agentic AI must look like in practice — the governance gap is currently self-defined by firms.
  • Whether the White v. Walmart holding extends to other discovery contexts — privilege logging, document review, proportionality analysis — in follow-on decisions.
  • Whether client-side privilege waiver incidents — driven by consumer AI use without counsel direction — begin generating malpractice claims against firms that failed to instruct clients on platform risks.

Subscribe to AI Professional Ethics email updates

Primary sources. No fluff. Straight to your inbox.
