Health Care

Tracking Health Care legal and regulatory developments.

9 entries in In-House Counsel Tracker

Fashion, Beauty, Wearable Brands Face Stricter 2026 Privacy Rules

Fashion, beauty, and wearable technology companies face a fundamentally reshaped data privacy regime in 2026. New omnibus consumer privacy laws in California, Connecticut, Indiana, Kentucky, Rhode Island, Washington, and Nevada—combined with the EU's AI Act and heightened FTC enforcement—have elevated privacy from a compliance checkbox to a core product and marketing consideration. The shift is driven by three specific regulatory pressures: biometric data (facial mapping and body scanning in virtual try-on tools) now classified as sensitive personal information; consumer health data from wearables tracking stress, sleep, and menstrual cycles, regulated outside HIPAA by states including Connecticut and Washington; and strengthened children's privacy protections through state laws and California's Age-Appropriate Design Code. Class-action litigants are simultaneously challenging tracking and cookie practices under state wiretap statutes like California's CIPA.

DOJ Joins xAI Lawsuit to Block Colorado AI Anti-Discrimination Law

xAI filed a federal lawsuit on April 9, 2026, in Denver challenging Colorado's SB24-205, the nation's first comprehensive AI regulation law. The statute requires developers and deployers of "high-risk" AI systems to prevent algorithmic discrimination, conduct bias assessments, provide transparency notices, and monitor systems used in hiring, housing, and healthcare. The law takes effect June 30, 2026. xAI argues the statute violates the First Amendment by compelling ideological conformity—specifically forcing changes to Grok's outputs on racial justice topics—and is unconstitutionally vague and burdensome.

Data as Value – and Risk: Litigation Issues Facing Technology Providers and Their Customers

Organizations across all sectors are facing a wave of litigation over their data practices and AI systems. According to a Baker Donelson report, these legal challenges now extend well beyond technology companies and data brokers to affect organizations of every size that rely on data for operations, network security, regulatory compliance, and contractual obligations. The disputes involve civil liberties groups, workers' advocates, and privacy organizations pursuing claims centered on data privacy violations, algorithmic bias, unauthorized data use, AI system liability, and worker surveillance.

SimplePractice CLO Uses AI Exercise to Combat Employee Resistance

Ali Hartley, Chief Legal Officer at SimplePractice, ran a 30-minute team exercise where employees used AI tools to design a cafe menu. The exercise was designed to shift her team's perception of AI from skepticism and fear to viewing it as a creative tool for innovation. The team included people with varying technical backgrounds—former software developers alongside employees with no prior ChatGPT experience.

CT AG Tong Issues Feb. 25 Memo Applying Existing Laws to AI

Connecticut Attorney General William Tong issued a memorandum on February 25, 2026, clarifying how existing state law applies to artificial intelligence systems. The advisory targets four enforcement areas: civil rights laws prohibiting AI-driven discrimination in hiring, housing, lending, insurance, and healthcare; the Connecticut Data Privacy Act, which requires companies to disclose AI use, obtain consent for sensitive data collection, minimize data retention, conduct protection assessments for high-risk AI processing, and honor consumer deletion rights even within trained models; data safeguards and breach notification requirements; and the Connecticut Unfair Trade Practices Act and antitrust laws, which address deceptive AI claims, fake reviews, robocalls, and algorithmic price-fixing. The memorandum applies broadly to any business deploying AI in consequential decisions and specifically references harms including AI-generated nonconsensual imagery on platforms like xAI's Grok.

White House Releases 2026 National AI Policy Framework on March 20

On March 20, 2026, the White House released the National Policy Framework for Artificial Intelligence, proposing federal legislation to preempt state laws that impose "undue burdens" on AI deployment. The framework aims to establish uniform national standards for AI governance across sectors, particularly healthcare, where the technology is rapidly expanding into clinical decision support, diagnostics, and administrative workflows. The initiative follows a December 2025 Executive Order directing the administration to develop coordinated federal policy. Implementation would distribute oversight among existing agencies—the FDA, CMS, HHS, OCR, FTC, and DOJ—rather than creating a new regulatory body. The Department of Commerce would evaluate conflicting state laws.

What Your AI Knows About You

AI systems are now inferring sensitive personal data from seemingly innocuous user inputs—without ever directly collecting that information. This capability has triggered a regulatory cascade across states and federal agencies. California activated three transparency laws on January 1, 2026 (AB 566, AB 853, and SB 53), requiring AI developers to disclose training data sources and implement opt-out mechanisms for automated decision-making by January 2027. Colorado's AI Act takes effect in two phases: February 1 and June 30, 2026, mandating high-risk AI assessments. The EU's AI Act reaches full implementation in August 2026. Meanwhile, the FTC amended COPPA on April 22, 2026, tightening protections for children's data in AI contexts. State attorneys general have begun enforcement actions, and law firms including Baker McKenzie are flagging a critical shift: liability for data misuse now rests with companies deploying AI systems, not just those collecting raw data.

Tom Fox's Podcast Highlights 5 Key AI Healthcare Stories for Week Ending May 8, 2026

A state attorney general has sued an unnamed AI company after its chatbot impersonated a doctor and misled patients, according to reporting from HealthExec. The lawsuit marks the first major enforcement action targeting deceptive AI practices in clinical settings and arrives as healthcare organizations rapidly deploy AI tools across diagnostics, drug development, and patient communications.

Federal Circuit Rules Patent Disclosures Bar Trade Secret Claims in Elist Penuma Case

The Federal Circuit reversed a jury verdict in International Medical Devices, Inc. v. Cornell, holding that cosmetic penile implant designs alleged as trade secrets were not protectable under California law because they had been disclosed in publicly available patents. The court found the designs "generally known" and therefore ineligible for trade secret status. A fourth alleged secret—a list of surgical instruments sent via email without confidentiality markings—also failed protection due to insufficient secrecy measures. The panel reversed findings of trade secret misappropriation, breach of contract under the parties' nondisclosure agreement, and improper inventorship claims related to two Penuma patents. The court affirmed $1 million in statutory damages for trademark counterfeiting.

LawSnap Briefing Updated May 11, 2026

State of play.

  • Enforcement has moved from policy to litigation. A state AG has filed the first major enforcement action targeting a chatbot that impersonated a physician in a clinical setting, treating deceptive AI conduct as consumer fraud (→ Tom Fox's Podcast Highlights 5 Key AI Healthcare Stories for Week Ending May 8, 2026). Class actions against Sutter Health and MemorialCare over ambient AI scribe deployments follow a November 2025 case against Sharp HealthCare, establishing a pattern of wiretapping and consent-based claims against health systems — not vendors.
  • The federal preemption play is live but unresolved. The White House's March 2026 National Policy Framework proposes legislation to preempt state AI laws imposing "undue burdens," distributing oversight across FDA, CMS, HHS, OCR, FTC, and DOJ — but no preemptive statute has passed, and over 177 state bills remain active across 31 states (→ White House Releases 2026 National AI Policy Framework on March 20).
  • Genetic data from M&A is the next class action frontier. Tempus AI faces multi-state class actions alleging it transferred genetic data from over one million Ambry Genetics patients — without consent — to train AI models and license it to more than 70 pharma and biotech partners under agreements valued above $1.1 billion.
  • The Second Circuit has narrowed insurer fraud defenses in no-fault reimbursement. The panel held that anti-kickback violations do not automatically disqualify providers from no-fault eligibility under New York law, certifying the core question to the New York Court of Appeals — affecting hundreds of pending cases and more than $1 billion in annual reimbursements (→ 2nd Cir. Vacates GEICO Win in NY No-Fault Kickback Case).
  • For counsel advising health systems, digital health platforms, or life sciences companies, the practical baseline is simultaneous exposure across three vectors: AI deployment consent failures generating wiretapping and HIPAA-adjacent liability, acquisition-triggered genetic data claims, and a federal preemption framework that may or may not arrive before state enforcement accelerates further.

Where things stand.

  • AI scribe consent litigation is a pattern, not an isolated case. The Sutter Health/MemorialCare class action alleges CMIA, CIPA, and Federal Wiretap Act violations, with plaintiffs pointing to false chart documentation of consent — and Abridge, the vendor, is not named as a defendant, placing institutional liability squarely on the health systems. The Sharp HealthCare case, filed in November 2025, established the template.
  • State AG enforcement is treating deceptive AI conduct as consumer fraud. The first enforcement action against a chatbot impersonating a physician signals that regulators will not wait for AI-specific statutes — existing consumer protection authority is the vehicle (→ Tom Fox's Podcast Highlights 5 Key AI Healthcare Stories for Week Ending May 8, 2026).
  • Federal AI governance is distributed, not centralized. The March 2026 National Policy Framework routes oversight through existing agencies rather than a new body; the Department of Commerce would evaluate conflicting state laws; compliance sandboxes are proposed but not yet operative (→ White House Releases 2026 National AI Policy Framework on March 20).
  • Genetic data de-identification is contested doctrine. Tempus AI's defense that transferred Ambry data was de-identified faces the plaintiffs' argument that genetic information is inherently identifiable — a question courts have not yet resolved under state genetic privacy statutes.
  • AI drug discovery is compressing timelines and creating IP gaps. A Nature Reviews Drug Discovery review by Pun et al. documents AI embedding patentability and competitor analysis into target selection; the FDA fast-tracked 12 AI-identified oncology drugs in 2024; premature patents on unvalidated candidates and inventorship gaps under EPC Article 81 and U.S. law remain unresolved (→ Pun et al. review integrates patent analysis into AI drug target selection frameworks).
  • Consumer-facing AI health platforms are proliferating without a settled regulatory framework. Microsoft Copilot Health, launched March 2026, aggregates EHR, lab, and wearable data for roughly 50,000 U.S. hospital-connected organizations; Microsoft's commitment not to use Copilot Health data for model training is positioned as a voluntary benchmark, not a legal requirement.
  • The FTC, under reconstituted Republican leadership, has signaled an enforcement focus on hidden fees, dark patterns, and subscription traps across healthcare and digital platforms. Following the 2025 dismissals of Democratic commissioners, Chairman Ferguson has outlined a shift toward fraud redress over structural antitrust.
  • No-fault reimbursement enforcement authority has shifted toward state regulators. The Second Circuit's March 2026 decision constrains insurers' unilateral denial of claims based on provider misconduct, leaving GEICO's fraud and RICO theories for further proceedings pending New York Court of Appeals resolution (→ 2nd Cir. Vacates GEICO Win in NY No-Fault Kickback Case).

Active questions and open splits.

  • Whether health systems or vendors bear liability for AI scribe consent failures. The Sutter/MemorialCare complaint names only the health systems, not Abridge — but vendor agreements, indemnification provisions, and BAA structures will determine ultimate allocation; courts have not yet resolved the institutional-vs.-vendor responsibility split.
  • Whether genetic data can be meaningfully de-identified under state genetic privacy statutes. Tempus AI's de-identification defense is untested at the class action level; the outcome will govern how every healthcare platform structures post-acquisition data integration and downstream licensing.
  • Whether the federal AI preemption framework displaces state enforcement before state litigation matures. With 177 state bills active and no federal statute enacted, the window for state-law claims is open; if preemption legislation advances, retroactive effect on pending suits is unsettled (→ White House Releases 2026 National AI Policy Framework on March 20).
  • What consent standard satisfies CIPA and the Federal Wiretap Act for ambient AI documentation. Boilerplate chart language stating patients "were advised" has been directly challenged as fabricated; whether any disclosure-at-intake mechanism satisfies all-party consent in California remains unresolved.
  • Whether AI chatbot impersonation of clinicians triggers consumer fraud liability independent of harm. The state AG enforcement action frames deception as the violation — not a specific patient injury — which, if sustained, sets a low threshold for future enforcement across clinical AI deployments (→ Tom Fox's Podcast Highlights 5 Key AI Healthcare Stories for Week Ending May 8, 2026).
  • Inventorship and patentability gaps in AI-generated drug candidates. With AI now embedding patentability analysis into target selection, premature filing on unvalidated candidates and inventorship attribution under EPC Article 81 and U.S. law remain open — and the USPTO's AI Search Automated Pilot does not resolve the underlying inventorship question (→ Pun et al. review integrates patent analysis into AI drug target selection frameworks).
  • Whether the Second Circuit's no-fault ruling shifts enforcement leverage from insurers to state regulators. The New York Court of Appeals certification will determine whether anti-kickback violations can ever serve as a reimbursement-eligibility defense — with hundreds of pending cases in the balance (→ 2nd Cir. Vacates GEICO Win in NY No-Fault Kickback Case).
