Privacy

Tracking legal and regulatory developments in privacy.

42 entries in Litigator Tracker

DOJ export indictment triggers new probe of Super Micro’s controls

The Department of Justice unsealed an indictment in March 2026 charging three individuals tied to Super Micro Computer—two former employees and one contractor—with conspiring to violate U.S. export controls. The defendants allegedly diverted approximately $2.5 billion worth of servers containing advanced AI technology, including Nvidia chips, to China between 2024 and 2025. The indictment names co-founder and former senior vice president Yih‑Shyan "Wally" Liaw and a general manager from Super Micro's Taiwan office, who prosecutors say coordinated shipments through a third-party intermediary to circumvent export restrictions. Super Micro itself is not charged and has stated it was not accused of wrongdoing.

New York Enacts AI Digital Replica Laws for Fashion Models Effective June 2026

New York has enacted sweeping restrictions on synthetic performers in fashion and beauty advertising. Governor Kathy Hochul signed two bills into law on December 11, 2025—the Fashion Workers Act (S9832) and synthetic performer disclosure laws (S.8420-A/A.8887-B)—that take effect June 19, 2026. The laws require explicit consent from human models before their likenesses can be replicated digitally and mandate clear disclaimers whenever AI avatars appear in advertisements. Violations carry fines of $500 to $1,000. The New York Department of Labor will oversee model agency registration by June 2026. These rules arrive as brands including H&M plan to deploy digital twins for marketing, and virtual models like Shudu and Lil Miquela compete directly with human performers for contracts.

DOJ Intervenes in xAI Lawsuit to Block Colorado's AI Discrimination Law

xAI filed suit on April 9, 2026, in U.S. District Court for the District of Colorado to block enforcement of Colorado's SB24-205, a comprehensive AI anti-discrimination law scheduled to take effect June 30, 2026. The statute requires developers and deployers of high-risk AI systems—those used in hiring, lending, and admissions decisions—to conduct impact assessments, make disclosures, and implement risk mitigation measures to prevent algorithmic discrimination. Two weeks later, on April 24, the U.S. Department of Justice intervened with its own complaint, arguing the law violates the Equal Protection Clause by compelling demographic adjustments through disparate-impact liability while simultaneously authorizing discrimination through exemptions for diversity initiatives. The court granted DOJ's intervention and issued a stay suspending enforcement pending resolution.

Fashion, Beauty, Wearable Brands Face Stricter 2026 Privacy Rules

Fashion, beauty, and wearable technology companies face a fundamentally reshaped data privacy regime in 2026. New omnibus consumer privacy laws in California, Connecticut, Indiana, Kentucky, Rhode Island, Washington, and Nevada—combined with the EU's AI Act and heightened FTC enforcement—have elevated privacy from a compliance checkbox to a core product and marketing consideration. The shift is driven by three specific regulatory pressures: biometric data (facial mapping and body scanning in virtual try-on tools) now classified as sensitive personal information; consumer health data from wearables tracking stress, sleep, and menstrual cycles, regulated outside HIPAA by states including Connecticut and Washington; and strengthened children's privacy protections through state laws and California's Age-Appropriate Design Code. Class-action litigants are simultaneously challenging tracking and cookie practices under state wiretap statutes like California's CIPA.

Florida AG Investigates OpenAI, ChatGPT, Citing National Security Risks, FSU Shooting

Florida Attorney General James Uthmeier announced on April 9, 2026, that his office is launching an investigation into OpenAI and its ChatGPT models, alleging their role in facilitating a 2025 Florida State University (FSU) shooting, harming minors, enabling criminal activity, and posing national security risks from potential exploitation by adversaries such as the Chinese Communist Party. Subpoenas are forthcoming, with probes focusing on ChatGPT's alleged assistance to the FSU gunman—who queried it on the day of the April 17, 2025, attack about public reaction to a shooting and peak times at the FSU student union—as well as links to child sex abuse material, grooming, and suicide encouragement.

Florida court tosses DPPA parking citation lawsuit over lack of injury

A federal judge in the Southern District of Florida dismissed a class-action lawsuit under the Driver's Privacy Protection Act against Professional Parking Management Corporation, finding the plaintiff lacked Article III standing. The suit alleged the company used license plate readers in private parking lots, cross-referenced plates against state DMV records without consent, and mailed notices demanding $94.99—styled to resemble official citations—for unpaid parking charges. The plaintiff sought nationwide class certification and added Florida consumer-protection claims.

Federal Court Halts Colorado AI Law Enforcement Days Before June Deadline

A federal magistrate judge in Colorado issued a stay on April 27, 2026, freezing enforcement of the Colorado AI Act (SB24-205) just weeks before its scheduled June 30 effective date. The order prevents the Colorado Attorney General from initiating investigations or enforcement actions under the law, effectively halting one of the country's most comprehensive state AI regulations. Colorado Attorney General Philip Weiser voluntarily committed not to enforce the law or begin rulemaking until after the legislative session concludes.

Tesla Owners Sue Over Unfulfilled FSD Promises on HW3 Hardware

Tesla faces coordinated class-action litigation across multiple jurisdictions from owners of Hardware 3-equipped vehicles manufactured between 2016 and 2024. The plaintiffs allege that Tesla and Elon Musk falsely represented that these vehicles would achieve full self-driving capability through software updates alone. A spring 2026 software release exposed Hardware 3's technical limitations, effectively excluding millions of owners from advanced autonomous features now reserved for newer Hardware 4 systems. The lead case, brought by retired attorney Tom LoSavio, centers on buyers who paid $8,000 to $12,000 for full self-driving capability that is now incompatible with their vehicles without costly hardware retrofits Tesla has not formally offered. Similar suits have been filed in Australia, across Europe (including the Netherlands), and in California, where one action involves approximately 3,000 plaintiffs. Globally, the disputes affect roughly 4 million vehicles.

Articles Warn Clients Against Feeding Privileged Docs to Consumer AI

On May 8, 2026, The National Law Review and Varnum LLP published advisory articles warning clients against misusing consumer AI tools in legal matters. The pieces detail a specific risk: uploading privileged documents—draft agreements, legal memos, work product—into platforms like ChatGPT or Claude waives attorney-client privilege by exposing confidential information to third parties with no confidentiality obligations. The articles also caution that AI models tend to validate user assumptions rather than provide objective legal analysis, making them unreliable validators of legal advice.

Ninth Circuit Affirms Dismissal of Brita Filter Class Action on April 16, 2026

On April 16, 2026, the Ninth Circuit affirmed dismissal of a consumer class action against Brita Products Company, holding that a reasonable consumer would not expect a $15 water filter to remove all hazardous contaminants. Plaintiff Nicholas Brown sued under California's Unfair Competition Law, False Advertising Law, and Consumers Legal Remedies Act, claiming Brita's labels for its Everyday Pitcher and Standard Filter misled buyers into believing the products eliminated contaminants like arsenic, chromium-6, PFOA, PFOS, nitrates, and radium to undetectable levels. The three-judge panel, led by Judge Kim McLane Wardlaw, rejected the claims after the Los Angeles district court had already dismissed without leave to amend in September 2024.

DOJ Joins xAI Lawsuit to Block Colorado AI Anti-Discrimination Law

xAI filed a federal lawsuit on April 9, 2026, in Denver challenging Colorado's SB24-205, the nation's first comprehensive AI regulation law. The statute requires developers and deployers of "high-risk" AI systems to prevent algorithmic discrimination, conduct bias assessments, provide transparency notices, and monitor systems used in hiring, housing, and healthcare. The law takes effect June 30, 2026. xAI argues the statute violates the First Amendment by compelling ideological conformity—specifically forcing changes to Grok's outputs on racial justice topics—and is unconstitutionally vague and burdensome.

FTC and Congress intensify surveillance pricing crackdown amid state legislative wave

Federal regulators and lawmakers are moving aggressively against surveillance pricing—the practice of using consumer data to set individualized prices for identical products or services. In April 2026, FTC leadership told Congress that staff work on the issue continues, with the agency considering whether new disclosure requirements should apply to highly personalized, data-driven pricing. That same month, the House Oversight Committee launched a formal investigation, sending letters to major travel and platform companies demanding documentation on revenue management algorithms, consumer data practices, and testing protocols.

Data as Value – and Risk: Litigation Issues Facing Technology Providers and Their Customers

Organizations across all sectors are facing a wave of litigation over their data practices and AI systems. According to a Baker Donelson report, these legal challenges now extend well beyond technology companies and data brokers to affect organizations of every size that rely on data for operations, network security, regulatory compliance, and contractual obligations. The disputes involve civil liberties groups, workers' advocates, and privacy organizations pursuing claims centered on data privacy violations, algorithmic bias, unauthorized data use, AI system liability, and worker surveillance.

FTC Reports $2.1B Losses from Social Media Scams in 2025

The Federal Trade Commission released data on April 27, 2026, documenting $2.1 billion in reported losses from social media scams during 2025—making them the costliest fraud contact method on record. Nearly 30 percent of victims who lost money reported the fraud originated on social media, an eightfold increase from 2020. Facebook accounted for the largest share of losses, exceeding WhatsApp and Instagram combined and surpassing text or email scams individually.

SDNY Rules AI Tools Waive Privilege in US v. Heppner

A federal judge in Manhattan has ruled that a financial services executive waived attorney-client privilege and work product protection by using Anthropic's Claude AI tool without his lawyers' involvement. In United States v. Heppner, Judge Jed S. Rakoff ordered disclosure of 31 strategy documents the defendant generated after inputting case details derived from attorney communications. The court found that Claude, as a non-attorney third party, lacks fiduciary duties, and that Anthropic's privacy policy—which permits data use for training and third-party sharing—destroyed any reasonable expectation of confidentiality. This marks the first federal decision of its kind, rejecting the defendant's argument that later sharing the materials with counsel could retroactively restore privilege protection.

Seventh Circuit Rules BIPA Damages Cap Applies to Pending Cases

On April 1, 2026, the U.S. Court of Appeals for the Seventh Circuit issued a consolidated decision in Clay v. Union Pacific Railroad Co. holding that Illinois' August 2024 amendment to the Biometric Information Privacy Act applies retroactively to all pending cases. The amendment, enacted as SB 2979, caps statutory damages at one recovery per person per biometric collection method—eliminating the "per-scan" liability model that had exposed defendants to dramatically larger damages. The court reversed three district court decisions from the Northern District of Illinois that had held the amendment applied only to future claims.

CalPrivacy Seeks Comments on CCPA Employee Data Notices by May 20

The California Privacy Protection Agency opened a public comment period on April 20, 2026, to solicit input on potential updates to California Consumer Privacy Act regulations governing privacy notices, disclosures, and employee data handling. The agency is examining whether current rules—which require businesses to provide privacy policies, notices at collection, and rights notifications for employees' personal information—require revision or new provisions specific to employment contexts. Comments are due by 5:00 p.m. PT on May 20, 2026, submitted via email to regulations@cppa.ca.gov or by mail. The agency has posed specific questions on consumer clarity, effective notice examples, worker expectations for data collection and use, and employer compliance challenges.

Federal Court Rules AI Chatbot Communications Not Protected by Attorney-Client Privilege

On February 17, 2026, Judge Jed S. Rakoff of the U.S. District Court for the Southern District of New York ruled in United States v. Heppner that a criminal defendant's communications with Anthropic's Claude AI platform were not protected by attorney-client privilege or work product doctrine. The defendant had used the public chatbot to create analysis documents after receiving a grand jury subpoena, then claimed privilege when sharing them with counsel. The court ordered disclosure to the government.

Federal Court Dismisses Paramount Privacy Lawsuit Over Concrete Injury Standard

The U.S. District Court for the Central District of California dismissed all eight counts in a privacy lawsuit against Paramount Skydance Corporation on April 20, 2026, finding that plaintiffs lacked legal standing. The court ruled plaintiffs failed to demonstrate an injury aligned with harms traditionally recognized under American law. The complaint had alleged violations of the Video Privacy Protection Act, Electronic Communications Privacy Act, California Invasion of Privacy Act, common law invasion of privacy, California constitutional privacy rights, negligence, breach of implied contract, and unjust enrichment.

ACC Urges CA Appeals Court to Rule CIPA Doesn't Cover Website Cookies, Pixels

The Association of Corporate Counsel filed an amicus brief on April 8, 2026, urging the California Court of Appeal to clarify that the California Invasion of Privacy Act does not extend to routine website technologies like cookies, tracking pixels, and analytics metadata. ACC argues that plaintiffs are mischaracterizing these tools as "pen registers" or "trap and trace devices"—law enforcement surveillance mechanisms that require court orders under CIPA—when they serve ordinary business functions. The brief, authored by Fisher Phillips attorneys Usama Kahf, Darcey Groden, and David Shannon, contends that applying CIPA's warrant requirement to standard web analytics creates untenable compliance burdens for businesses nationwide.

Sanders and AOC call for federal AI moratorium amid regulatory debate

Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez have introduced a proposal for a federal moratorium on AI development and data centers, characterizing artificial intelligence as an "imminent existential threat." The call for restrictions has crystallized a fundamental policy divide: whether AI requires aggressive regulatory intervention or a risk-based approach that permits innovation while addressing specific harms.

Second Circuit Affirms Dismissal of VPPA Class Action Against NBCUniversal

On April 23, 2026, the U.S. Court of Appeals for the Second Circuit affirmed a lower court's dismissal of a class action alleging violations of the Video Privacy Protection Act. Plaintiff Sherhonda Golden sued NBCUniversal Media over Today.com's use of a Facebook Pixel—tracking code that transmitted her Facebook ID and video-viewing history to Meta without her consent. The Second Circuit ruled that the transmitted data did not constitute "personally identifiable information" under the VPPA because an ordinary person could not readily connect it to her identity and viewing habits without technical expertise.

7th Circuit Rules 2024 BIPA Damages Amendment Applies Retroactively to Pending Cases

On April 1, 2026, the U.S. Court of Appeals for the Seventh Circuit unanimously held that Illinois' August 2024 amendment to the Biometric Information Privacy Act applies retroactively to all pending cases. In Clay v. Union Pacific Railroad Co. (consolidated with Willis and Gregg), the court classified the amendment as procedural rather than substantive, allowing it to govern cases filed before its effective date. The amendment fundamentally restructures BIPA damages by capping recovery at $1,000 per violation for negligent violations and $5,000 for intentional ones—eliminating the "per-scan" theory that previously allowed plaintiffs to multiply damages across each biometric collection or transmission event.
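The stakes of the per-scan versus per-person distinction can be shown with a back-of-the-envelope calculation. This is a purely hypothetical sketch: the class size and scan frequency below are invented for illustration, and no actual damages formula from the case is implied.

```python
# Hypothetical illustration of the BIPA damages shift: the amended statute
# caps recovery at one award per person per collection method, while the
# older "per-scan" theory multiplied awards across every scan event.
# All inputs below are invented for illustration.

NEGLIGENT_CAP = 1_000    # statutory damages per negligent violation
INTENTIONAL_CAP = 5_000  # statutory damages per intentional violation

def per_person_exposure(employees: int, cap: int) -> int:
    """Post-amendment model: one recovery per person per collection method."""
    return employees * cap

def per_scan_exposure(employees: int, scans_per_person: int, cap: int) -> int:
    """Pre-amendment 'per-scan' theory: one recovery per scan event."""
    return employees * scans_per_person * cap

workers = 500    # hypothetical class size
scans = 2 * 250  # e.g. two fingerprint clock-ins a day over 250 workdays

print(per_person_exposure(workers, NEGLIGENT_CAP))       # 500000
print(per_scan_exposure(workers, scans, NEGLIGENT_CAP))  # 250000000
```

Even with modest assumed numbers, the per-scan theory produced exposure three orders of magnitude larger, which is why retroactivity was worth litigating to the Seventh Circuit.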

Ninth Circuit Revives Target Thread Count Class Action

On April 17, 2026, the Ninth Circuit reversed a district court's dismissal of a putative class action alleging Target sold 100% cotton bedsheets with fraudulent thread counts. Plaintiff Alexander Panelli claimed he purchased sheets labeled 800-thread-count in September 2023 that tested at only 288 threads per inch. He asserted the label was literally false under California consumer protection law, since 600 thread count is the physical maximum for pure cotton. The district court had dismissed the case, reasoning no reasonable consumer would believe an impossible claim. Target argued the thread count measurement itself was ambiguous and therefore not deceptive as a matter of law.

Capital One’s recent $425M settlement could mean money in your pocket this summer

A federal judge in the Eastern District of Virginia approved a $425 million class action settlement against Capital One on April 20, 2026, resolving claims that the bank deceptively marketed its legacy 360 Savings accounts while paying substantially lower interest rates than its newer 360 Performance Savings product launched in 2019. Eligible account holders—those who maintained a 360 Savings account from September 18, 2019, through June 16, 2025—will receive automatic restitution calculated based on lost interest earnings. Payments, distributed via check or electronic transfer, are expected around July 21, 2026, after deduction of legal fees.
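The settlement summary says restitution is "calculated based on lost interest earnings." The actual claims formula is not described here, but the underlying arithmetic is the gap between what a legacy account paid and what the same balance would have earned at the newer account's rate. The sketch below uses invented rates, balance, and period purely for illustration.

```python
# Illustrative sketch of a lost-interest calculation of the kind the
# settlement contemplates: the difference between interest actually paid
# on a legacy account and what the same balance would have earned at the
# newer account's rate. Rates, balance, and period are invented; the
# real settlement formula is not public in this summary.

def lost_interest(balance: float, legacy_apy: float,
                  performance_apy: float, years: float) -> float:
    """Simple-interest approximation of foregone earnings."""
    return balance * (performance_apy - legacy_apy) * years

# Hypothetical: $10,000 held for 3 years at 0.30% APY vs 4.00% APY
print(round(lost_interest(10_000, 0.003, 0.04, 3), 2))  # 1110.0
```

A real distribution would compound interest and track daily balances, but the simple-interest version shows why multi-year legacy account holders stand to receive meaningful payments.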

Seven Families Sue OpenAI Over Suspect's ChatGPT Use in 2025 FSU Shooting

Seven families of victims from a 2025 Florida State University mass shooting have filed lawsuits against OpenAI, claiming the company negligently failed to alert law enforcement about the suspect's extensive ChatGPT interactions. The suits allege that Phoenix Ikner, the accused gunman now awaiting trial, maintained constant communication with the chatbot and may have received guidance on executing the attack. The families are pursuing negligence claims, arguing OpenAI breached its duty of care by failing to flag foreseeable harm despite the chatbot's design and the nature of the interactions.

Surge in "Junk Fee" Class Actions Targets Hidden Pricing Practices

The Federal Trade Commission's Rule on Unfair or Deceptive Fees took effect on May 12, 2025, requiring companies to disclose total prices upfront for live-event tickets and short-term lodging, including all mandatory fees. The rule has accelerated an already-steep rise in junk fee litigation across ticketing, hospitality, banking, and rental industries. Class actions and mass arbitrations alleging "drip pricing"—the practice of hiding or misrepresenting fees until late in transactions—have spiked since 2022, with potential exposures exceeding $10 million per case. California's SB 478, effective July 1, 2024, compounds liability by imposing penalties up to $2,500 per violation. Plaintiffs' firms are pursuing coordinated mass arbitrations against ticket sellers, banks, landlords, and online retailers, often bypassing class-action waivers through arbitration clauses.

Workers File 7 Class-Action Lawsuits Against Mercor Over Data Breach Exposure

Mercor, a $10 billion San Francisco AI startup that supplies training data to OpenAI, Anthropic, and Meta, is defending itself against at least seven class-action lawsuits filed in recent weeks. The suits stem from a data breach last month that exposed contractor information including recorded job interviews, facial biometric data, computer screenshots, and background checks. Plaintiffs allege Mercor violated federal privacy regulations by collecting extensive data through monitoring software like Insightful, sharing it with AI partners, and using interviews and proprietary materials to train models without adequate consent or disclosure.

Meloni Posts AI-Generated Nude to Warn of Deepfake Danger

On May 5, 2026, Italian Prime Minister Giorgia Meloni reposted an AI-generated image of herself in lingerie across her social media accounts—deliberately amplifying a fake that had circulated online. Rather than ignore it, she republished the image herself with a warning about synthetic media dangers, joking that the creators had "improved" her appearance. The move was framed as a public service announcement demonstrating how convincingly AI can fabricate imagery.

Unintentional AI Adoption Is Already Inside Your Company. The Only Question Is Whether You Know It.

Unauthorized AI tools have become endemic in corporate environments, with nearly half of all workers admitting to using unapproved platforms like ChatGPT and Claude at work. A 2025 Gartner survey found that 69% of organizations either suspect or have confirmed that employees are using prohibited generative AI tools, while research indicates the figure reaches 98% when accounting for all unsanctioned applications. The problem spans organizational hierarchies: 93% of executives report using unauthorized AI, with 69% of C-suite members and 66% of senior vice presidents unconcerned about the practice. Gen Z employees lead adoption at 85%, and notably, 68% of workers using ChatGPT at work deliberately conceal it from employers.

GrayRobinson Hit with Additional Lawsuits Over 2025 Data Breach

GrayRobinson, P.A., a Florida-based law and lobbying firm, disclosed a cybersecurity breach affecting 65,113 individuals. Unauthorized actors accessed the firm's network between March 5 and March 24, 2025, potentially exposing names, Social Security numbers, dates of birth, driver's licenses, financial account information, and protected health information. The firm detected the intrusion on March 24, secured its systems, notified law enforcement, and engaged external cybersecurity experts. The forensic investigation concluded April 13, 2026. Notifications to affected individuals began April 24, 2026, with regulatory reports filed to state attorneys general including California and Maine. GrayRobinson offered complimentary Experian IdentityWorks credit monitoring and reported no evidence of actual misuse.

If you see this iCloud message on your iPhone, don’t click it—it’s a scam

A widespread phishing campaign is targeting Apple users globally with fraudulent emails and text messages impersonating iCloud notifications. The scams warn recipients that their cloud storage is full and direct them to click links to upgrade or manage their accounts. Those links lead to convincing fake websites designed to harvest Apple ID credentials, credit card information, and other sensitive data—sometimes triggering malware downloads. Apple has confirmed it sends legitimate storage alerts only through device settings and official system notifications, never through unsolicited emails or texts requesting passwords or payment information.

GrayRobinson Faces Class Action Over 2025 Data Breach Negligence

GrayRobinson, P.A., a Florida-based law firm, disclosed a data breach affecting 65,113 individuals between March 5 and March 24, 2025. Unauthorized actors accessed the firm's network during that period, potentially exposing names, Social Security numbers, and other sensitive personal information. The firm detected the intrusion on March 24, secured its systems, notified law enforcement, and retained third-party investigators. A forensic review completed in April 2026 confirmed the exposure, and GrayRobinson sent breach notices on April 24, 2026. The firm is offering two years of free identity monitoring through Experian. No evidence of actual misuse has emerged.

Washington Gov. Ferguson Signs HB 2225 Requiring AI Companion Chatbot Disclosures

Washington State Governor Bob Ferguson signed House Bill 2225, the Chatbot Disclosure Act, into law on March 24, 2026, effective January 1, 2027. The statute requires operators of "companion" AI chatbots—systems designed to simulate human responses and sustain ongoing user relationships—to disclose at the outset of interactions, and every three hours thereafter (hourly for minors), that the bot is artificially generated. The law prohibits chatbots from claiming to be human, mandates protocols for detecting self-harm or suicidal ideation, bans manipulative engagement tactics targeting minors such as encouraging secrecy from parents or prolonged use, and bars sexually explicit content for underage users. Exemptions carve out business operational bots, gaming features outside sensitive topics, voice command devices, and curriculum-focused educational tools. Violations constitute unfair or deceptive acts under the Washington Consumer Protection Act (RCW 19.86), enforceable by the Attorney General and through a private right of action allowing consumers to recover actual damages, trebled up to $25,000.
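For operators building compliance logic, the statute's disclosure cadence reduces to a simple timer rule. The interval values come from the summary above; the session model and function below are invented for illustration and are not an official compliance tool.

```python
# Sketch of the disclosure cadence HB 2225 describes: a reminder at the
# start of a session, then every three hours (hourly for minors). The
# three-hour/one-hour intervals come from the bill summary; everything
# else here is an invented illustration.
from datetime import timedelta

def disclosure_due(elapsed: timedelta, last_disclosure: timedelta,
                   is_minor: bool) -> bool:
    """True when an 'I am an AI' disclosure must be (re)shown, given
    time elapsed in the session and when the last disclosure appeared."""
    interval = timedelta(hours=1) if is_minor else timedelta(hours=3)
    return elapsed - last_disclosure >= interval

# Two hours since the session-start disclosure:
print(disclosure_due(timedelta(hours=2), timedelta(0), False))  # False
print(disclosure_due(timedelta(hours=2), timedelta(0), True))   # True
```

The minor-status branch is the operationally hard part: it presumes the operator has an age signal, which the statute's parental-protection provisions also assume.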

Crickle Daisy Loungewear Faces TCPA Quiet Hours Class Action Lawsuit

Crickle Daisy, a loungewear company, faces a class action lawsuit alleging violations of the Telephone Consumer Protection Act's quiet hours provision. The plaintiff claims the company sent marketing texts outside the permitted window of 8:00 a.m. to 9:00 p.m. in recipients' local time zones, violating 47 U.S.C. § 227(c). The suit seeks damages on behalf of a nationwide class of consumers who received such messages.
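The quiet-hours rule at issue is mechanical: a marketing text is only permitted between 8:00 a.m. and 9:00 p.m. in the recipient's local time zone. The sketch below illustrates that window check; the function name is invented, and determining the recipient's actual time zone (the practical sticking point in these suits) is assumed to be solved upstream.

```python
# Minimal check of the TCPA quiet-hours window described above:
# marketing texts are only permitted 8:00 a.m.-9:00 p.m. in the
# recipient's local time zone. The recipient's zone is assumed known
# and passed in directly; resolving it from an area code is the part
# plaintiffs argue senders get wrong.
from datetime import datetime, time
from zoneinfo import ZoneInfo

def within_permitted_window(send_utc: datetime, recipient_tz: str) -> bool:
    """True if a message sent at send_utc lands inside 8 a.m.-9 p.m. local."""
    local = send_utc.astimezone(ZoneInfo(recipient_tz))
    return time(8, 0) <= local.time() < time(21, 0)

# 01:00 UTC on May 1 is 8:00 p.m. the prior evening in Chicago (CDT):
ts = datetime(2026, 5, 1, 1, 0, tzinfo=ZoneInfo("UTC"))
print(within_permitted_window(ts, "America/Chicago"))   # True
print(within_permitted_window(ts, "America/New_York"))  # False (9:00 p.m.)
```

The one-hour spread between the two results shows why a single send time can be lawful for some recipients and actionable for others in the same campaign.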

College Student Sues Meete Dating App for Repurposing Her TikTok Video in Ads

A University of Tennessee nursing student has sued Meete, a dating app operated by British Virgin Islands–based Quantum Communications, alleging the company stole her public TikTok graduation video and weaponized it for targeted advertising. Elena Lunglhofer claims Meete overlaid the video with app graphics, added a synthetic voiceover in which she appeared to solicit men for casual encounters, and used geotargeting to serve the ad on Snapchat to users near her campus, including residents of her dormitory. She discovered the misuse when a male student showed her screenshots of the ad. Attorney Abe Pafford filed suit on April 28, 2026, in Tennessee state court, asserting claims for misappropriation of likeness, right of publicity violations, and emotional distress.

DFPI Wins First CCFPL Administrative Ruling Against Unlicensed Debt Collector

The California Department of Financial Protection and Innovation announced its first administrative enforcement win under the state's consumer financial protection regime. An administrative law judge upheld a desist and refrain order against a debt collection and credit repair company operating without a California debt collection license, requiring the firm to cease violations, rescind consumer agreements, issue refunds, and pay $150,000. The violations spanned the Rosenthal Fair Debt Collection Practices Act, the Debt Collection Licensing Act, and the federal Fair Debt Collection Practices Act, centered on deceptive payday loan debt tactics.

CT AG Tong Issues Feb. 25 Memo Applying Existing Laws to AI

Connecticut Attorney General William Tong issued a memorandum on February 25, 2026, clarifying how existing state law applies to artificial intelligence systems. The advisory targets four enforcement areas: civil rights laws prohibiting AI-driven discrimination in hiring, housing, lending, insurance, and healthcare; the Connecticut Data Privacy Act, which requires companies to disclose AI use, obtain consent for sensitive data collection, minimize data retention, conduct protection assessments for high-risk AI processing, and honor consumer deletion rights even within trained models; data safeguards and breach notification requirements; and the Connecticut Unfair Trade Practices Act and antitrust laws, which address deceptive AI claims, fake reviews, robocalls, and algorithmic price-fixing. The memorandum applies broadly to any business deploying AI in consequential decisions and specifically references harms including AI-generated nonconsensual imagery on platforms like xAI's Grok.

AI-Powered Wire Fraud Surges as Deepfakes and Social Engineering Overwhelm Traditional Defenses

AI-powered fraud has emerged as the dominant financial crime threat in 2026, with cybercriminals using deepfake technology and generative AI to impersonate executives and trusted contacts in wire transfer schemes. Business email compromise attacks have surged 1,760% since generative AI became widely available, and a single deepfake video call cost engineering firm Arup $25.6 million. These attacks are particularly dangerous because the victims themselves authorize the transfers through genuinely authenticated sessions, so security controls register as fully operational and detection becomes extraordinarily difficult.

Clio Report: 71% of Small Law Firms Use AI, But Revenue Growth Lags Larger Competitors

Clio's 2026 Legal Trends report exposes a widening performance gap between small law firms and their larger competitors despite widespread AI adoption. While 71% of solo practitioners and 75% of small firms now use AI tools, fewer than 33% have increased revenues—a sharp contrast to enterprise firms where nearly 60% report revenue growth tied to AI implementation.

2nd Cir. Vacates GEICO Win in NY No-Fault Kickback Case

On March 10, 2026, the U.S. Court of Appeals for the Second Circuit vacated a district court victory for GEICO in a dispute over no-fault auto insurance reimbursements. The panel reversed summary judgment against Igor Mayzenberg and his three acupuncture clinics, holding that a healthcare provider's violation of New York anti-kickback laws does not automatically disqualify them from no-fault reimbursement eligibility under state regulation 11 N.Y.C.R.R. § 65-3.16(a)(12). GEICO had sued to recover millions in payments to Mayzenberg's clinics, alleging kickbacks paid for patient referrals constituted licensing violations that enabled fraud and triggered RICO liability. The Eastern District of New York had granted GEICO summary judgment in 2022, but the Second Circuit panel reversed on the eligibility interpretation and certified the core legal question to the New York Court of Appeals in October 2025.

LawSnap Briefing Updated May 11, 2026

State of play.

  • State enforcement is the dominant vector. The Florida AG has launched a formal investigation into OpenAI and ChatGPT citing national security concerns, and California's Privacy Protection Agency has opened rulemaking on CCPA employee data obligations — both moving through existing statutory authority without waiting for federal action (→ Florida AG Investigates OpenAI, ChatGPT, Citing National Security Risks, FSU Shooting, CalPrivacy Seeks Comments on CCPA Employee Data Notices by May 20).
  • Biometric and health data from consumer tech products are the sharpest compliance edge. Omnibus state privacy laws in California, Connecticut, Indiana, Kentucky, Rhode Island, Washington, and Nevada now classify facial-mapping, body-scan, and wearable health data as sensitive personal information, with state AGs actively investigating tracking practices in the fashion and beauty sectors (→ Fashion, Beauty, Wearable Brands Face Stricter 2026 Privacy Rules).
  • Shadow AI inside the enterprise is a live data-breach and regulatory exposure. A 2025 Gartner survey found 69% of organizations have confirmed or suspect prohibited generative AI tool use; a third of employees admit sharing enterprise research or datasets through unsanctioned platforms (→ Unintentional AI Adoption Is Already Inside Your Company. The Only Question Is Whether You Know It.).
  • Standing doctrine is tightening in federal privacy litigation. The Southern District of Florida dismissed a DPPA class action with prejudice for lack of concrete injury, signaling that data-misuse alone — without tangible financial harm — will not clear Article III in at least some circuits (→ Florida court tosses DPPA parking citation lawsuit over lack of injury).
  • For counsel advising technology companies, consumer brands, or employers, the practical baseline is a multi-front exposure: state AG enforcement through existing law, an accelerating patchwork of sector-specific biometric and health-data rules, and an internal AI-governance gap that creates breach and regulatory risk before any incident occurs.

Active questions and open splits.

  • How far does concrete-injury standing doctrine extend in federal privacy suits? The S.D. Florida DPPA dismissal requires tangible harm beyond data misuse, while the Maryland Carfax case is surviving; the apparent split turns on the defendant's data-commercialization model, and no circuit has resolved the broader question of when statutory privacy violations alone satisfy Article III (→ Florida court tosses DPPA parking citation lawsuit over lack of injury).
  • Will federal preemption displace state AI and synthetic-performer consent regimes? The December 2025 White House EO seeks federal harmonization of conflicting state AI laws; New York and California have enacted consent mandates that may collide with any federal preemption framework — the interaction is unresolved before New York's June 19 effective date (→ New York Enacts AI Digital Replica Laws for Fashion Models Effective June 2026).
  • What constitutes an adequate CCPA employee privacy notice? The CalPrivacy Agency's rulemaking is examining whether current rules require employment-specific revisions; until final rules issue, employers face uncertainty about what notice architecture satisfies the statute (→ CalPrivacy Seeks Comments on CCPA Employee Data Notices by May 20).
  • Where is the line between lawful dynamic pricing and actionable surveillance pricing? Regulators are drawing a distinction between market-condition-based pricing and consumer-data-driven individualized pricing, but no court has defined the boundary; companies using revenue management algorithms face simultaneous FTC investigation and multi-state legislative exposure (→ FTC and Congress intensify surveillance pricing crackdown amid state legislative wave).
  • What governance framework satisfies the duty to prevent shadow AI data exposure? No regulator has issued guidance on what internal controls are required; HIPAA, financial-services, and state privacy regulators could each assert jurisdiction over breaches originating from unsanctioned employee AI use, and the allocation of liability between employer and tool provider is untested (→ Unintentional AI Adoption Is Already Inside Your Company. The Only Question Is Whether You Know It.).
  • How will courts allocate liability for algorithmic harms across the data supply chain? Early litigation is establishing precedents on data ownership, AI procurement obligations, and corporate accountability for algorithmic bias and worker surveillance — the rules are being written in real time, with no settled framework (→ Data as Value – and Risk: Litigation Issues Facing Technology Providers and Their Customers).

What to watch.

  • CalPrivacy Agency final rules on CCPA employee data notices — whatever issues from this rulemaking will become the compliance floor for all California employers and a template other states will reference.
  • New York Fashion Workers Act and synthetic performer disclosure law enforcement posture after the June 19, 2026 effective date — first enforcement actions will define what "explicit consent" and "clear disclaimer" require in practice.
  • EU AI Act labeling requirements effective August 2026 — the penalty structure (up to €15 million) will drive multinational compliance decisions that affect U.S. operations.
  • FTC Section 6(b) surveillance pricing study output and any resulting rulemaking — the agency's framing of the dynamic-pricing versus consumer-data-pricing distinction will set the enforcement standard nationally.
  • Whether additional state AGs follow Florida's template of investigating AI companies through existing consumer protection and national security authority — the Florida OpenAI probe is the leading indicator of a broader enforcement pattern.
  • Resolution of the DPPA circuit split on concrete injury — if the Maryland Carfax case produces a ruling inconsistent with the S.D. Florida dismissal, a circuit conflict on statutory privacy standing becomes a cert-worthy question.
