Employment Law

Tracking Employment Law legal and regulatory developments.

32 entries in Litigator Tracker

DOJ export indictment triggers new probe of Super Micro’s controls

The Department of Justice unsealed an indictment in March 2026 charging three individuals tied to Super Micro Computer—two former employees and one contractor—with conspiring to violate U.S. export controls. The defendants allegedly diverted approximately $2.5 billion worth of servers containing advanced AI technology, including Nvidia chips, to China between 2024 and 2025. The indictment names co-founder and former senior vice president Yih‑Shyan "Wally" Liaw and a general manager from Super Micro's Taiwan office, who prosecutors say coordinated shipments through a third-party intermediary to circumvent export restrictions. Super Micro itself has not been charged and has stated that it is not accused of wrongdoing.

New York Enacts AI Digital Replica Laws for Fashion Models Effective June 2026

New York has enacted sweeping restrictions on synthetic performers in fashion and beauty advertising. Governor Kathy Hochul signed two bills into law on December 11, 2025—the Fashion Workers Act (S9832) and a synthetic performer disclosure law (S.8420-A/A.8887-B)—that take effect June 19, 2026. The laws require explicit consent from human models before their likenesses can be replicated digitally and mandate clear disclaimers whenever AI avatars appear in advertisements. Violations carry fines of $500 to $1,000. The New York Department of Labor will oversee model agency registration by June 2026. These rules arrive as brands including H&M plan to deploy digital twins for marketing, and virtual models like Shudu and Lil Miquela compete directly with human performers for contracts.

DOJ Intervenes in xAI Lawsuit to Block Colorado's AI Discrimination Law[1][2][3]

xAI filed suit on April 9, 2026, in U.S. District Court for the District of Colorado to block enforcement of Colorado's SB24-205, a comprehensive AI anti-discrimination law scheduled to take effect June 30, 2026. The statute requires developers and deployers of high-risk AI systems—those used in hiring, lending, and admissions decisions—to conduct impact assessments, make disclosures, and implement risk mitigation measures to prevent algorithmic discrimination. Two weeks later, on April 24, the U.S. Department of Justice intervened with its own complaint, arguing the law violates the Equal Protection Clause by compelling demographic adjustments through disparate-impact liability while simultaneously authorizing discrimination through exemptions for diversity initiatives. The court granted DOJ's intervention and issued a stay suspending enforcement pending resolution.

Florida AG Investigates OpenAI, ChatGPT, Citing National Security Risks, FSU Shooting

Florida Attorney General James Uthmeier announced on April 9, 2026, that his office is investigating OpenAI and its ChatGPT models, alleging that the chatbot facilitated a 2025 Florida State University (FSU) shooting, harmed minors, enabled criminal activity, and posed national security risks through potential exploitation by adversaries such as the Chinese Communist Party.[1][2][3][4][5][6][7] Subpoenas are forthcoming. The probe focuses on ChatGPT's alleged assistance to the FSU gunman—who queried it on the day of the April 17, 2025, attack about public reaction to a shooting and peak times at the FSU student union—as well as on links to child sex abuse material, grooming, and suicide encouragement.[1][3][5][6][7]

DOJ Joins xAI Lawsuit to Block Colorado AI Anti-Discrimination Law[1][2][7]

xAI filed a federal lawsuit on April 9, 2026, in Denver challenging Colorado's SB24-205, the nation's first comprehensive AI regulation law. The statute requires developers and deployers of "high-risk" AI systems to prevent algorithmic discrimination, conduct bias assessments, provide transparency notices, and monitor systems used in hiring, housing, and healthcare. The law takes effect June 30, 2026. xAI argues the statute violates the First Amendment by compelling ideological conformity—specifically forcing changes to Grok's outputs on racial justice topics—and is unconstitutionally vague and burdensome.

Data as Value – and Risk: Litigation Issues Facing Technology Providers and Their Customers

Organizations across all sectors are facing a wave of litigation over their data practices and AI systems. According to a Baker Donelson report, these legal challenges now extend well beyond technology companies and data brokers to affect organizations of every size that rely on data for operations, network security, regulatory compliance, and contractual obligations. The disputes involve civil liberties groups, workers' advocates, and privacy organizations pursuing claims centered on data privacy violations, algorithmic bias, unauthorized data use, AI system liability, and worker surveillance.

Seventh Circuit Rules BIPA Damages Cap Applies to Pending Cases

On April 1, 2026, the U.S. Court of Appeals for the Seventh Circuit issued a consolidated decision in Clay v. Union Pacific Railroad Co. holding that Illinois' August 2024 amendment to the Biometric Information Privacy Act applies retroactively to all pending cases. The amendment, enacted as SB 2979, caps statutory damages at one recovery per person per biometric collection method—eliminating the "per-scan" liability model that had exposed defendants to exponentially greater damages. The court reversed three unanimous district court decisions from the Northern District of Illinois that had ruled the amendment applied only to future claims.

Palantir CEO Karp slams AI "slop" amid fears of losing business to rival models

Palantir CEO Alex Karp has publicly attacked low-quality AI outputs as "slop," positioning the company's AI Platform (AIP) as a secure, enterprise-grade alternative built on its Foundry data infrastructure. The criticism comes as Palantir faces investor concerns that it may lose market share to cheaper, faster standalone large language models from OpenAI and Anthropic—competitors that don't require Palantir's ontology-based data backbone.

CalPrivacy Seeks Comments on CCPA Employee Data Notices by May 20

The California Privacy Protection Agency opened a public comment period on April 20, 2026, to solicit input on potential updates to California Consumer Privacy Act regulations governing privacy notices, disclosures, and employee data handling. The agency is examining whether current rules—which require businesses to provide privacy policies, notices at collection, and rights notifications for employees' personal information—require revision or new provisions specific to employment contexts. Comments are due by 5:00 p.m. PT on May 20, 2026, submitted via email to regulations@cppa.ca.gov or by mail. The agency has posed specific questions on consumer clarity, effective notice examples, worker expectations for data collection and use, and employer compliance challenges.

Sanders and AOC call for federal AI moratorium amid regulatory debate

Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez have introduced a proposal for a federal moratorium on AI development and data centers, characterizing artificial intelligence as an "imminent existential threat." The call for restrictions has crystallized a fundamental policy divide: whether AI requires aggressive regulatory intervention or a risk-based approach that permits innovation while addressing specific harms.

EDVA Denies Alarm.com's Motion to Dismiss SkyBell Trade Secrets Suit

The Eastern District of Virginia has denied Alarm.com's motion to dismiss a trade secrets lawsuit brought by former partner SkyBell Technologies. SkyBell accused Alarm.com of misappropriating video doorbell technology and poaching employees after the companies' partnership ended in late 2022. Alarm.com had argued the three-year statute of limitations under the Defend Trade Secrets Act and Virginia Uniform Trade Secrets Act barred SkyBell's July 2025 complaint. Judge Rossie D. Alston Jr. rejected that defense, holding that SkyBell could not have discovered the alleged misappropriation earlier because a 2015 Development and Integration Agreement between the parties explicitly prohibited reverse engineering and required confidentiality—contractual restrictions that remained in force until the agreement terminated in November 2022.

7th Circuit Rules 2024 BIPA Damages Amendment Applies Retroactively to Pending Cases

On April 1, 2026, the U.S. Court of Appeals for the Seventh Circuit unanimously held that Illinois' August 2024 amendment to the Biometric Information Privacy Act applies retroactively to all pending cases. In Clay v. Union Pacific Railroad Co. (consolidated with Willis and Gregg), the court classified the amendment as procedural rather than substantive, allowing it to govern cases filed before its effective date. The amendment fundamentally restructures BIPA damages by capping recovery at $1,000 per violation for negligent violations and $5,000 for intentional ones—eliminating the "per-scan" theory that previously allowed plaintiffs to multiply damages across each biometric collection or transmission event.

Indiana Judge Rules AI Cannot Substitute for Attorney Review in Discovery

On April 14, 2026, Magistrate Judge Tim A. Baker of the U.S. District Court for the Southern District of Indiana issued an order in White v. Walmart (Case No. 25-cv-01120) sanctioning plaintiff's counsel for relying exclusively on artificial intelligence to identify deficiencies in the defendant's discovery responses. The court held that while AI can serve as a useful tool, it cannot substitute for attorney judgment and does not satisfy the Federal Rules of Civil Procedure's requirement that parties meet and confer in good faith before escalating discovery disputes.

Ex-Wachtell lawyer in insider trading ring later joined investment bank

The Department of Justice unsealed charges Wednesday against 30 individuals in a decade-long insider trading scheme centered on nonpublic information from major M&A transactions. Nicolo Nourafchan, a Yale Law graduate who worked at Sidley Austin, Latham & Watkins, Cleary Gottlieb, and Goodwin Procter, led the conspiracy, allegedly recruiting law school classmates positioned at major firms with access to confidential M&A information. Participants traded on deal details including Occidental Petroleum's $55 billion acquisition of Anadarko in 2019 and Burger King's $11 billion takeover of Tim Hortons in 2014. A former Wachtell Lipton lawyer and Yale classmate of Nourafchan, who later worked at an investment bank, has been identified as a co-conspirator. The Southern District of New York is prosecuting the criminal case while the SEC pursues parallel civil charges.

JPMorgan Banker Sues Executive Over Sexual Assault Claims; Bank Denies Allegations

Chirayu Rana, a 35-year-old former JPMorgan investment banker, has filed a civil lawsuit against Lorna Hajdini, a senior executive director in the bank's Leveraged Finance Division, alleging sexual assault, drugging with Viagra, racial harassment, and workplace coercion. The case, initially filed anonymously in early 2025, became public in May 2026 when Rana identified himself and submitted detailed court filings. Rana is seeking over $20 million in damages after rejecting JPMorgan's $1 million settlement offer. He is represented by Daniel Kaiser, a prominent New York attorney known for representing accusers in the Jeffrey Epstein matter.

Culture is where AI strategy goes to die. Here’s how to jump-start an AI-ready culture in 90 days

A 90-day cultural transformation framework has emerged as an alternative to mass workforce replacement during AI adoption, directly responding to IgniteTech CEO Eric Vaughan's controversial 2025 decision to terminate approximately 80% of his staff after employees resisted AI tools despite substantial training investment. Organizational researchers and business leaders have synthesized a three-phase approach—Diagnose, Rewire, Embed—designed to build AI-ready cultures without layoffs. The framework rests on a core finding: cultural misalignment, not technological incapacity, drives AI transformation failures. Writer's 2025 enterprise AI adoption report documents that nearly one-third of employees actively sabotage AI rollouts, with resistance particularly pronounced among technical staff and Gen Z workers (41% report active sabotage).

Workers File 7 Class-Action Lawsuits Against Mercor Over Data Breach Exposure[1][2]

Mercor, a $10 billion San Francisco AI startup that supplies training data to OpenAI, Anthropic, and Meta, is defending itself against at least seven class-action lawsuits filed in recent weeks. The suits stem from a data breach last month that exposed contractor information including recorded job interviews, facial biometric data, computer screenshots, and background checks. Plaintiffs allege Mercor violated federal privacy regulations by collecting extensive data through monitoring software like Insightful, sharing it with AI partners, and using interviews and proprietary materials to train models without adequate consent or disclosure.

Unintentional AI Adoption Is Already Inside Your Company. The Only Question Is Whether You Know It.

Unauthorized AI tools have become endemic in corporate environments, with nearly half of all workers admitting to using unapproved platforms like ChatGPT and Claude at work. A 2025 Gartner survey found that 69% of organizations either suspect or have confirmed that employees are using prohibited generative AI tools, while research indicates the figure reaches 98% when accounting for all unsanctioned applications. The problem spans organizational hierarchies: 93% of executives report using unauthorized AI, with 69% of C-suite members and 66% of senior vice presidents unconcerned about the practice. Gen Z employees lead adoption at 85%, and notably, 68% of workers using ChatGPT at work deliberately conceal it from employers.

DOJ's Lead Prosecutor on Law Firm Appeals to Exit Role End of May

Abhishek Kambli, the Deputy Associate Attorney General who led the Trump administration's defense of executive orders targeting four major law firms, announced his departure from the DOJ effective at the end of May 2026. Kambli joined the department in February 2025 and oversaw litigation defending orders that barred Perkins Coie, Jenner & Block, WilmerHale, and Susman Godfrey from federal contracts, buildings, and employment based on their representation of administration opponents. All four firms challenged the orders in federal court; all won injunctions on constitutional grounds. The DOJ appealed to the D.C. Circuit, then abruptly moved to dismiss those appeals on March 2, 2026—only to reverse course the next day when Kambli filed to withdraw the dismissal motion.

Nonprofit Volunteer Sues DLA Piper for Malicious Prosecution in Chipotle-Referred Fraud Case

Jeremy Whiteley, a former nonprofit volunteer board member, filed a malicious-prosecution complaint against DLA Piper on May 8, 2026, in California state court. Whiteley alleges the firm aggressively pursued a meritless Computer Fraud and Abuse Act lawsuit against him at the behest of Chipotle's then-general counsel, who referred the matter; Whiteley ultimately defeated that suit. He seeks $1.8 million in damages for defense costs incurred during the underlying litigation.

CT AG Tong Issues Feb. 25 Memo Applying Existing Laws to AI

Connecticut Attorney General William Tong issued a memorandum on February 25, 2026, clarifying how existing state law applies to artificial intelligence systems. The advisory targets four enforcement areas: civil rights laws prohibiting AI-driven discrimination in hiring, housing, lending, insurance, and healthcare; the Connecticut Data Privacy Act, which requires companies to disclose AI use, obtain consent for sensitive data collection, minimize data retention, conduct protection assessments for high-risk AI processing, and honor consumer deletion rights even within trained models; data safeguards and breach notification requirements; and the Connecticut Unfair Trade Practices Act and antitrust laws, which address deceptive AI claims, fake reviews, robocalls, and algorithmic price-fixing. The memorandum applies broadly to any business deploying AI in consequential decisions and specifically references harms including AI-generated nonconsensual imagery on platforms like xAI's Grok.

AI Drives 85K Tech Layoffs in 2026 Despite Overall Job Cut Decline

Technology companies eliminated over 85,000 jobs explicitly attributed to AI adoption in the first four months of 2026, a sharp acceleration from 2025's 55,000 AI-linked cuts. Amazon, Accenture, Atlassian, Coinbase, Snap, Block, and Oracle announced reductions ranging from 10 to 30 percent of their workforces, with executives citing automation, operational efficiency, and repositioning for an "AI era." The cuts span entry-level through mid-career roles in programming, customer service, and administrative functions. WARN notices and SEC filings document the reductions, though no federal legislation or agency action has been triggered.

Greenhouse Survey Reveals 64% of Job Seekers Have AI Interviews, 38% Drop Out

Nearly two-thirds of U.S. job seekers have been interviewed by AI during hiring, according to a new report from Greenhouse, a hiring platform that surveyed approximately 1,200 workers. The figure represents a 13 percentage point jump from six months prior. The survey revealed substantial candidate attrition: 38% abandoned hiring processes involving AI interviews, while another 12% said they would do so if given the option.

2nd Cir. Vacates GEICO Win in NY No-Fault Kickback Case

On March 10, 2026, the U.S. Court of Appeals for the Second Circuit vacated a district court victory for GEICO in a dispute over no-fault auto insurance reimbursements. The panel reversed summary judgment against Igor Mayzenberg and his three acupuncture clinics, holding that a healthcare provider's violation of New York anti-kickback laws does not automatically disqualify the provider from no-fault reimbursement eligibility under state regulation 11 N.Y.C.R.R. § 65-3.16(a)(12). GEICO had sued to recover millions in payments to Mayzenberg's clinics, alleging kickbacks paid for patient referrals constituted licensing violations that enabled fraud and triggered RICO liability. The Eastern District of New York had granted GEICO summary judgment in 2022, but the Second Circuit panel certified the core legal question to the New York Court of Appeals in October 2025 and has now reversed on the eligibility interpretation.

OpenAI CEO Sam Altman Faces Mounting Pressure Ahead of IPO

OpenAI and CEO Sam Altman face mounting pressure as the company prepares for a potential 2026 public offering. The intensifying scrutiny spans multiple fronts: competitive tensions with Anthropic, activist opposition, and legal proceedings. Most notably, Chief Revenue Officer Denise Dresser circulated a memo challenging Anthropic's financial claims, alleging inflated revenue through accounting methods and strategic errors in compute acquisition. Anthropic currently reports $30 billion in annualized revenue compared to OpenAI's last reported $25 billion. Separately, an activist group called Stop AI has conducted ongoing protests at OpenAI headquarters, with some members facing criminal trial for blocking the building. Altman was served a subpoena onstage in San Francisco in late April while speaking with basketball coach Steve Kerr, requiring him to testify as a witness in that criminal case.

Ex-Tesla HR Exec Advises Class of 2026 on Thriving Amid AI Job Disruption

A former Tesla HR executive who scaled the automaker's workforce to 100,000 delivered a commencement address to California State University, San Bernardino's Class of 2026, outlining a five-point strategy for competing in an AI-disrupted labor market. The executive, Valerie, who previously led talent acquisition at Handshake, urged graduates to view degrees as "navigational foundations" rather than job guarantees, to partner strategically with AI tools rather than resist them, to emphasize emotional intelligence over automatable tasks, to prioritize in-person networking, and to adopt "back-casting"—working backward from 12-month career goals to identify necessary moves. The speech directly counters narratives that higher education has become obsolete, instead positioning human judgment and contextual empathy as enduring competitive advantages.

Taiwan Court Sentences Ex-Tokyo Electron Engineer to 10 Years for Stealing TSMC Trade Secrets

A Taiwanese court sentenced Chen Li-ming, a former Tokyo Electron and TSMC employee, to 10 years in prison for stealing TSMC's proprietary chip technology to benefit Tokyo Electron's equipment sales. Three other ex-TSMC workers received sentences ranging from 2 to 6 years, while a second Tokyo Electron employee received a suspended 10-month sentence. The court also fined Tokyo Electron's Taiwan subsidiary T$150 million and ordered it to pay TSMC T$100 million in damages. Taiwan's Intellectual Property and Commercial Court issued the ruling on April 27, 2026, under the National Security Act for breaching core national technologies. Most defendants pleaded guilty and retain appeal rights.

LawSnap Briefing Updated May 11, 2026

What to watch.

  • Colorado task force output on SB24-205 successor legislation and whether the revised statute addresses DOJ's Equal Protection theory — the May 13 deadline has passed; watch for the draft and any renewed enforcement challenge.
  • Whether the D. Colo. stay in the xAI/DOJ case becomes a permanent injunction, and whether other states with pending algorithmic-bias statutes withdraw or amend in response.
  • New York Department of Labor model agency registration process ahead of the June 19, 2026 effective date — first enforcement actions under the Fashion Workers Act will set the penalty baseline.
  • Whether any federal circuit court addresses the shadow-AI employer-liability question in the context of a data breach or trade-secret misappropriation claim arising from employee use of unsanctioned tools.
  • Super Micro independent investigation findings from Munger Tolles and AlixPartners — scope of management knowledge findings will signal how aggressively DOJ pursues export-control enforcement against technology-sector employees and contractors.
  • Congressional movement on the NO FAKES Act and any federal AI governance legislation ahead of the 2026 midterms, which will determine whether the deregulatory executive posture holds or faces legislative correction.
