Employment Law

Tracking legal and regulatory developments in Employment Law.

13 entries in Tech Counsel Tracker

New York Enacts AI Digital Replica Laws for Fashion Models Effective June 2026

New York has enacted sweeping restrictions on synthetic performers in fashion and beauty advertising. Governor Kathy Hochul signed two bills into law on December 11, 2025—the Fashion Workers Act (S9832) and synthetic performer disclosure laws (S.8420-A/A.8887-B)—that take effect June 19, 2026. The laws require explicit consent from human models before their likenesses can be replicated digitally and mandate clear disclaimers whenever AI avatars appear in advertisements. Violations carry fines of $500 to $1,000. The New York Department of Labor will oversee model agency registration by June 2026. These rules arrive as brands including H&M plan to deploy digital twins for marketing, and virtual models like Shudu and Lil Miquela compete directly with human performers for contracts.

Florida AG Investigates OpenAI and ChatGPT, Citing National Security Risks and FSU Shooting

Florida Attorney General James Uthmeier announced on April 9, 2026, that his office is opening an investigation into OpenAI and its ChatGPT models. The probe alleges that the chatbot played a role in facilitating a 2025 Florida State University (FSU) shooting, harmed minors, enabled criminal activity, and poses national security risks through potential exploitation by adversaries such as the Chinese Communist Party.[1][2][3][4][5][6][7] Subpoenas are forthcoming. Investigators are focusing on ChatGPT's alleged assistance to the FSU gunman—who queried it on the day of the April 17, 2025, attack about public reaction to a shooting and about peak times at the FSU student union—as well as on alleged links to child sexual abuse material, grooming, and suicide encouragement.[1][3][5][6][7]

DOJ Intervenes in xAI Lawsuit to Block Colorado's AI Discrimination Law

xAI filed suit on April 9, 2026, in U.S. District Court for the District of Colorado to block enforcement of Colorado's SB24-205, a comprehensive AI anti-discrimination law scheduled to take effect June 30, 2026. The statute requires developers and deployers of high-risk AI systems—those used in hiring, lending, and admissions decisions—to conduct impact assessments, make disclosures, and implement risk mitigation measures to prevent algorithmic discrimination. Two weeks later, on April 24, the U.S. Department of Justice intervened with its own complaint, arguing the law violates the Equal Protection Clause by compelling demographic adjustments through disparate-impact liability while simultaneously authorizing discrimination through exemptions for diversity initiatives. The court granted DOJ's intervention and issued a stay suspending enforcement pending resolution.

Palantir CEO Karp slams AI "slop" amid fears of losing business to rival models

Palantir CEO Alex Karp has publicly attacked low-quality AI outputs as "slop," positioning the company's AI Platform (AIP) as a secure, enterprise-grade alternative built on its Foundry data infrastructure. The criticism comes as Palantir faces investor concerns that it may lose market share to cheaper, faster standalone large language models from OpenAI and Anthropic—competitors that don't require Palantir's ontology-based data backbone.

DOJ export indictment triggers new probe of Super Micro’s controls

The Department of Justice unsealed an indictment in March 2026 charging three individuals tied to Super Micro Computer—two former employees and one contractor—with conspiring to violate U.S. export controls. The defendants allegedly diverted approximately $2.5 billion worth of servers containing advanced AI technology, including Nvidia chips, to China between 2024 and 2025. The indictment names co-founder and former senior vice president Yih‑Shyan "Wally" Liaw and a general manager from Super Micro's Taiwan office, who prosecutors say coordinated shipments through a third-party intermediary to circumvent export restrictions. Super Micro itself is not charged and has stated it was not accused of wrongdoing.

Data as Value – and Risk: Litigation Issues Facing Technology Providers and Their Customers

Organizations across all sectors are facing a wave of litigation over their data practices and AI systems. According to a Baker Donelson report, these legal challenges now extend well beyond technology companies and data brokers to affect organizations of every size that rely on data for operations, network security, regulatory compliance, and contractual obligations. The disputes involve civil liberties groups, workers' advocates, and privacy organizations pursuing claims centered on data privacy violations, algorithmic bias, unauthorized data use, AI system liability, and worker surveillance.

DOJ Joins xAI Lawsuit to Block Colorado AI Anti-Discrimination Law

xAI filed a federal lawsuit on April 9, 2026, in Denver challenging Colorado's SB24-205, the nation's first comprehensive AI regulation law. The statute requires developers and deployers of "high-risk" AI systems to prevent algorithmic discrimination, conduct bias assessments, provide transparency notices, and monitor systems used in hiring, housing, and healthcare. The law takes effect June 30, 2026. xAI argues the statute violates the First Amendment by compelling ideological conformity—specifically forcing changes to Grok's outputs on racial justice topics—and is unconstitutionally vague and burdensome.

OpenAI CEO Sam Altman Faces Mounting Pressure Ahead of IPO

OpenAI and CEO Sam Altman face mounting pressure as the company prepares for a potential 2026 public offering. The intensifying scrutiny spans multiple fronts: internal competitive tensions with Anthropic, activist opposition, and legal proceedings. Most notably, Chief Revenue Officer Denise Dresser circulated a memo challenging Anthropic's financial claims, alleging inflated revenue through accounting methods and strategic errors in compute acquisition. Anthropic currently reports $30 billion in annualized revenue compared to OpenAI's last reported $25 billion. Separately, an activist group called Stop AI has conducted ongoing protests at OpenAI headquarters, with some members facing criminal trial for blocking the building. Altman was served a subpoena onstage in San Francisco in late April while speaking with basketball coach Steve Kerr, requiring him to testify as a witness in the criminal case.

Sanders and AOC call for federal AI moratorium amid regulatory debate

Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez have introduced a proposal for a federal moratorium on AI development and data centers, characterizing artificial intelligence as an "imminent existential threat." The call for restrictions has crystallized a fundamental policy divide: whether AI requires aggressive regulatory intervention or a risk-based approach that permits innovation while addressing specific harms.

Unintentional AI Adoption Is Already Inside Your Company. The Only Question Is Whether You Know It.

Unauthorized AI tools have become endemic in corporate environments, with nearly half of all workers admitting to using unapproved platforms like ChatGPT and Claude at work. A 2025 Gartner survey found that 69% of organizations either suspect or have confirmed that employees are using prohibited generative AI tools, while research indicates the figure reaches 98% when accounting for all unsanctioned applications. The problem spans organizational hierarchies: 93% of executives report using unauthorized AI, with 69% of C-suite members and 66% of senior vice presidents unconcerned about the practice. Gen Z employees lead adoption at 85%, and notably, 68% of workers using ChatGPT at work deliberately conceal it from employers.

Microsoft report: AI power users outperform others in productivity gains

Microsoft released its 2026 Work Trend Index today, surveying 20,000 knowledge workers to assess how AI adoption affects workplace productivity. The report finds that 66% of users spend more time on high-value tasks since deploying AI, while 58% produce work previously impossible without it. Among "frontier professionals"—Microsoft's term for advanced AI users—adoption rates climb to 80%, with documented examples including vulnerability detection in software and accelerated sales preparation. The report emphasizes capability expansion rather than pure automation, a distinction Microsoft executives Katy George and Jared Spataro stress as a shift from tactical execution to strategic delegation of AI-assisted work.

LawSnap Briefing (updated May 11, 2026)

What to watch.

  • Colorado task force output on SB24-205 successor legislation and whether the revised statute addresses DOJ's Equal Protection theory — the May 13 deadline has passed; watch for the draft and any renewed enforcement challenge.
  • Whether the D. Colo. stay in the xAI/DOJ case becomes a permanent injunction, and whether other states with pending algorithmic-bias statutes withdraw or amend in response.
  • New York Department of Labor model agency registration process ahead of the June 19, 2026 effective date — first enforcement actions under the Fashion Workers Act will set the penalty baseline.
  • Whether any federal circuit court addresses the shadow-AI employer-liability question in the context of a data breach or trade-secret misappropriation claim arising from employee use of unsanctioned tools.
  • Super Micro independent investigation findings from Munger Tolles and AlixPartners — scope of management knowledge findings will signal how aggressively DOJ pursues export-control enforcement against technology-sector employees and contractors.
  • Congressional movement on the NO FAKES Act and any federal AI governance legislation ahead of the 2026 midterms, which will determine whether the deregulatory executive posture holds or faces legislative correction.
