Artificial Intelligence

Tracking Artificial Intelligence legal and regulatory developments.

66 entries in Litigator Tracker

DOJ export indictment triggers new probe of Super Micro’s controls

The Department of Justice unsealed an indictment in March 2026 charging three individuals tied to Super Micro Computer—two former employees and one contractor—with conspiring to violate U.S. export controls. The defendants allegedly diverted approximately $2.5 billion worth of servers containing advanced AI technology, including Nvidia chips, to China between 2024 and 2025. The indictment names co-founder and former senior vice president Yih‑Shyan "Wally" Liaw and a general manager from Super Micro's Taiwan office, who prosecutors say coordinated shipments through a third-party intermediary to circumvent export restrictions. Super Micro itself is not charged and has stated it was not accused of wrongdoing.

New York Enacts AI Digital Replica Laws for Fashion Models Effective June 2026

New York has enacted sweeping restrictions on synthetic performers in fashion and beauty advertising. Governor Kathy Hochul signed two bills into law on December 11, 2025—the Fashion Workers Act (S9832) and a synthetic performer disclosure law (S8420-A/A8887-B)—that take effect June 19, 2026. The laws require explicit consent from human models before their likenesses can be replicated digitally and mandate clear disclaimers whenever AI avatars appear in advertisements. Violations carry fines of $500 to $1,000. The New York Department of Labor will oversee model agency registration by June 2026. These rules arrive as brands including H&M plan to deploy digital twins for marketing, and as virtual models like Shudu and Lil Miquela compete directly with human performers for contracts.

DOJ Intervenes in xAI Lawsuit to Block Colorado's AI Discrimination Law

xAI filed suit on April 9, 2026, in U.S. District Court for the District of Colorado to block enforcement of Colorado's SB24-205, a comprehensive AI anti-discrimination law scheduled to take effect June 30, 2026. The statute requires developers and deployers of high-risk AI systems—those used in hiring, lending, and admissions decisions—to conduct impact assessments, make disclosures, and implement risk mitigation measures to prevent algorithmic discrimination. Two weeks later, on April 24, the U.S. Department of Justice intervened with its own complaint, arguing the law violates the Equal Protection Clause by compelling demographic adjustments through disparate-impact liability while simultaneously authorizing discrimination through exemptions for diversity initiatives. The court granted DOJ's intervention and issued a stay suspending enforcement pending resolution.

Fashion, Beauty, Wearable Brands Face Stricter 2026 Privacy Rules

Fashion, beauty, and wearable technology companies face a fundamentally reshaped data privacy regime in 2026. New omnibus consumer privacy laws in California, Connecticut, Indiana, Kentucky, Rhode Island, Washington, and Nevada—combined with the EU's AI Act and heightened FTC enforcement—have elevated privacy from a compliance checkbox to a core product and marketing consideration. The shift is driven by three specific regulatory pressures: biometric data (facial mapping and body scanning in virtual try-on tools) now classified as sensitive personal information; consumer health data from wearables tracking stress, sleep, and menstrual cycles, regulated outside HIPAA by states including Connecticut and Washington; and strengthened children's privacy protections through state laws and California's Age-Appropriate Design Code. Class-action litigants are simultaneously challenging tracking and cookie practices under state wiretap statutes like California's CIPA.

Dua Lipa sues Samsung for $15M over unauthorized TV ad image use

Singer Dua Lipa sued Samsung for $15 million on May 8, 2026, in federal court in California, alleging copyright infringement, trademark infringement, right of publicity violations, and false endorsement under state law and the Lanham Act. The dispute centers on a backstage photograph taken at the 2024 Austin City Limits Festival—an image Lipa owns—that Samsung allegedly manipulated and used on television packaging and global marketing materials beginning in early 2025 without permission, payment, or her involvement. Lipa claims the placement implied her endorsement of Samsung products and drove sales.

Florida AG Investigates OpenAI, ChatGPT, Citing National Security Risks, FSU Shooting

Florida Attorney General James Uthmeier announced on April 9, 2026, that his office is launching an investigation into OpenAI and its ChatGPT models, alleging that the technology played a role in facilitating a 2025 Florida State University (FSU) shooting, harmed minors, enabled criminal activity, and poses national security risks through potential exploitation by adversaries such as the Chinese Communist Party. Subpoenas are forthcoming, with probes focusing on ChatGPT's alleged assistance to the FSU gunman—who queried it on the day of the April 17, 2025, attack about public reaction to a shooting and peak times at the FSU student union—plus links to child sex abuse material, grooming, and suicide encouragement.

Brockman's Diary Revealed in Musk-OpenAI Trial First Week

Greg Brockman's personal diary emerged this week as central evidence in Elon Musk's lawsuit against OpenAI, with the co-founder and president testifying about his internal deliberations over converting the organization from nonprofit to for-profit status. The diary directly addresses Musk's core claim that OpenAI deceived him by abandoning its original mission to develop artificial intelligence for humanity's benefit. Testimony also revealed inflammatory communications: text messages in which Musk threatened to make Brockman and CEO Sam Altman "the most hated men in America" if no settlement was reached, and a 2017 meeting where Musk tore a painting from the wall after co-founders rejected his demand for majority equity.

Mississippi and ABA AI Ethics Opinions Criticized for Inadequate Verification Guidance

The Mississippi State Bar adopted formal ethics guidance on generative AI use that permits lawyers to reduce verification requirements when using legal-specific tools, provided they have prior experience with the system. Mississippi Ethics Opinion No. 267, adopted verbatim from ABA Formal Opinion 512 issued in July 2024, establishes baseline principles requiring lawyers to protect client confidentiality, use technology competently, verify outputs, bill reasonably, and obtain informed consent. The opinion's core permission—allowing "less independent verification or review" for familiar tools—has drawn sharp criticism for creating standards that contradict the ABA's own cited research.

Federal Court Halts Colorado AI Law Enforcement Weeks Before June Deadline

A federal magistrate judge in Colorado issued a stay on April 27, 2026, freezing enforcement of the Colorado AI Act (SB24-205) just weeks before its scheduled June 30 effective date. The order prevents the Colorado Attorney General from initiating investigations or enforcement actions under the law, effectively halting one of the country's most comprehensive state AI regulations. Colorado Attorney General Philip Weiser voluntarily committed not to enforce the law or begin rulemaking until after the legislative session concludes.

New Jersey lawyer faces contempt over unpaid AI sanctions in Diddy case

Tyrone Blackburn, the attorney representing Liza Gardner in a sexual assault civil suit against Sean "Diddy" Combs, faces a contempt hearing in New Jersey federal court over unpaid sanctions tied to AI-generated case citations. U.S. District Judge Noel L. Hillman ordered Blackburn to pay $6,000 in December 2025—$500 monthly—after finding that a brief he filed contained a fabricated case opinion produced by an artificial intelligence research tool. The case cited did not exist.

Musk-Altman OpenAI trial opens with statements in Oakland court

Jury selection began April 28 in Elon Musk's lawsuit against OpenAI, Sam Altman, Greg Brockman, and Microsoft in U.S. District Court for the Northern District of California in Oakland. Opening statements occurred April 29. Musk alleges OpenAI breached its 2015 nonprofit founding agreement by converting to a for-profit model in 2019 with Microsoft backing, abandoning its stated mission to develop AI for humanity's benefit. He invested $38–45 million in the company. Musk seeks OpenAI's return to nonprofit status, removal of Altman and Brockman from leadership, and $134–150 billion in damages to be redirected to OpenAI's charitable arm.

Tesla Owners Sue Over Unfulfilled FSD Promises on HW3 Hardware

Tesla faces coordinated class-action litigation across multiple jurisdictions from owners of Hardware 3-equipped vehicles manufactured between 2016 and 2024. The plaintiffs allege that Tesla and Elon Musk made false representations that these vehicles would achieve full self-driving capability through software updates alone. A spring 2026 software release exposed Hardware 3's technical limitations, effectively excluding millions of owners from advanced autonomous features now reserved for newer Hardware 4 systems. The lead case, brought by retired attorney Tom LoSavio, centers on buyers who paid $8,000 to $12,000 for full self-driving capability that is now incompatible with their vehicles without costly hardware retrofits Tesla has not formally offered. Similar suits have been filed in Australia, the Netherlands, and elsewhere in Europe, as well as in California, where one action involves approximately 3,000 plaintiffs. Globally, the disputes affect roughly 4 million vehicles.

Elon Musk Testifies OpenAI Stole Charity by Going For-Profit in Lawsuit

Elon Musk testified April 28 in a California courtroom that OpenAI breached a foundational promise by converting from nonprofit to for-profit status. OpenAI, now valued at $852 billion, made the shift despite Musk's 2017 warning that the company should either remain nonprofit or operate independently. "It is not OK to steal a charity," Musk told the court, referencing email exchanges with Sam Altman in which Altman expressed support for the nonprofit model but acknowledged no legal obligation bound the company to it permanently.

Articles Warn Clients Against Feeding Privileged Docs to Consumer AI

On May 8, 2026, The National Law Review and Varnum LLP published advisory articles warning clients against misusing consumer AI tools in legal matters. The pieces detail a specific risk: uploading privileged documents—draft agreements, legal memos, work product—into platforms like ChatGPT or Claude waives attorney-client privilege by exposing confidential information to third parties with no confidentiality obligations. The articles also caution that AI models tend to validate user assumptions rather than provide objective legal analysis, making them unreliable validators of legal advice.

Colorado’s Impending AI Law Thrown Into More Doubt By Court Ruling: What Will Happen Before June 30 Effective Date?

A federal magistrate judge issued a temporary restraining order on April 27, 2026, blocking Colorado from enforcing its artificial intelligence antidiscrimination law (SB 24-205). The order freezes all state investigations and enforcement actions while litigation proceeds and shields companies from penalties for violations occurring within 14 days after the court rules on a preliminary injunction motion. The law was set to take effect June 30.

DOJ Joins xAI Lawsuit to Block Colorado AI Anti-Discrimination Law

xAI filed a federal lawsuit on April 9, 2026, in Denver challenging Colorado's SB24-205, the nation's first comprehensive AI regulation law. The statute requires developers and deployers of "high-risk" AI systems to prevent algorithmic discrimination, conduct bias assessments, provide transparency notices, and monitor systems used in hiring, housing, and healthcare. The law takes effect June 30, 2026. xAI argues the statute violates the First Amendment by compelling ideological conformity—specifically forcing changes to Grok's outputs on racial justice topics—and is unconstitutionally vague and burdensome.

FTC and Congress intensify surveillance pricing crackdown amid state legislative wave

Federal regulators and lawmakers are moving aggressively against surveillance pricing—the practice of using consumer data to set individualized prices for identical products or services. In April 2026, FTC leadership told Congress that staff work on the issue continues, with the agency considering whether new disclosure requirements should apply to highly personalized, data-driven pricing. That same month, the House Oversight Committee launched a formal investigation, sending letters to major travel and platform companies demanding documentation on revenue management algorithms, consumer data practices, and testing protocols.

Data as Value – and Risk: Litigation Issues Facing Technology Providers and Their Customers

Organizations across all sectors are facing a wave of litigation over their data practices and AI systems. According to a Baker Donelson report, these legal challenges now extend well beyond technology companies and data brokers to affect organizations of every size that rely on data for operations, network security, regulatory compliance, and contractual obligations. The disputes involve civil liberties groups, workers' advocates, and privacy organizations pursuing claims centered on data privacy violations, algorithmic bias, unauthorized data use, AI system liability, and worker surveillance.

LegalPlace Secures €70M; Jurisphere Raises $2.2M for Global Expansion

French legal tech platform LegalPlace closed a €70 million funding round, marking the largest capital raise in recent legal tech activity. The Paris-based business formation platform, which helps entrepreneurs launch companies online, is capitalizing on France's growing legal tech sector. Separately, Jurisphere.ai, an India-based startup founded in 2024 by Manas Khandelwal, Varun Khandelwal, and Sumit Ghosh, secured $2.2 million in seed funding from backers including InfoEdge Ventures, Flourish Ventures, Antler, and 8i Ventures. Jurisphere offers AI-native legal research, drafting, and document review tools built for Indian legal workflows and now serves over 500 teams.

Alston & Bird Publishes April 2026 AI Quarterly Review of Key U.S. Laws and Policies

Congress moved on two fronts in late March to shape AI regulation. On March 26, bipartisan lawmakers introduced H.R. 8094, the AI Foundation Model Transparency Act, requiring developers of large language models to disclose training methods, purposes, risks, evaluation protocols, and monitoring practices. The bill imposes no affirmative regulation—only disclosure obligations. One week earlier, the Trump Administration released its National Policy Framework for Artificial Intelligence, a non-binding document recommending Congress adopt unified federal standards across seven areas: child protection, AI infrastructure, intellectual property, free speech, innovation, workforce development, and preemption of state law. The framework followed Senator Marsha Blackburn's March 18 discussion draft of the Trump America AI Act, which would codify President Trump's December 2025 executive order directing federal preemption of state AI laws.

SDNY Rules AI Tools Waive Privilege in US v. Heppner

A federal judge in Manhattan has ruled that a financial services executive waived attorney-client privilege and work product protection by using Anthropic's Claude AI tool without his lawyers' involvement. In United States v. Heppner, Judge Jed S. Rakoff ordered disclosure of 31 strategy documents the defendant generated after inputting case details derived from attorney communications. The court found that Claude, as a non-attorney third party, lacks fiduciary duties, and that Anthropic's privacy policy—which permits data use for training and third-party sharing—destroyed any reasonable expectation of confidentiality. This marks the first federal decision of its kind, rejecting the defendant's argument that later sharing the materials with counsel could retroactively restore privilege protection.

White House pushes federal AI review standards to eliminate "ideological bias"

The Trump administration has established federal review procedures for artificial intelligence systems across government agencies through an executive order titled "Preventing Woke AI in the Federal Government," issued in July 2025 alongside America's AI Action Plan. The order requires federal agencies to implement "Unbiased AI Principles" for large language models in procurement decisions. The Office of Management and Budget must issue implementing guidance within 90 days, after which agencies have an additional 90 days to revise existing contracts and adopt compliance procedures.

StrongSuit CEO Warns of AI Automation Risks in High-Stakes Litigation

Justin McCallon, CEO of StrongSuit, published commentary on Law360 arguing that AI-driven automation will reshape legal work—but only if it clears a uniquely high bar. Unlike most industries, litigation tolerates near-zero error rates. McCallon positioned StrongSuit's platform, which automates legal research, drafting, and document review, as engineered specifically for this constraint rather than general-purpose AI capability.

Q1 2026 AI Agents Spark IP Debates in Software Development

In the first quarter of 2026, autonomous AI workflow agents including Openclaw demonstrated the ability to generate production-ready software directly from user specifications. The capability triggered immediate debate over intellectual property ownership, developer liability, and the legal framework governing self-generating code.

Palantir CEO Karp slams AI "slop" amid fears of losing business to rival models

Palantir CEO Alex Karp has publicly attacked low-quality AI outputs as "slop," positioning the company's AI Platform (AIP) as a secure, enterprise-grade alternative built on its Foundry data infrastructure. The criticism comes as Palantir faces investor concerns that it may lose market share to cheaper, faster standalone large language models from OpenAI and Anthropic—competitors that don't require Palantir's ontology-based data backbone.

Venable Podcast Examines AI-IP Law Differences in China, UK, US

Venable LLP hosted a special episode of its podcast AI and IP: The Legal Frontier on April 30, 2026, bringing together Justin Pierce (co-chair of Venable's Intellectual Property Division), Jason Yao of China's Wanhuida law firm, and Toby Bond of UK-based Bird & Bird to examine how artificial intelligence is fracturing intellectual property law across jurisdictions. The discussion centered on three distinct regulatory approaches: China's willingness to protect AI-generated outputs when meaningful human input is present; the UK and EU's insistence on human authorship and originality; and the US framework built on human contribution and fair use doctrine.

Federal Court Rules AI Chatbot Communications Not Protected by Attorney-Client Privilege

On February 17, 2026, Judge Jed S. Rakoff of the U.S. District Court for the Southern District of New York ruled in United States v. Heppner that a criminal defendant's communications with Anthropic's Claude AI platform were not protected by attorney-client privilege or work product doctrine. The defendant had used the public chatbot to create analysis documents after receiving a grand jury subpoena, then claimed privilege when sharing them with counsel. The court ordered disclosure to the government.

Judge Fines Lindell Lawyer $5K for 2nd False Case Citation

U.S. District Judge Nina Y. Wang sanctioned attorney Christopher Kachouroff and his law firm $5,000 on May 8, 2026, for submitting a defamation brief with a materially incorrect citation while defending MyPillow CEO Mike Lindell. The error was obvious and reflected a failure to reasonably review the document before filing, Wang ruled, rejecting Kachouroff's human-error explanation. Lindell, his media company, and co-counsel Jennifer T. DeMaster escaped penalty on this sanction, though DeMaster faced consequences in an earlier ruling.

Sanders and AOC call for federal AI moratorium amid regulatory debate

Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez have introduced a proposal for a federal moratorium on AI development and data centers, characterizing artificial intelligence as an "imminent existential threat." The call for restrictions has crystallized a fundamental policy divide: whether AI requires aggressive regulatory intervention or a risk-based approach that permits innovation while addressing specific harms.

Law Firms Urged to Educate Staff on AI Amid Client Pressures

Law firms are hemorrhaging money on artificial intelligence tools they don't understand and won't use, according to analysis published May 4, 2026, in Above the Law and Tech Law Crossroads. Firms facing client pressure to deploy AI are panic-buying software without first establishing internal competency—resulting in wasted spending, abandoned platforms, and disappointed clients. The core problem: decision-makers lack basic literacy on how AI actually works, what it can and cannot do, and which tools fit specific practice needs. The recommended fix is straightforward: mandatory education on AI fundamentals for lawyers, firm leadership, and business development staff before any vendor selection or client pitch.

Indiana Judge Rules AI Cannot Substitute for Attorney Review in Discovery

On April 14, 2026, Magistrate Judge Tim A. Baker of the U.S. District Court for the Southern District of Indiana issued an order in White v. Walmart (Case No. 25-cv-01120) sanctioning plaintiff's counsel for relying exclusively on artificial intelligence to identify deficiencies in the defendant's discovery responses. The court held that while AI can serve as a useful tool, it cannot substitute for attorney judgment and does not satisfy the Federal Rules of Civil Procedure's requirement that parties meet and confer in good faith before escalating discovery disputes.

Oregon Appellate Court Sanctions Lawyer with $10K Fine for AI-Hallucinated Brief Citations

The Oregon Court of Appeals has sanctioned Salem attorney William Ghiorso with a $10,000 fine for submitting an opening brief containing at least 15 fabricated case citations and 9 nonexistent quotations. The court attributed the errors to AI "hallucinations"—instances in which a generative AI tool produced convincing but false legal information. The penalty marks the first time an Oregon appellate court has considered attorney fees as a sanction alternative to fines, though it ultimately imposed the monetary penalty after Ghiorso implemented new safeguards.

Seven Families Sue OpenAI Over Suspect's ChatGPT Use in 2025 FSU Shooting

Seven families of victims from a 2025 Florida State University mass shooting have filed lawsuits against OpenAI, claiming the company negligently failed to alert law enforcement about the suspect's extensive ChatGPT interactions. The suits allege that Phoenix Ikner, the accused gunman now awaiting trial, maintained constant communication with the chatbot and may have received guidance on executing the attack. The families are pursuing negligence claims, arguing OpenAI breached its duty of care by failing to flag foreseeable harm despite the chatbot's design and the nature of the interactions.

China's SPP Releases First Bilingual 2025 IP Prosecution White Paper

China's Supreme People's Procuratorate released its first bilingual White Paper on Intellectual Property Prosecution Work on April 21, 2026, documenting enforcement activity across criminal, civil, administrative, and public interest litigation. The SPP reported accepting or reviewing 11,341 criminal IP infringement cases involving 25,160 individuals in 2025, prosecuting 9,135 cases with 19,102 defendants while declining to prosecute 5,105. The agency also handled 1,251 civil IP cases, 1,795 administrative cases, and 612 public interest cases. Simultaneously, the SPP issued 10 model cases in emerging sectors including chip manufacturing, photovoltaics, and industrial software, along with an annual report on IP crimes.

From Human-in-the-Loop to Human-at-the-Helm: Navigating the Ethics of Agentic AI

The legal profession is shifting from reactive oversight of AI systems to proactive governance designed for autonomous tools. As artificial intelligence has evolved from generative systems that produce text on demand to agentic systems capable of independent action—sending emails, populating filings, modifying records—the traditional model of lawyers reviewing AI output after completion has become inadequate. Legal ethics experts are now calling for "human-at-the-helm" governance that establishes parameters and controls what AI is permitted to do before it acts, rather than inspecting results afterward.

AI Disrupts Law Firm Billable Hour Model, Boosting Efficiency

Legal AI tools are reshaping law firm economics. Document review, drafting, and research are now 60–70% faster, with individual attorneys expected to save 190–240 billable hours annually. Thomson Reuters' 2025 Future of Professionals Report quantifies this as $20–32 billion in time savings across the U.S. market. Major clients—Meta, Zscaler, UBS—are already demanding "AI discounts" and refusing to pay for work automatable by machine. The pressure is immediate and client-driven.
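The headline dollar figure can be sanity-checked with simple arithmetic. Below is a minimal back-of-the-envelope sketch in Python; the attorney headcount and the dollar value assigned to each recovered hour are illustrative assumptions, not figures from the Thomson Reuters report.

```python
# Back-of-the-envelope reconstruction of the $20-32B time-savings estimate.
# ASSUMPTIONS (not from the report): roughly 1.3M licensed U.S. attorneys,
# and a blended value of $80-$120 for each hour of recovered time.
US_ATTORNEYS = 1_300_000          # assumed headcount
HOURS_SAVED = (190, 240)          # per-attorney annual savings, per the report
HOUR_VALUE = (80, 120)            # assumed $/hour value of recovered time

low = US_ATTORNEYS * HOURS_SAVED[0] * HOUR_VALUE[0]
high = US_ATTORNEYS * HOURS_SAVED[1] * HOUR_VALUE[1]
print(f"Estimated annual savings: ${low / 1e9:.0f}B to ${high / 1e9:.0f}B")
# -> roughly $20B to $37B, on the same order as the report's $20-32B range
```

Under those assumptions the arithmetic lands in the report's ballpark, which is the point: the market-wide figure is driven almost entirely by the per-attorney hours estimate.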

USPTO Launches AI Image Search Tool for Trademark Clearance

The U.S. Patent and Trademark Office launched a beta AI-powered image search tool in April 2026 that lets users upload images to retrieve visually similar marks from the federal register. Accessed through a camera icon on the trademark search system, the tool functions like reverse image search—users log into their USPTO.gov account, upload an image or link, and receive results showing marks with related design elements. The USPTO announced the tool alongside other AI enhancements, including a mark description generator and the Trademark Classification Agentic Codification Tool (Class ACT), which automates backend classification work that previously took months.
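Mechanically, reverse image search of this kind is typically built on embedding similarity: each registered mark image is converted once into a fixed-length vector, and a query image is ranked against that index. The sketch below illustrates only the general technique; it is not the USPTO's implementation, and the embedding function is a crude stand-in for a real vision model.

```python
# Illustrative embedding-based reverse image search (NOT USPTO code):
# embed every registered mark image once, then rank marks by cosine
# similarity to the uploaded query image.
import numpy as np

def embed(image_pixels: np.ndarray, dim: int = 512) -> np.ndarray:
    """Crude stand-in for a real image-embedding model (e.g., a CNN or ViT):
    flatten to a fixed-length vector and L2-normalize."""
    flat = np.resize(image_pixels.astype(np.float32).ravel(), dim)
    return flat / (np.linalg.norm(flat) + 1e-9)

def search(query_image: np.ndarray, index: dict[str, np.ndarray], k: int = 5):
    """Return the k registered marks most similar to the query image."""
    q = embed(query_image)
    scores = {serial: float(q @ vec) for serial, vec in index.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

# Build the index once, offline:
# index = {serial_number: embed(mark_image) for serial_number, mark_image in marks}
```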

Culture is where AI strategy goes to die. Here’s how to jump-start an AI-ready culture in 90 days

A 90-day cultural transformation framework has emerged as an alternative to mass workforce replacement during AI adoption, directly responding to IgniteTech CEO Eric Vaughan's controversial 2025 decision to terminate approximately 80% of his staff after employees resisted AI tools despite substantial training investment. Organizational researchers and business leaders have synthesized a three-phase approach—Diagnose, Rewire, Embed—designed to build AI-ready cultures without layoffs. The framework rests on a core finding: cultural misalignment, not technological incapacity, drives AI transformation failures. Writer's 2025 enterprise AI adoption report documents that nearly one-third of employees actively sabotage AI rollouts, with resistance particularly pronounced among technical staff and Gen Z workers (41% report active sabotage).

Luke Littler Seeks UK Trade Mark Registration for His Face

In early March 2026, darts World Champion Luke Littler filed an application with the UK Intellectual Property Office to register his face as a trademark across multiple product and service categories, including computer games, video games, and dartboard lights. The filing reflects a broader shift among high-profile individuals seeking facial trademark protection against unauthorized use and generative AI replication.

Trump Admin Releases National AI Framework on March 20, 2026

On March 20, 2026, the Trump administration released the "National Policy Framework for Artificial Intelligence: Legislative Recommendations," a detailed statutory blueprint that would establish uniform federal AI policy and preempt most state regulations. The Framework, mandated by a December 2025 executive order, proposes that Congress delegate AI development oversight to existing sector-specific agencies rather than create a new federal regulator. It would allow states limited authority only in narrow areas: child safety, fraud prevention, zoning, and government procurement. The administration has tasked the Department of Justice with challenging state AI laws through a dedicated task force, while the Department of Commerce will evaluate state regulations deemed "onerous," and the Federal Trade Commission will enforce preemption policies on deceptive practices.

Workers File 7 Class-Action Lawsuits Against Mercor Over Data Breach Exposure

Mercor, a $10 billion San Francisco AI startup that supplies training data to OpenAI, Anthropic, and Meta, is defending itself against at least seven class-action lawsuits filed in recent weeks. The suits stem from a data breach last month that exposed contractor information including recorded job interviews, facial biometric data, computer screenshots, and background checks. Plaintiffs allege Mercor violated federal privacy regulations by collecting extensive data through monitoring software like Insightful, sharing it with AI partners, and using interviews and proprietary materials to train models without adequate consent or disclosure.

Meloni Posts AI-Generated Nude to Warn of Deepfake Danger

On May 5, 2026, Italian Prime Minister Giorgia Meloni reposted an AI-generated image of herself in lingerie across her social media accounts—deliberately amplifying a fake that had circulated online. Rather than ignore it, she republished the image herself with a warning about synthetic media dangers, joking that the creators had "improved" her appearance. The move was framed as a public service announcement demonstrating how convincingly AI can fabricate imagery.

Unintentional AI Adoption Is Already Inside Your Company. The Only Question Is Whether You Know It.

Unauthorized AI tools have become endemic in corporate environments, with nearly half of all workers admitting to using unapproved platforms like ChatGPT and Claude at work. A 2025 Gartner survey found that 69% of organizations either suspect or have confirmed that employees are using prohibited generative AI tools, while research indicates the figure reaches 98% when accounting for all unsanctioned applications. The problem spans organizational hierarchies: 93% of executives report using unauthorized AI, with 69% of C-suite members and 66% of senior vice presidents unconcerned about the practice. Gen Z employees lead adoption at 85%, and notably, 68% of workers using ChatGPT at work deliberately conceal it from employers.

GrayRobinson Faces Class Action Over 2025 Data Breach Negligence

GrayRobinson, P.A., a Florida-based law firm, disclosed a data breach affecting 65,113 individuals between March 5 and March 24, 2025. Unauthorized actors accessed the firm's network during that period, potentially exposing names, Social Security numbers, and other sensitive personal information. The firm detected the intrusion on March 24, secured its systems, notified law enforcement, and retained third-party investigators. A forensic review completed in April 2026 confirmed the exposure, and GrayRobinson sent breach notices on April 24, 2026. The firm is offering two years of free identity monitoring through Experian. No evidence of actual misuse has emerged.

Article Shares Tips for Collaborating with Counterparties on AI in Contract Talks

A National Law Review contributor published practical guidance on April 28, 2026, for managing AI-assisted contract negotiations with counterparties. The article recommends four core strategies: asking counterparties directly whether they are using AI tools, providing detailed context to improve AI-generated outputs, anticipating how AI systems will respond to specific proposals, and reframing negotiations around shared objectives rather than adversarial positioning. The piece reflects a market shift toward AI-powered contract platforms—including tools from Clio, Ironclad, Bind, and GC.ai—that automate redlining, clause comparison, and deviation tracking. These systems have reduced contract review cycles from 30–90 minutes per round to seconds, with firms reporting 30–50 percent faster negotiations overall.

Washington Gov. Ferguson Signs HB 2225 Requiring AI Companion Chatbot Disclosures

Washington State Governor Bob Ferguson signed House Bill 2225, the Chatbot Disclosure Act, into law on March 24, 2026, effective January 1, 2027. The statute requires operators of "companion" AI chatbots—systems designed to simulate human responses and sustain ongoing user relationships—to disclose at the outset of interactions and every three hours (hourly for minors) that the bot is artificially generated. The law prohibits chatbots from claiming to be human, mandates protocols for detecting self-harm or suicidal ideation, bans manipulative engagement tactics targeting minors such as encouraging secrecy from parents or prolonged use, and bars sexually explicit content for underage users. Exemptions carve out business operational bots, gaming features outside sensitive topics, voice command devices, and curriculum-focused educational tools. Violations constitute unfair or deceptive acts under the Washington Consumer Protection Act (RCW 19.86), enforceable by the Attorney General and through a private right of action allowing consumers to recover actual damages, with treble damages capped at $25,000.
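For teams building compliance tooling, the cadence described above (disclosure at the outset, then every three hours, or hourly for minors) reduces to a simple timer check. Here is a minimal sketch under those stated intervals; the names are illustrative, and the statute itself should govern any real implementation.

```python
# Minimal sketch of HB 2225's disclosure cadence as summarized above:
# disclose at the start of an interaction, then every 3 hours for adults
# and every hour for minors. Illustrative only; consult the statute.
from datetime import datetime, timedelta

DISCLOSURE_INTERVAL = {"adult": timedelta(hours=3), "minor": timedelta(hours=1)}

def disclosure_due(last_disclosure: datetime | None,
                   now: datetime, user_class: str) -> bool:
    """True when the bot must (re)state that it is artificially generated."""
    if last_disclosure is None:       # outset of the interaction
        return True
    return now - last_disclosure >= DISCLOSURE_INTERVAL[user_class]
```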

Stanford study finds 35% of new websites AI-generated by May 2025

A collaborative study by Stanford University, Imperial College London, and the Internet Archive has quantified the rapid proliferation of AI-generated content online. Analyzing web pages from 2022 through May 2025 using the Wayback Machine and AI-detection methods, researchers found that 35.3% of newly published websites were AI-generated or AI-assisted, with 17.6% fully AI-generated. Stanford AI researcher Jonáš Doležal characterized the speed of this shift as "staggering" in recent interviews.
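The study's pipeline—sample archived snapshots by year, then run an AI-text detector over each page—can be approximated with the Internet Archive's public CDX API. The sketch below uses that real endpoint, but the detector is a stub and the sampling is far cruder than the researchers' methodology.

```python
# Rough approximation of the study's pipeline: pull archived snapshots for
# a given year from the Wayback Machine's CDX API, then score each page
# with an AI-text detector (stubbed out here).
import requests

CDX = "http://web.archive.org/cdx/search/cdx"

def sample_snapshots(domain: str, year: int, limit: int = 20) -> list[str]:
    """Return Wayback URLs for up to `limit` captures of a domain in `year`."""
    params = {"url": f"{domain}/*", "from": str(year), "to": str(year),
              "output": "json", "limit": limit, "filter": "statuscode:200"}
    rows = requests.get(CDX, params=params, timeout=30).json()
    # First row is the header; column 1 is the timestamp, column 2 the URL.
    return [f"https://web.archive.org/web/{r[1]}/{r[2]}" for r in rows[1:]]

def looks_ai_generated(html: str) -> bool:
    """Stub: substitute a real AI-text classifier here."""
    raise NotImplementedError

# urls = sample_snapshots("example.com", 2024)
```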

College Student Sues Meete Dating App for Repurposing Her TikTok Video in Ads

A University of Tennessee nursing student has sued Meete, a dating app operated by British Virgin Islands–based Quantum Communications, alleging the company stole her public TikTok graduation video and weaponized it for targeted advertising. Elena Lunglhofer claims Meete overlaid the video with app graphics, added a synthetic voiceover in which she appeared to solicit men for casual encounters, and used geotargeting to serve the ad on Snapchat to users near her campus, including residents of her dormitory. She discovered the misuse when a male student showed her screenshots of the ad. Attorney Abe Pafford filed suit on April 28, 2026, in Tennessee state court, asserting claims for misappropriation of likeness, right of publicity violations, and emotional distress.

CT AG Tong Issues Feb. 25 Memo Applying Existing Laws to AI

Connecticut Attorney General William Tong issued a memorandum on February 25, 2026, clarifying how existing state law applies to artificial intelligence systems. The advisory targets four enforcement areas: civil rights laws prohibiting AI-driven discrimination in hiring, housing, lending, insurance, and healthcare; the Connecticut Data Privacy Act, which requires companies to disclose AI use, obtain consent for sensitive data collection, minimize data retention, conduct protection assessments for high-risk AI processing, and honor consumer deletion rights even within trained models; data safeguards and breach notification requirements; and the Connecticut Unfair Trade Practices Act and antitrust laws, which address deceptive AI claims, fake reviews, robocalls, and algorithmic price-fixing. The memorandum applies broadly to any business deploying AI in consequential decisions and specifically references harms including AI-generated nonconsensual imagery on platforms like xAI's Grok.

AI-Powered Wire Fraud Surges as Deepfakes and Social Engineering Overwhelm Traditional Defenses

AI-powered fraud has emerged as the dominant financial crime threat in 2026, with cybercriminals using deepfake technology and generative AI to impersonate executives and trusted contacts in wire transfer schemes. Business email compromise attacks have surged 1,760% since generative AI became widely available. A single deepfake video call cost engineering firm Arup $25.6 million. These attacks are particularly dangerous because victims remain genuinely authenticated and security controls register as fully operational, making detection extraordinarily difficult.

Clio Report: 71% of Small Law Firms Use AI, But Revenue Growth Lags Larger Competitors

Clio's 2026 Legal Trends report exposes a widening performance gap between small law firms and their larger competitors despite widespread AI adoption. While 71% of solo practitioners and 75% of small firms now use AI tools, fewer than 33% have increased revenues—a sharp contrast to enterprise firms where nearly 60% report revenue growth tied to AI implementation.

AI Drives 85K Tech Layoffs in 2026 Despite Overall Job Cut Decline

Technology companies eliminated over 85,000 jobs in the first four months of 2026 explicitly attributed to AI adoption, marking a sharp acceleration from 2025's 55,000 AI-linked cuts. Amazon, Accenture, Atlassian, Coinbase, Snap, Block, and Oracle announced reductions ranging from 10 to 30 percent of their workforces, with executives citing automation, operational efficiency, and repositioning for an "AI era." The cuts span entry-level through mid-career roles in programming, customer service, and administrative functions. WARN notices and SEC filings document the reductions, though no federal legislation or agency action has been triggered.

Freshfields Signs Multi-Year AI Partnership with Anthropic for Claude Deployment

Freshfields Bruckhaus Deringer announced a multi-year partnership with Anthropic on April 23, 2026, to deploy Claude AI models to its 5,700 employees across 33 offices. The rollout will occur through Freshfields' proprietary AI platform, with the firm and Anthropic jointly developing legal-specific workflows and agentic tools for contract review, legal research, due diligence, and document drafting. Usage of Claude surged 500% within the first six weeks of deployment. The partnership roadmap includes early access to new Anthropic models and expansion to Anthropic's Cowork agentic platform. Freshfields Lab, led by Partner and Co-Head Gerrit Beckhaus, is driving the collaboration alongside Anthropic's legal and product teams.

Greenhouse Survey Reveals 64% of Job Seekers Have AI Interviews, 38% Drop Out

Nearly two-thirds of U.S. job seekers have been interviewed by AI during hiring, according to a new report from Greenhouse, a hiring platform that surveyed approximately 1,200 workers. The figure represents a 13 percentage point jump from six months prior. The survey revealed substantial candidate attrition: 38% abandoned hiring processes involving AI interviews, while another 12% said they would do so if given the option.

OpenAI CEO Sam Altman Faces Mounting Pressure Ahead of IPO

OpenAI and CEO Sam Altman face mounting pressure as the company prepares for a potential 2026 public offering. The intensifying scrutiny spans multiple fronts: internal competitive tensions with Anthropic, activist opposition, and legal proceedings. Most notably, Chief Revenue Officer Denise Dresser circulated a memo challenging Anthropic's financial claims, alleging inflated revenue through accounting methods and strategic errors in compute acquisition. Anthropic currently reports $30 billion in annualized revenue compared to OpenAI's last reported $25 billion. Separately, an activist group called Stop AI has conducted ongoing protests at OpenAI headquarters, with some members facing criminal trial for blocking the building. Altman was served a subpoena onstage in San Francisco in late April while speaking with basketball coach Steve Kerr, requiring him to testify as a witness in the criminal case.

Ex-Tesla HR Exec Advises Class of 2026 on Thriving Amid AI Job Disruption

A former Tesla HR executive who scaled the automaker's workforce to 100,000 delivered a commencement address to California State University, San Bernardino's Class of 2026 outlining a five-point strategy for competing in an AI-disrupted labor market. Valerie, who previously led talent acquisition at Handshake, urged graduates to view degrees as "navigational foundations" rather than job guarantees, to partner strategically with AI tools rather than resist them, to emphasize emotional intelligence over automatable tasks, to prioritize in-person networking, and to adopt "back-casting"—working backward from 12-month career goals to identify necessary moves. The speech directly counters narratives that higher education has become obsolete, instead positioning human judgment and contextual empathy as enduring competitive advantages.

Taiwan Court Sentences Ex-Tokyo Electron Engineer to 10 Years for Stealing TSMC Trade Secrets

A Taiwanese court sentenced Chen Li-ming, a former Tokyo Electron and TSMC employee, to 10 years in prison for stealing TSMC's proprietary chip technology to benefit Tokyo Electron's equipment sales. Three other ex-TSMC workers received sentences ranging from 2 to 6 years, while a second Tokyo Electron employee received a suspended 10-month sentence. The court also fined Tokyo Electron's Taiwan subsidiary T$150 million and ordered it to pay TSMC T$100 million in damages. Taiwan's Intellectual Property and Commercial Court issued the ruling on April 27, 2026, under the National Security Act for breaching core national technologies. Most defendants pleaded guilty and retain appeal rights.

BakerHostetler Podcast on USPTO's AI Strategy and Guidance Evolution

BakerHostetler released a podcast in April 2026 synthesizing the USPTO's evolving approach to artificial intelligence across patent operations, policy, and practice. The discussion centers on the agency's January 2025 Artificial Intelligence Strategy, which established five pillars: fostering responsible AI innovation, enhancing intellectual property policies, building AI infrastructure, promoting ethical use, and developing workforce expertise. The strategy builds on Executive Order 14110 (October 2023), which directed the USPTO to issue guidance on AI inventorship and patent eligibility. The agency has since revised its inventorship standards to require significant human contribution and bar AI as an independent inventor, and updated patent eligibility determinations under the Alice/Mayo framework in July 2024. Internally, the USPTO deployed SCOUT, a generative AI tool used by over 200 examiners for prior art analysis and cybersecurity tasks.

LawSnap Briefing Updated May 11, 2026

State of play.

  • The Trump DOJ has intervened to block Colorado's SB24-205, the nation's first comprehensive algorithmic discrimination law, joining xAI's federal challenge and securing a stay of enforcement pending resolution — establishing federal preemption as the administration's posture toward state AI regulation (→ DOJ Intervenes in xAI Lawsuit to Block Colorado's AI Discrimination Law, DOJ Joins xAI Lawsuit to Block Colorado AI Anti-Discrimination Law).
  • The Musk v. OpenAI trial is in active testimony, with Greg Brockman's personal diary introduced as evidence against Musk's deception theory — the case will set precedent on fiduciary duties owed to departed board members in AI ventures and on the enforceability of nonprofit founding commitments (→ Brockman's Diary Revealed in Musk-OpenAI Trial First Week).
  • New York's synthetic performer consent laws take effect June 19, 2026, requiring explicit model consent before digital replication and mandatory AI disclaimers in advertising — with California's parallel statutes and a pending federal NO FAKES Act creating a fragmented multi-regime compliance picture (→ New York Enacts AI Digital Replica Laws for Fashion Models Effective June 2026).
  • The Florida AG has opened a formal investigation into OpenAI and ChatGPT, citing national security concerns and a claimed connection to the FSU shooting — the most concrete state enforcement action against an AI developer to date (→ Florida AG Investigates OpenAI, ChatGPT, Citing National Security Risks, FSU Shooting).
  • For counsel advising AI developers, enterprise deployers, or clients with AI-facing workforce exposure, the practical baseline is simultaneous pressure from three directions: federal preemption of state AI regulation, active state AG enforcement through existing authority, and imminent compliance deadlines on synthetic performer and biometric data rules.

Active questions and open splits.

  • Federal preemption vs. state AI regulation authority. The DOJ's Colorado intervention raises unresolved questions about the outer boundary of state power to regulate algorithmic systems — whether Equal Protection, First Amendment, and Commerce Clause theories can invalidate impact-assessment and bias-disclosure mandates, and whether the resulting precedent extends to other state AI laws.
  • Scope of state AG enforcement through existing law. The Florida AG's national-security-plus-mass-casualty theory against OpenAI is untested — if it produces a complaint, it could become a template for AGs in other states to reach AI developers without waiting for AI-specific legislation.
  • Fiduciary duties in AI venture governance. The Musk v. OpenAI trial will test whether departed board members can enforce founding-era commitments, what disclosure obligations attach to nonprofit-to-for-profit conversions, and how courts treat informal founder agreements in rapidly scaling technology companies.
  • Synthetic performer consent and federal preemption collision. New York and California have enacted consent mandates; a White House EO seeks federal preemption of conflicting state AI laws; the NO FAKES Act is pending — the interaction among these regimes is unresolved, leaving brands and agencies operating under simultaneous and potentially inconsistent obligations.
  • Agentic AI malpractice exposure and tiered oversight standards. As firms deploy autonomous systems capable of filing documents and sending communications, no settled professional responsibility standard defines what "human-at-the-helm" governance requires — creating a gap between emerging best-practice frameworks and enforceable ethical rules.
  • Enterprise AI contract renegotiation triggers. The tension between integrated-platform vendors like Palantir and commodity LLM alternatives raises live questions about whether performance, pricing, or governance changes in the AI market constitute material changes justifying contract renegotiation or termination for convenience.
  • Employment liability differentiation between mass-termination and reskilling strategies. Courts have not yet addressed whether an employer's failure to implement structured AI reskilling before resorting to mass layoffs affects WARN Act, wrongful termination, or disparate-impact exposure — but the factual record is accumulating.

What to watch.

  • Whether the Colorado district court makes the SB24-205 enforcement stay permanent, and whether Colorado's successor legislation satisfies DOJ's Equal Protection theory — the outcome will define the federal-state AI regulation boundary for other jurisdictions.
  • The Musk v. OpenAI verdict on breach of contract and fiduciary duty claims — particularly how the court treats Brockman's diary testimony and what standard it applies to founder-era commitments.
  • Whether the Florida AG converts its OpenAI investigation into a formal complaint, and whether other state AGs adopt the national-security framing as an enforcement vehicle.
  • June 19, 2026 compliance deadline for New York's synthetic performer laws — watch for early enforcement actions and whether the DOJ moves to preempt under the December 2025 EO.
  • EU AI Act labeling requirements taking effect August 2026 — brands with cross-border advertising exposure face simultaneous New York, California, and EU obligations with no harmonized compliance framework.
  • Whether enterprise AI adoption resistance — documented in the Writer and KPMG surveys — produces the first wave of employment litigation testing the reskilling-vs.-termination liability distinction.

Subscribe to Artificial Intelligence email updates

Primary sources. No fluff. Straight to your inbox.
