AI Generated Content IP

Tracking AI-generated content IP legal and regulatory developments.

6 entries in Corporate Counsel Tracker

New York Enacts AI Digital Replica Laws for Fashion Models Effective June 2026

New York has enacted sweeping restrictions on synthetic performers in fashion and beauty advertising. Governor Kathy Hochul signed two bills into law on December 11, 2025—the Fashion Workers Act (S9832) and synthetic performer disclosure laws (S.8420-A/A.8887-B)—that take effect June 19, 2026. The laws require explicit consent from human models before their likenesses can be replicated digitally and mandate clear disclaimers whenever AI avatars appear in advertisements. Violations carry fines of $500 to $1,000. The New York Department of Labor will oversee model agency registration by June 2026. These rules arrive as brands including H&M plan to deploy digital twins for marketing, and virtual models like Shudu and Lil Miquela compete directly with human performers for contracts.

Q1 2026 AI Agents Spark IP Debates in Software Development

In the first quarter of 2026, autonomous AI workflow agents including Openclaw demonstrated the ability to generate production-ready software directly from user specifications. The capability triggered immediate debate over intellectual property ownership, developer liability, and the legal framework governing self-generating code.

Dua Lipa sues Samsung for $15M over unauthorized TV ad image use

Singer Dua Lipa sued Samsung for $15 million on May 8, 2026, in federal court in California, alleging copyright infringement, trademark infringement, right of publicity violations, and false endorsement under state law and the Lanham Act. The dispute centers on a backstage photograph taken at the 2024 Austin City Limits Festival—an image Lipa owns—that Samsung allegedly manipulated and used on television packaging and global marketing materials beginning in early 2025 without permission, payment, or her involvement. Lipa claims the placement implied her endorsement of Samsung products and drove sales.

Stanford study finds 35% of new websites AI-generated by May 2025

A collaborative study by Stanford University, Imperial College London, and the Internet Archive has quantified the rapid proliferation of AI-generated content online. Analyzing web pages from 2022 through May 2025 using the Wayback Machine and AI-detection methods, researchers found that 35.3% of newly published websites were AI-generated or AI-assisted, with 17.6% fully AI-generated. Stanford AI researcher Jonáš Doležal characterized the speed of this shift as "staggering" in recent interviews.

Venable Podcast Examines AI-IP Law Differences in China, UK, US

Venable LLP hosted a special episode of its podcast AI and IP: The Legal Frontier on April 30, 2026, bringing together Justin Pierce (co-chair of Venable's Intellectual Property Division), Jason Yao of China's Wanhuida law firm, and Toby Bond of UK-based Bird & Bird to examine how artificial intelligence is fracturing intellectual property law across jurisdictions. The discussion centered on three distinct regulatory approaches: China's willingness to protect AI-generated outputs when meaningful human input is present; the UK and EU's insistence on human authorship and originality; and the US framework built on human contribution and fair use doctrine.

LawSnap Briefing Updated May 6, 2026

State of play.

  • The AI training copyright litigation wave has reached a new escalation point. Five major publishers—Elsevier, Cengage, Hachette, Macmillan, and McGraw Hill—filed a class-action against Meta and CEO Mark Zuckerberg personally in Manhattan federal court, alleging systematic use of pirated repositories including LibGen and Anna's Archive to train Llama, plus deliberate stripping of copyright-management information (→ Five Major Publishers Sue Meta for Using Pirated Books to Train Llama AI).
  • The human authorship floor is now settled at the Supreme Court level. The Court denied certiorari in Thaler v. Perlmutter, leaving intact the D.C. Circuit's ruling that purely autonomous AI outputs receive no copyright protection—while leaving the hybrid human-AI collaboration question open for future litigation.
  • Anthropic is simultaneously a defendant and a rights enforcer. The company argues transformative fair use in active California litigation while its $1.5 billion Bartz class settlement—covering over 100,000 authors—moves toward a fairness hearing; separately, Anthropic issued 8,000+ DMCA takedowns after its own Claude Code source leaked via npm (→ Anthropic argues Claude's copyright use is transformative fair use in CA court).
  • Jurisdictional fragmentation is the defining structural problem. China protects AI outputs with meaningful human input; the UK and EU require human authorship and originality; the US applies human contribution plus fair use doctrine—with no harmonization in sight (→ Venable Podcast Examines AI-IP Law Differences in China, UK, US).
  • For counsel advising AI developers, content owners, or enterprise deployers, the practical baseline is that training data provenance is now a first-order litigation risk, personal liability for executives is being tested, and cross-border IP strategy requires jurisdiction-specific analysis rather than a unified framework.

Where things stand.

  • Human authorship is the settled floor for US copyright. Thaler v. Perlmutter cert denial closes the purely autonomous AI output question; the contested terrain is now the degree of human creative contribution required in hybrid workflows.
  • AI training data is the central copyright battleground. Active suits against Meta (publishers), Anthropic (Bartz settlement + ongoing fair use litigation), and others in the AI copyright tracker frame the same core question: does large-scale ingestion of copyrighted works for model training qualify as fair use (→ Five Major Publishers Sue Meta for Using Pirated Books to Train Llama AI, Anthropic argues Claude's copyright use is transformative fair use in CA court).
  • State-level digital replica and synthetic performer laws are creating immediate compliance obligations. New York's Fashion Workers Act and synthetic performer disclosure laws take effect June 19, 2026, requiring explicit consent and clear disclaimers; California's AB 2602/AB 1836 are already in force; a federal NO FAKES Act remains pending (→ New York Enacts AI Digital Replica Laws for Fashion Models Effective June 2026).
  • The USPTO has expanded protectable IP for AI-adjacent work in two directions. Design patent coverage now extends to projected, holographic, VR, and AR interfaces under March 2026 guidance; separately, the USPTO's pro-AI patent eligibility stance treats AI tools as analogous to laboratory equipment for inventorship purposes.
  • AI-generated materials are generally discoverable and not privileged. A series of federal decisions anchored by United States v. Heppner (S.D.N.Y. 2026) establishes that feeding privileged communications into third-party AI tools waives privilege over both the outputs and the underlying communications.
  • Platforms are building authentication infrastructure to separate human from AI-generated content. Spotify's "Verified by Spotify" badge excludes AI-persona profiles and pairs with Artist Profile Protection to address fraudulent AI-generated releases under established artists' names.
  • Facial trademark registration is emerging as a proactive AI-defense tool. UK IPO filings by Luke Littler and the precedent set by Cole Palmer's successful registration signal that celebrities and public figures are treating facial marks as standard protection against deepfakes and unauthorized AI replication (→ Luke Littler Seeks UK Trade Mark Registration for His Face).
  • Chinese AI video tools are generating cross-border copyright exposure. ByteDance's Seedance 2.0 and Kuaishou's Kling AI 2.0 produce Hollywood-quality video at scale; Hollywood organizations have raised copyright and likeness claims, but no US regulatory response has materialized.
  • AI patent filings are surging, creating a downstream PAE litigation risk. USPTO applications in AI have risen 33 percent since 2018 per WIPO data; foundational patents from failed AI startups are migrating to patent assertion entities that will target successful commercializers.
  • The UK has reversed course on its proposed AI copyright exception. The opt-out model is no longer the operative framework; what replaces it remains unsettled.

Latest developments.

  • Five major publishers file class-action against Meta and Zuckerberg personally in Manhattan federal court, alleging use of LibGen and Anna's Archive to train Llama and deliberate stripping of copyright-management information (→ Five Major Publishers Sue Meta for Using Pirated Books to Train Llama AI).
  • Stanford/Imperial College/Internet Archive study finds 35.3% of newly published websites are AI-generated or AI-assisted, with 17.6% fully AI-generated, confirming semantic contraction and a positivity shift in web content (→ Stanford study finds 35% of new websites AI-generated by May 2025).
  • New York's Fashion Workers Act and synthetic performer disclosure laws signed into law, taking effect June 19, 2026, with consent and disclaimer requirements for digital replicas in fashion and beauty advertising (→ New York Enacts AI Digital Replica Laws for Fashion Models Effective June 2026).
  • Devil Wears Prada 2 incident: human artist Alexis Franklin's manually created meme was widely misidentified as AI-generated, illustrating the reputational stakes of authorship misidentification in entertainment production.
  • Anthropic argues transformative fair use in California federal court, comparing Claude's training to human learning; Bartz v. Anthropic $1.5 billion settlement with 91% author participation moves toward fairness hearing (→ Anthropic argues Claude's copyright use is transformative fair use in CA court).
  • Anthropic issues 8,000+ DMCA takedowns after Claude Code source code leaked via npm.
  • Venable cross-border AI IP analysis documents three incompatible jurisdictional frameworks—China, UK/EU, and US—with no convergence on training data or agentic AI ownership (→ Venable Podcast Examines AI-IP Law Differences in China, UK, US).
  • Spotify launches "Verified by Spotify" badge excluding AI-persona profiles, paired with Artist Profile Protection beta and AI-involvement disclosure features.
  • Supreme Court denies certiorari in Thaler v. Perlmutter, settling the purely autonomous AI authorship question while leaving hybrid scenarios open.
  • USPTO March 2026 guidance extends design patent coverage to projected, holographic, VR, and AR interfaces, with retroactive application to pending applications.
  • Federal court decisions establish that AI-generated materials and prompts are generally discoverable; Heppner holds that feeding privileged communications into third-party AI tools waives privilege.
  • Luke Littler files UK IPO application to register his face as a trademark across gaming and entertainment categories, following Cole Palmer's successful registration in November 2025 (→ Luke Littler Seeks UK Trade Mark Registration for His Face).
  • ByteDance releases Seedance 2.0; Hollywood organizations raise copyright and likeness claims against Chinese AI video tools.
  • HarperCollins proceeds with AI-assisted YouTube series adapted from books despite author concerns.

Active questions and open splits.

  • How much human creative contribution is enough? Thaler settles the pure-AI end; the Copyright Office and courts have not established a clear threshold for hybrid human-AI works, leaving the most commercially significant question unresolved.
  • Does large-scale AI training on copyrighted works qualify as fair use? Anthropic's transformative-use argument in California, the Meta publishers' suit, and the Bartz settlement all turn on this question—and no appellate court has yet ruled on the merits (→ Anthropic argues Claude's copyright use is transformative fair use in CA court, Five Major Publishers Sue Meta for Using Pirated Books to Train Llama AI).
  • Can personal liability attach to executives for AI training decisions? The Meta complaint names Zuckerberg personally for allegedly authorizing use of pirated repositories and abandoning licensing negotiations—a theory that, if sustained, reshapes how AI companies document training data governance (→ Five Major Publishers Sue Meta for Using Pirated Books to Train Llama AI).
  • How will state digital replica laws interact with federal preemption? New York's June 2026 effective date, California's existing statutes, the pending NO FAKES Act, and the White House EO seeking federal preemption of conflicting state AI laws are on a collision course (→ New York Enacts AI Digital Replica Laws for Fashion Models Effective June 2026).
  • What is the privilege and work-product status of AI-assisted legal work? Heppner addresses client-side waiver, but the line between attorney-directed AI use and unprotected third-party platform use remains case-by-case; proposed Rule of Evidence 707 has not been adopted.
  • Who owns AI-generated code and agentic AI outputs? Autonomous coding agents generating production-ready software from specifications create ownership gaps—user, developer, or neither—that no court or regulator has yet resolved (→ Q1 2026 AI Agents Spark IP Debates in Software Development).
  • Can a photorealistic face function as a registered trademark? UK IPO precedent is moving toward yes, but the EU Grand Board of Appeal's pending decision on the Jan Smit case will set the standard for photorealistic facial marks across the EU—with direct implications for deepfake defense strategies globally (→ Luke Littler Seeks UK Trade Mark Registration for His Face).

What to watch.

  • The Bartz v. Anthropic fairness hearing—the settlement's approval or rejection will set the first major damages benchmark for AI copyright disputes and signal how courts will handle mass-author claims against AI developers.
  • Early motions practice in the Meta publishers' suit, particularly whether Zuckerberg's personal liability theory survives a motion to dismiss and what discovery on abandoned licensing negotiations produces.
  • The EU Grand Board of Appeal decision on photorealistic facial trademark standards, which will directly affect deepfake defense strategies for entertainment and sports clients operating in EU markets.
  • Whether the Trump administration's National AI Legislative Framework produces federal preemption language that displaces New York's and California's digital replica statutes before or after their enforcement dates.
  • USPTO public comment period closing May 12, 2026, on the PHVAR design patent guidance—any narrowing of the retroactive application will affect pending AR/VR portfolio strategies.
  • Whether Hollywood's copyright and likeness claims against Chinese AI video tools (Seedance 2.0, Kling AI 2.0) generate US regulatory or legislative responses, or whether the geopolitical dimension keeps them in a litigation-only track.

