Law And Technology

Tracking legal and regulatory developments in law and technology.

65 entries in Litigator Tracker

DOJ export indictment triggers new probe of Super Micro’s controls

The Department of Justice unsealed an indictment in March 2026 charging three individuals tied to Super Micro Computer—two former employees and one contractor—with conspiring to violate U.S. export controls. The defendants allegedly diverted approximately $2.5 billion worth of servers containing advanced AI technology, including Nvidia chips, to China between 2024 and 2025. The indictment names co-founder and former senior vice president Yih‑Shyan "Wally" Liaw and a general manager from Super Micro's Taiwan office, who prosecutors say coordinated shipments through a third-party intermediary to circumvent export restrictions. Super Micro itself is not charged and has stated it was not accused of wrongdoing.

New York Enacts AI Digital Replica Laws for Fashion Models Effective June 2026

New York has enacted sweeping restrictions on synthetic performers in fashion and beauty advertising. Governor Kathy Hochul signed two bills into law on December 11, 2025—the Fashion Workers Act (S9832) and a synthetic performer disclosure law (S.8420-A/A.8887-B)—that take effect June 19, 2026. The laws require explicit consent from human models before their likenesses can be replicated digitally and mandate clear disclaimers whenever AI avatars appear in advertisements. Violations carry fines of $500 to $1,000. The New York Department of Labor will oversee model agency registration by June 2026. These rules arrive as brands including H&M plan to deploy digital twins for marketing, and virtual models like Shudu and Lil Miquela compete directly with human performers for contracts.

DOJ Intervenes in xAI Lawsuit to Block Colorado's AI Discrimination Law

xAI filed suit on April 9, 2026, in U.S. District Court for the District of Colorado to block enforcement of Colorado's SB24-205, a comprehensive AI anti-discrimination law scheduled to take effect June 30, 2026. The statute requires developers and deployers of high-risk AI systems—those used in hiring, lending, and admissions decisions—to conduct impact assessments, make disclosures, and implement risk mitigation measures to prevent algorithmic discrimination. Two weeks later, on April 24, the U.S. Department of Justice intervened with its own complaint, arguing the law violates the Equal Protection Clause by compelling demographic adjustments through disparate-impact liability while simultaneously authorizing discrimination through exemptions for diversity initiatives. The court granted DOJ's intervention and issued a stay suspending enforcement pending resolution.

Dua Lipa sues Samsung for $15M over unauthorized TV ad image use

Singer Dua Lipa sued Samsung for $15 million on May 8, 2026, in federal court in California, alleging copyright infringement, trademark infringement, right of publicity violations, and false endorsement under state law and the Lanham Act. The dispute centers on a backstage photograph taken at the 2024 Austin City Limits Festival—an image Lipa owns—that Samsung allegedly manipulated and used on television packaging and global marketing materials beginning in early 2025 without permission, payment, or her involvement. Lipa claims the placement implied her endorsement of Samsung products and drove sales.

Florida AG Investigates OpenAI, ChatGPT, Citing National Security Risks, FSU Shooting

Florida Attorney General James Uthmeier announced on April 9, 2026, that his office is launching an investigation into OpenAI and its ChatGPT models, alleging their role in facilitating a 2025 Florida State University (FSU) shooting, harming minors, enabling criminal activity, and posing national security risks through potential exploitation by adversaries such as the Chinese Communist Party. Subpoenas are forthcoming. The probe focuses on ChatGPT's alleged assistance to the FSU gunman—who queried it on the day of the April 17, 2025, attack about public reaction to a shooting and peak times at the FSU student union—as well as alleged links to child sexual abuse material, grooming, and suicide encouragement.

Brockman's Diary Revealed in Musk-OpenAI Trial First Week

Greg Brockman's personal diary emerged this week as central evidence in Elon Musk's lawsuit against OpenAI, with the co-founder and president testifying about his internal deliberations over converting the organization from nonprofit to for-profit status. The diary directly addresses Musk's core claim that OpenAI deceived him by abandoning its original mission to develop artificial intelligence for humanity's benefit. Testimony also revealed inflammatory communications: text messages in which Musk threatened to make Brockman and CEO Sam Altman "the most hated men in America" if no settlement was reached, and a 2017 meeting where Musk tore a painting from the wall after cofounders rejected his demand for majority equity.

Mississippi and ABA AI Ethics Opinions Criticized for Inadequate Verification Guidance

The Mississippi State Bar adopted formal ethics guidance on generative AI use that permits lawyers to reduce verification requirements when using legal-specific tools, provided they have prior experience with the system. Mississippi Ethics Opinion No. 267, adopted verbatim from ABA Formal Opinion 512 issued in July 2024, establishes baseline principles requiring lawyers to protect client confidentiality, use technology competently, verify outputs, bill reasonably, and obtain informed consent. The opinion's core permission—allowing "less independent verification or review" for familiar tools—has drawn sharp criticism for creating standards that contradict the ABA's own cited research.

Federal Court Halts Colorado AI Law Enforcement Days Before June Deadline

A federal magistrate judge in Colorado issued a stay on April 27, 2026, freezing enforcement of the Colorado AI Act (SB24-205) just weeks before its scheduled June 30 effective date. The order prevents the Colorado Attorney General from initiating investigations or enforcement actions under the law, effectively halting one of the country's most comprehensive state AI regulations. Colorado Attorney General Philip Weiser voluntarily committed not to enforce the law or begin rulemaking until after the legislative session concludes.

New Jersey lawyer faces contempt over unpaid AI sanctions in Diddy case

Tyrone Blackburn, the attorney representing Liza Gardner in a sexual assault civil suit against Sean "Diddy" Combs, faces a contempt hearing in New Jersey federal court over unpaid sanctions tied to AI-generated case citations. U.S. District Judge Noel L. Hillman ordered Blackburn to pay $6,000 in December 2025—$500 monthly—after finding that a brief he filed contained a fabricated case opinion produced by an artificial intelligence research tool. The case cited did not exist.

Musk-Altman OpenAI trial opens with statements in Oakland court

Jury selection began April 28 in Elon Musk's lawsuit against OpenAI, Sam Altman, Greg Brockman, and Microsoft in U.S. District Court for the Northern District of California in Oakland. Opening statements occurred April 29. Musk alleges OpenAI breached its 2015 nonprofit founding agreement by converting to a for-profit model in 2019 with Microsoft backing, abandoning its stated mission to develop AI for humanity's benefit. Musk invested $38–45 million in the company. He seeks OpenAI's return to nonprofit status, removal of Altman and Brockman from leadership, and $134–150 billion in damages to be redirected to OpenAI's charitable arm.

FedEx v. Qualcomm: Fed Cir Rules PTAB Real-Party-in-Interest Challenges Unreviewable

The Federal Circuit issued a precedential decision on April 29, 2026, in Federal Express Corporation v. Qualcomm Incorporated that significantly narrows appellate review of Patent Trial and Appeal Board decisions. The court held that challenges to the PTAB's handling of real-party-in-interest disputes under 35 U.S.C. § 312(a)(2) cannot be appealed. The ruling treats RPI objections as integral to the institution decision itself, placing them beyond the scope of review under 35 U.S.C. § 314(d), which makes all institution rulings final and unreviewable absent constitutional violations or actions outside the agency's statutory authority.

Tesla Owners Sue Over Unfulfilled FSD Promises on HW3 Hardware

Tesla faces coordinated class-action litigation across multiple jurisdictions from owners of Hardware 3-equipped vehicles manufactured between 2016 and 2024. The plaintiffs allege that Tesla and Elon Musk made false representations that these vehicles would achieve full self-driving capability through software updates alone. A spring 2026 software release exposed Hardware 3's technical limitations, effectively excluding millions of owners from advanced autonomous features now reserved for newer Hardware 4 systems. The lead case, brought by retired attorney Tom LoSavio, centers on buyers who paid $8,000 to $12,000 for full self-driving capability that is now incompatible with their vehicles without costly hardware retrofits Tesla has not formally offered. Similar suits have been filed in Australia, the Netherlands, and elsewhere in Europe, as well as in California, where one action involves approximately 3,000 plaintiffs. Globally, the disputes affect roughly 4 million vehicles.

Elon Musk Testifies OpenAI "Stole a Charity" by Going For-Profit

Elon Musk testified April 28 in a California courtroom that OpenAI breached a foundational promise by converting from nonprofit to for-profit status. Now valued at $852 billion, OpenAI made the shift despite Musk's 2017 warning that the company should either remain nonprofit or operate independently. "It is not OK to steal a charity," Musk told the court, referencing email exchanges with Sam Altman in which Altman expressed support for the nonprofit model but acknowledged no legal obligation bound the company to it permanently.

Articles Warn Clients Against Feeding Privileged Docs to Consumer AI

On May 8, 2026, The National Law Review and Varnum LLP published advisory articles warning clients against misusing consumer AI tools in legal matters. The pieces detail a specific risk: uploading privileged documents—draft agreements, legal memos, work product—into platforms like ChatGPT or Claude waives attorney-client privilege by exposing confidential information to third parties with no confidentiality obligations. The articles also caution that AI models tend to validate user assumptions rather than provide objective legal analysis, making them unreliable validators of legal advice.

Colorado’s Impending AI Law Thrown Into More Doubt By Court Ruling: What Will Happen Before June 30 Effective Date?

A federal magistrate judge issued a temporary restraining order on April 27, 2026, blocking Colorado from enforcing its artificial intelligence antidiscrimination law (SB 24-205). The order freezes all state investigations and enforcement actions while litigation proceeds and shields companies from penalties for violations occurring within 14 days after the court rules on a preliminary injunction motion. The law was set to take effect June 30.

DOJ Joins xAI Lawsuit to Block Colorado AI Anti-Discrimination Law

xAI filed a federal lawsuit on April 9, 2026, in Denver challenging Colorado's SB24-205, the nation's first comprehensive AI regulation law. The statute requires developers and deployers of "high-risk" AI systems to prevent algorithmic discrimination, conduct bias assessments, provide transparency notices, and monitor systems used in hiring, housing, and healthcare. The law takes effect June 30, 2026. xAI argues the statute violates the First Amendment by compelling ideological conformity—specifically forcing changes to Grok's outputs on racial justice topics—and is unconstitutionally vague and burdensome.

Data as Value – and Risk: Litigation Issues Facing Technology Providers and Their Customers

Organizations across all sectors are facing a wave of litigation over their data practices and AI systems. According to a Baker Donelson report, these legal challenges now extend well beyond technology companies and data brokers to affect organizations of every size that rely on data for operations, network security, regulatory compliance, and contractual obligations. The disputes involve civil liberties groups, workers' advocates, and privacy organizations pursuing claims centered on data privacy violations, algorithmic bias, unauthorized data use, AI system liability, and worker surveillance.

Judge Mehta May Impose Rule 11 Sanctions on Trump DOJ Lawyers Over Ballroom Filing

Judge Amit Mehta is considering imposing Rule 11 professional sanctions against the top three lawyers at the Trump Department of Justice after they filed a motion in a White House ballroom construction case that courts and legal observers characterized as legally deficient and improper. The filing, submitted by Acting Attorney General Todd Blanche's office in support of a ballroom project on the site of the former East Wing, abandoned standard legal argumentation in favor of political rhetoric—including references to "Trump Derangement Syndrome," labeling opposing arguments "FAKE," and praising the President as a "highly successful real estate developer."

LegalPlace Secures €70M; Jurisphere Raises $2.2M for Global Expansion

French legal tech platform LegalPlace closed a €70 million funding round, marking the largest capital raise in recent legal tech activity. The Paris-based business formation platform, which helps entrepreneurs launch companies online, is capitalizing on France's growing legal tech sector. Separately, Jurisphere.ai, an India-based startup founded in 2024 by Manas Khandelwal, Varun Khandelwal, and Sumit Ghosh, secured $2.2 million in seed funding from backers including InfoEdge Ventures, Flourish Ventures, Antler, and 8i Ventures. Jurisphere offers AI-native legal research, drafting, and document review tools built for Indian legal workflows and now serves over 500 teams.

Alston & Bird Publishes April 2026 AI Quarterly Review of Key U.S. Laws and Policies

Congress moved on two fronts in late March to shape AI regulation. On March 26, bipartisan lawmakers introduced H.R. 8094, the AI Foundation Model Transparency Act, requiring developers of large language models to disclose training methods, purposes, risks, evaluation protocols, and monitoring practices. The bill imposes no affirmative regulation—only disclosure obligations. One week earlier, the Trump Administration released its National Policy Framework for Artificial Intelligence, a non-binding document recommending Congress adopt unified federal standards across seven areas: child protection, AI infrastructure, intellectual property, free speech, innovation, workforce development, and preemption of state law. The framework followed Senator Marsha Blackburn's March 18 discussion draft of the Trump America AI Act, which would codify President Trump's December 2025 executive order directing federal preemption of state AI laws.

SDNY Rules AI Tools Waive Privilege in US v. Heppner

A federal judge in Manhattan has ruled that a financial services executive waived attorney-client privilege and work product protection by using Anthropic's Claude AI tool without his lawyers' involvement. In United States v. Heppner, Judge Jed S. Rakoff ordered disclosure of 31 strategy documents the defendant generated after inputting case details derived from attorney communications. The court found that Claude, as a non-attorney third party, lacks fiduciary duties, and that Anthropic's privacy policy—which permits data use for training and third-party sharing—destroyed any reasonable expectation of confidentiality. This marks the first federal decision of its kind, rejecting the defendant's argument that later sharing the materials with counsel could retroactively restore privilege protection.

White House pushes federal AI review standards to eliminate "ideological bias"

The Trump administration has established federal review procedures for artificial intelligence systems across government agencies through an executive order titled "Preventing Woke AI in the Federal Government," issued in July 2025 alongside America's AI Action Plan. The order requires federal agencies to implement "Unbiased AI Principles" for large language models in procurement decisions. The Office of Management and Budget must issue implementing guidance within 90 days, after which agencies have an additional 90 days to revise existing contracts and adopt compliance procedures.

StrongSuit CEO Warns of AI Automation Risks in High-Stakes Litigation

Justin McCallon, CEO of StrongSuit, published commentary on Law360 arguing that AI-driven automation will reshape legal work—but only if it clears a uniquely high bar. Unlike most industries, litigation tolerates near-zero error rates. McCallon positioned StrongSuit's platform, which automates legal research, drafting, and document review, as engineered specifically for this constraint rather than general-purpose AI capability.

Q1 2026 AI Agents Spark IP Debates in Software Development

In the first quarter of 2026, autonomous AI workflow agents including Openclaw demonstrated the ability to generate production-ready software directly from user specifications. The capability triggered immediate debate over intellectual property ownership, developer liability, and the legal framework governing autonomously generated code.

Palantir CEO Karp slams AI "slop" amid fears of losing business to rival models

Palantir CEO Alex Karp has publicly attacked low-quality AI outputs as "slop," positioning the company's AI Platform (AIP) as a secure, enterprise-grade alternative built on its Foundry data infrastructure. The criticism comes as Palantir faces investor concerns that it may lose market share to cheaper, faster standalone large language models from OpenAI and Anthropic—competitors that don't require Palantir's ontology-based data backbone.

Venable Podcast Examines AI-IP Law Differences in China, UK, US

Venable LLP hosted a special episode of its podcast AI and IP: The Legal Frontier on April 30, 2026, bringing together Justin Pierce (co-chair of Venable's Intellectual Property Division), Jason Yao of China's Wanhuida law firm, and Toby Bond of UK-based Bird & Bird to examine how artificial intelligence is fracturing intellectual property law across jurisdictions. The discussion centered on three distinct regulatory approaches: China's willingness to protect AI-generated outputs when meaningful human input is present; the UK and EU's insistence on human authorship and originality; and the US framework built on human contribution and fair use doctrine.

Federal Court Rules AI Chatbot Communications Not Protected by Attorney-Client Privilege

On February 17, 2026, Judge Jed S. Rakoff of the U.S. District Court for the Southern District of New York ruled in United States v. Heppner that a criminal defendant's communications with Anthropic's Claude AI platform were not protected by attorney-client privilege or work product doctrine. The defendant had used the public chatbot to create analysis documents after receiving a grand jury subpoena, then claimed privilege when sharing them with counsel. The court ordered disclosure to the government.

Judge Fines Lindell Lawyer $5K for 2nd False Case Citation

U.S. District Judge Nina Y. Wang sanctioned attorney Christopher Kachouroff and his law firm $5,000 on May 8, 2026, for submitting a defamation brief with a materially incorrect citation while defending MyPillow CEO Mike Lindell. The error was obvious and reflected a failure to reasonably review the document before filing, Wang ruled, rejecting Kachouroff's explanation that the mistake was simple human error. Lindell, his media company, and co-counsel Jennifer T. DeMaster escaped penalty on this sanction, though DeMaster faced consequences in an earlier ruling.

Sanders and AOC call for federal AI moratorium amid regulatory debate

Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez have introduced a proposal for a federal moratorium on AI development and data centers, characterizing artificial intelligence as an "imminent existential threat." The call for restrictions has crystallized a fundamental policy divide: whether AI requires aggressive regulatory intervention or a risk-based approach that permits innovation while addressing specific harms.

Law Firms Urged to Educate Staff on AI Amid Client Pressures

Law firms are hemorrhaging money on artificial intelligence tools they don't understand and won't use, according to analysis published May 4, 2026, in Above the Law and Tech Law Crossroads. Firms facing client pressure to deploy AI are panic-buying software without first establishing internal competency—resulting in wasted spending, abandoned platforms, and disappointed clients. The core problem: decision-makers lack basic literacy on how AI actually works, what it can and cannot do, and which tools fit specific practice needs. The recommended fix is straightforward: mandatory education on AI fundamentals for lawyers, firm leadership, and business development staff before any vendor selection or client pitch.

Indiana Judge Rules AI Cannot Substitute for Attorney Review in Discovery

On April 14, 2026, Magistrate Judge Tim A. Baker of the U.S. District Court for the Southern District of Indiana issued an order in White v. Walmart (Case No. 25-cv-01120) sanctioning plaintiff's counsel for relying exclusively on artificial intelligence to identify deficiencies in the defendant's discovery responses. The court held that while AI can serve as a useful tool, it cannot substitute for attorney judgment and does not satisfy the Federal Rules of Civil Procedure's requirement that parties meet and confer in good faith before escalating discovery disputes.

Ninth Circuit Revives Target Thread Count Class Action

On April 17, the Ninth Circuit reversed a district court's dismissal of a putative class action alleging Target sold 100% cotton bedsheets with fraudulent thread counts. Plaintiff Alexander Panelli claimed he purchased sheets labeled 800-thread-count in September 2023 that tested at only 288 threads per inch. He asserted the label was literally false under California consumer protection law, since 600 thread count is the physical maximum for pure cotton. The district court had dismissed the case, reasoning no reasonable consumer would believe an impossible claim. Target argued the thread count measurement itself was ambiguous and therefore not deceptive as a matter of law.

Oregon Appellate Court Sanctions Lawyer with $10K Fine for AI-Hallucinated Brief Citations

The Oregon Court of Appeals has sanctioned Salem attorney William Ghiorso with a $10,000 fine for submitting an opening brief containing at least 15 fabricated case citations and 9 nonexistent quotations. The court attributed the errors to AI "hallucinations"—instances where a generative AI model produced convincing but false legal information. The penalty marks the first time an Oregon appellate court has considered attorney fees as a sanction alternative to fines, though it ultimately imposed the monetary penalty after Ghiorso implemented new safeguards.

Seven Families Sue OpenAI Over Suspect's ChatGPT Use in 2025 FSU Shooting

Seven families of victims from a 2025 Florida State University mass shooting have filed lawsuits against OpenAI, claiming the company negligently failed to alert law enforcement about the suspect's extensive ChatGPT interactions. The suits allege that Phoenix Ikner, the accused gunman now awaiting trial, maintained constant communication with the chatbot and may have received guidance on executing the attack. The families are pursuing negligence claims, arguing OpenAI breached its duty of care by failing to flag foreseeable harm despite the chatbot's design and the nature of the interactions.

China's SPP Releases First Bilingual 2025 IP Prosecution White Paper

China's Supreme People's Procuratorate released its first bilingual White Paper on Intellectual Property Prosecution Work on April 21, 2026, documenting enforcement activity across criminal, civil, administrative, and public interest litigation. The SPP reported accepting or reviewing 11,341 criminal IP infringement cases involving 25,160 individuals in 2025, prosecuting 9,135 cases with 19,102 defendants while declining to prosecute 5,105. The agency also handled 1,251 civil IP cases, 1,795 administrative cases, and 612 public interest cases. Simultaneously, the SPP issued 10 model cases in emerging sectors including chip manufacturing, photovoltaics, and industrial software, along with an annual report on IP crimes.

Ex-Wachtell lawyer in insider trading ring later joined investment bank

The Department of Justice unsealed charges Wednesday against 30 individuals in a decade-long insider trading scheme centered on nonpublic information from major M&A transactions. Nicolo Nourafchan, a Yale Law graduate who worked at Sidley Austin, Latham & Watkins, Cleary Gottlieb, and Goodwin Procter, led the conspiracy. Participants traded on confidential deal details including Occidental Petroleum's $55 billion acquisition of Anadarko in 2019 and Burger King's $11 billion takeover of Tim Hortons in 2014. The scheme leveraged Nourafchan's recruitment of law school classmates positioned at major firms with M&A access. A former Wachtell Lipton lawyer and Yale classmate of Nourafchan has been identified as a co-conspirator; he later worked at an investment bank. The Southern District of New York is prosecuting the criminal case while the SEC pursues parallel civil charges.

From Human-in-the-Loop to Human-at-the-Helm: Navigating the Ethics of Agentic AI

The legal profession is shifting from reactive oversight of AI systems to proactive governance designed for autonomous tools. As artificial intelligence has evolved from generative systems that produce text on demand to agentic systems capable of independent action—sending emails, populating filings, modifying records—the traditional model of lawyers reviewing AI output after completion has become inadequate. Legal ethics experts are now calling for "human-at-the-helm" governance that establishes parameters and controls what AI is permitted to do before it acts, rather than inspecting results afterward.

AI Disrupts Law Firm Billable Hour Model, Boosting Efficiency

Legal AI tools are reshaping law firm economics. Document review, drafting, and research are now 60–70% faster, with individual attorneys expected to save 190–240 billable hours annually. Thomson Reuters' 2025 Future of Professionals Report quantifies this as $20–32 billion in time savings across the U.S. market. Major clients—Meta, Zscaler, UBS—are already demanding "AI discounts" and refusing to pay for work automatable by machine. The pressure is immediate and client-driven.

USPTO Launches AI Image Search Tool for Trademark Clearance

The U.S. Patent and Trademark Office launched a beta AI-powered image search tool in April 2026 that lets users upload images to retrieve visually similar marks from the federal register. Accessed through a camera icon on the trademark search system, the tool functions like reverse image search—users log into their USPTO.gov account, upload an image or link, and receive results showing marks with related design elements. The USPTO announced the tool alongside other AI enhancements, including a mark description generator and the Trademark Classification Agentic Codification Tool (Class ACT), which automates backend classification work that previously took months.

Virginia Poised to Enact Class Action Law, Ending 175-Year Ban

Virginia is poised to become the 49th state to authorize civil class actions in state courts. Governor Abigail Spanberger is expected to sign Senate Bill 229 and House Bill 449, legislation that would overhaul how multi-party civil claims proceed in Virginia starting January 1, 2027. The House of Delegates passed HB 449 on a 64-34 vote in early February 2026, and SB 229 has cleared the Senate Finance and Appropriations Committee. The bills were sponsored by Senator Surovell and Delegate Marcus Simon.

Culture is where AI strategy goes to die. Here’s how to jump-start an AI-ready culture in 90 days

A 90-day cultural transformation framework has emerged as an alternative to mass workforce replacement during AI adoption, directly responding to IgniteTech CEO Eric Vaughan's controversial 2025 decision to terminate approximately 80% of his staff after employees resisted AI tools despite substantial training investment. Organizational researchers and business leaders have synthesized a three-phase approach—Diagnose, Rewire, Embed—designed to build AI-ready cultures without layoffs. The framework rests on a core finding: cultural misalignment, not technological incapacity, drives AI transformation failures. Writer's 2025 enterprise AI adoption report documents that nearly one-third of employees actively sabotage AI rollouts, with resistance particularly pronounced among technical staff and Gen Z workers (41% report active sabotage).

Trump Admin Releases National AI Framework on March 20, 2026

On March 20, 2026, the Trump administration released the "National Policy Framework for Artificial Intelligence: Legislative Recommendations," a detailed statutory blueprint that would establish uniform federal AI policy and preempt most state regulations. The Framework, mandated by a December 2025 executive order, proposes that Congress delegate AI development oversight to existing sector-specific agencies rather than create a new federal regulator. It would allow states limited authority only in narrow areas: child safety, fraud prevention, zoning, and government procurement. The administration has tasked the Department of Justice with challenging state AI laws through a dedicated task force, while the Department of Commerce will evaluate state regulations deemed "onerous," and the Federal Trade Commission will enforce preemption policies on deceptive practices.

Judge Brown Rejects DOJ Reconsideration Motion in ICE Arrest Case

A federal judge in the Eastern District of New York has rejected the Department of Justice's motion to reconsider an earlier ruling against ICE, instead using the government's own request to demand a substantive compliance plan. Judge Brown identified several constitutional and statutory violations by ICE agents, including an administrative warrant issued only after the arrest, revocation of the petitioner's deferred action status without explanation, and systematic obstruction of detainee access to counsel. The judge gave DOJ 21 days to detail how it would remedy the violations. The government's reconsideration motion offered no meaningful response, prompting the judge to characterize the DOJ's arguments as frivolous, misleading, and meritless.

Meloni Posts AI-Generated Nude to Warn of Deepfake Danger

On May 5, 2026, Italian Prime Minister Giorgia Meloni reposted an AI-generated image of herself in lingerie across her social media accounts—deliberately amplifying a fake that had circulated online. Rather than ignore it, she republished the image herself with a warning about synthetic media dangers, joking that the creators had "improved" her appearance. The move was framed as a public service announcement demonstrating how convincingly AI can fabricate imagery.

GrayRobinson Hit with Additional Lawsuits Over 2025 Data Breach

GrayRobinson, P.A., a Florida-based law and lobbying firm, disclosed a cybersecurity breach affecting 65,113 individuals. Unauthorized actors accessed the firm's network between March 5 and March 24, 2025, potentially exposing names, Social Security numbers, dates of birth, driver's licenses, financial account information, and protected health information. The firm detected the intrusion on March 24, secured its systems, notified law enforcement, and engaged external cybersecurity experts. The forensic investigation concluded April 13, 2026. Notifications to affected individuals began April 24, 2026, with regulatory reports filed to state attorneys general including California and Maine. GrayRobinson offered complimentary Experian IdentityWorks credit monitoring and reported no evidence of actual misuse.

GrayRobinson Faces Class Action Over 2025 Data Breach Negligence

GrayRobinson, P.A., a Florida-based law firm, disclosed a data breach affecting 65,113 individuals after unauthorized actors accessed the firm's network between March 5 and March 24, 2025, potentially exposing names, Social Security numbers, and other sensitive personal information. The firm detected the intrusion on March 24, secured its systems, notified law enforcement, and retained third-party investigators. A forensic review completed in April 2026 confirmed the exposure, and GrayRobinson sent breach notices on April 24, 2026. The firm is offering two years of free identity monitoring through Experian. No evidence of actual misuse has emerged.

Article Shares Tips for Collaborating with Counterparties on AI in Contract Talks

A National Law Review contributor published practical guidance on April 28, 2026, for managing AI-assisted contract negotiations with counterparties. The article recommends four core strategies: asking counterparties directly whether they are using AI tools, providing detailed context to improve AI-generated outputs, anticipating how AI systems will respond to specific proposals, and reframing negotiations around shared objectives rather than adversarial positioning. The piece reflects a market shift toward AI-powered contract platforms—including tools from Clio, Ironclad, Bind, and GC.ai—that automate redlining, clause comparison, and deviation tracking. These systems have cut contract review cycles from between 30 and 90 minutes per round down to seconds, with firms reporting 30 to 50 percent faster negotiations overall.
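The deviation-tracking workflow these platforms automate can be illustrated with a minimal sketch using Python's standard `difflib` to flag where a counterparty's proposed clause departs from a firm's playbook language. The clause text, function name, and similarity threshold here are hypothetical illustrations; commercial platforms use far more sophisticated semantic comparison than word-level diffing.

```python
import difflib

def flag_deviations(playbook_clause: str, proposed_clause: str,
                    threshold: float = 0.95):
    """Compare a proposed clause against playbook language.

    Returns the word-level similarity ratio and, when the clause
    falls below the acceptance threshold, the edits that caused it.
    """
    playbook_words = playbook_clause.split()
    proposed_words = proposed_clause.split()
    matcher = difflib.SequenceMatcher(None, playbook_words, proposed_words)
    ratio = matcher.ratio()
    # Collect every non-matching opcode as (operation, old text, new text).
    edits = [
        (op, " ".join(playbook_words[i1:i2]), " ".join(proposed_words[j1:j2]))
        for op, i1, i2, j1, j2 in matcher.get_opcodes()
        if op != "equal"
    ]
    return ratio, (edits if ratio < threshold else [])

playbook = "Either party may terminate this agreement on thirty days written notice"
proposed = "Either party may terminate this agreement on ten days written notice"
ratio, edits = flag_deviations(playbook, proposed)
print(f"similarity {ratio:.2f}, deviations: {edits}")
```

Running the example flags the substitution of "ten" for "thirty" as a deviation, which is exactly the kind of quiet, high-stakes edit the article suggests anticipating before a counterparty's AI tooling proposes it.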

Washington Gov. Ferguson Signs HB 2225 Requiring AI Companion Chatbot Disclosures

Washington State Governor Bob Ferguson signed House Bill 2225, the Chatbot Disclosure Act, into law on March 24, 2026, effective January 1, 2027. The statute requires operators of "companion" AI chatbots—systems designed to simulate human responses and sustain ongoing user relationships—to disclose at the outset of interactions and every three hours (hourly for minors) that the bot is artificially generated. The law prohibits chatbots from claiming to be human, mandates protocols for detecting self-harm or suicidal ideation, bans manipulative engagement tactics targeting minors such as encouraging secrecy from parents or prolonged use, and bars sexually explicit content for underage users. Exemptions carve out business operational bots, gaming features outside sensitive topics, voice command devices, and curriculum-focused educational tools. Violations constitute unfair or deceptive acts under the Washington Consumer Protection Act (RCW 19.86), enforceable by the Attorney General and through a private right of action allowing consumers to recover actual damages, with treble damages capped at $25,000.
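The disclosure cadence described above, at the outset of an interaction and then every three hours, or hourly for minors, reduces to a simple timer check. The sketch below is a hypothetical illustration of that cadence only, not a compliance implementation; the function and constant names are invented, and the intervals simply mirror the summary above.

```python
from datetime import datetime, timedelta
from typing import Optional

# Intervals per the summary above: every 3 hours for adult users,
# hourly for minors; the disclosure always fires at session start.
ADULT_INTERVAL = timedelta(hours=3)
MINOR_INTERVAL = timedelta(hours=1)

def disclosure_due(last_disclosure: Optional[datetime],
                   now: datetime, is_minor: bool) -> bool:
    """Return True when the 'this is an AI' disclosure must be shown."""
    if last_disclosure is None:  # start of the interaction
        return True
    interval = MINOR_INTERVAL if is_minor else ADULT_INTERVAL
    return now - last_disclosure >= interval

start = datetime(2027, 1, 1, 9, 0)
assert disclosure_due(None, start, is_minor=False)                       # outset
assert not disclosure_due(start, start + timedelta(hours=2), is_minor=False)
assert disclosure_due(start, start + timedelta(hours=1), is_minor=True)  # minor cadence
```

The point of the sketch is structural: because the required interval depends on the user's age status, operators would need reliable minor identification before the timer logic can even be parameterized, which is where much of the practical compliance difficulty lies.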

Stanford study finds 35% of new websites AI-generated by May 2025

A collaborative study by Stanford University, Imperial College London, and the Internet Archive has quantified the rapid proliferation of AI-generated content online. Analyzing web pages from 2022 through May 2025 using the Wayback Machine and AI-detection methods, researchers found that 35.3% of newly published websites were AI-generated or AI-assisted, with 17.6% fully AI-generated. Stanford AI researcher Jonáš Doležal characterized the speed of this shift as "staggering" in recent interviews.

DOJ's Lead Prosecutor on Law Firm Appeals to Exit Role End of May

Abhishek Kambli, the Deputy Associate Attorney General who led the Trump administration's defense of executive orders targeting four major law firms, announced his departure from the DOJ effective end of May 2026. Kambli joined the department in February 2025 and oversaw litigation defending orders that barred Perkins Coie, Jenner & Block, WilmerHale, and Susman Godfrey from federal contracts, buildings, and employment based on their representation of administration opponents. All four firms challenged the orders in federal court; all won injunctions on constitutional grounds. The DOJ appealed to the D.C. Circuit, then abruptly moved to dismiss those appeals on March 2, 2026—only to reverse course the next day when Kambli filed to withdraw the dismissal motion.

College Student Sues Meete Dating App for Repurposing Her TikTok Video in Ads

A University of Tennessee nursing student has sued Meete, a dating app operated by British Virgin Islands–based Quantum Communications, alleging the company stole her public TikTok graduation video and weaponized it for targeted advertising. Elena Lunglhofer claims Meete overlaid the video with app graphics, added a synthetic voiceover in which she appeared to solicit men for casual encounters, and used geotargeting to serve the ad on Snapchat to users near her campus, including residents of her dormitory. She discovered the misuse when a male student showed her screenshots of the ad. Attorney Abe Pafford filed suit on April 28, 2026, in Tennessee state court, asserting claims for misappropriation of likeness, right of publicity violations, and emotional distress.

Nonprofit Volunteer Sues DLA Piper for Malicious Prosecution in Chipotle-Referred Fraud Case

Jeremy Whiteley, a former nonprofit volunteer board member, filed a malicious-prosecution complaint against DLA Piper on May 8, 2026, in California state court. Whiteley alleges the firm aggressively pursued a Computer Fraud and Abuse Act lawsuit against him at the behest of Chipotle's then-general counsel, who referred the matter. The underlying CFAA case, which Whiteley successfully defended, allegedly lacked merit. Whiteley seeks $1.8 million in damages, representing defense costs incurred during the underlying litigation.

Clio Report: 71% of Small Law Firms Use AI, But Revenue Growth Lags Larger Competitors

Clio's 2026 Legal Trends report exposes a widening performance gap between small law firms and their larger competitors despite widespread AI adoption. While 71% of solo practitioners and 75% of small firms now use AI tools, fewer than 33% have increased revenues—a sharp contrast to enterprise firms where nearly 60% report revenue growth tied to AI implementation.

AI Drives 85K Tech Layoffs in 2026 Despite Overall Job Cut Decline

Technology companies eliminated over 85,000 jobs explicitly attributed to AI adoption in the first four months of 2026, a sharp acceleration from the 55,000 AI-linked cuts announced in all of 2025. Amazon, Accenture, Atlassian, Coinbase, Snap, Block, and Oracle announced reductions ranging from 10 to 30 percent of their workforces, with executives citing automation, operational efficiency, and repositioning for an "AI era." The cuts span entry-level through mid-career roles in programming, customer service, and administrative functions. WARN notices and SEC filings document the reductions, though no federal legislation or agency action has been triggered.

Freshfields Signs Multi-Year AI Partnership with Anthropic for Claude Deployment

Freshfields Bruckhaus Deringer announced a multi-year partnership with Anthropic on April 23, 2026, to deploy Claude AI models across its 33 offices and 5,700 employees. The rollout will occur through Freshfields' proprietary AI platform, with the firm and Anthropic jointly developing legal-specific workflows and agentic tools for contract review, legal research, due diligence, and document drafting. Usage of Claude surged 500% within the first six weeks of deployment. The partnership roadmap includes early access to new Anthropic models and expansion to Anthropic's Cowork agentic platform. Freshfields Lab, led by Partner and Co-Head Gerrit Beckhaus, is driving the collaboration alongside Anthropic's legal and product teams.

OpenAI CEO Sam Altman Faces Mounting Pressure Ahead of IPO

OpenAI and CEO Sam Altman face mounting pressure as the company prepares for a potential 2026 public offering. The intensifying scrutiny spans multiple fronts: internal competitive tensions with Anthropic, activist opposition, and legal proceedings. Most notably, Chief Revenue Officer Denise Dresser circulated a memo challenging Anthropic's financial claims, alleging inflated revenue through accounting methods and strategic errors in compute acquisition. Anthropic currently reports $30 billion in annualized revenue compared to OpenAI's last reported $25 billion. Separately, an activist group called Stop AI has conducted ongoing protests at OpenAI headquarters, with some members facing criminal trial for blocking access to the building. Altman was served a subpoena onstage in San Francisco in late April while speaking with basketball coach Steve Kerr, requiring him to testify as a witness in that criminal case.

Ex-Tesla HR Exec Advises Class of 2026 on Thriving Amid AI Job Disruption

A former Tesla HR executive who scaled the automaker's workforce to 100,000 delivered a commencement address to California State University, San Bernardino's Class of 2026 outlining a five-point strategy for competing in an AI-disrupted labor market. Valerie, who previously led talent acquisition at Handshake, urged graduates to view degrees as "navigational foundations" rather than job guarantees, to partner strategically with AI tools rather than resist them, to emphasize emotional intelligence over automatable tasks, to prioritize in-person networking, and to adopt "back-casting"—working backward from 12-month career goals to identify necessary moves. The speech directly counters narratives that higher education has become obsolete, instead positioning human judgment and contextual empathy as enduring competitive advantages.

BakerHostetler Podcast on USPTO's AI Strategy and Guidance Evolution

BakerHostetler released a podcast in April 2026 synthesizing the USPTO's evolving approach to artificial intelligence across patent operations, policy, and practice. The discussion centers on the agency's January 2025 Artificial Intelligence Strategy, which established five pillars: fostering responsible AI innovation, enhancing intellectual property policies, building AI infrastructure, promoting ethical use, and developing workforce expertise. The strategy builds on Executive Order 14110 (October 2023), which directed the USPTO to issue guidance on AI inventorship and patent eligibility. The agency has since revised its inventorship standards to require significant human contribution and bar AI as an independent inventor, and updated patent eligibility determinations under the Alice/Mayo framework in July 2024. Internally, the USPTO deployed SCOUT, a generative AI tool used by over 200 examiners for prior art analysis and cybersecurity tasks.

LawSnap Briefing Updated May 11, 2026

State of play.

  • The Trump DOJ has taken a structural position against state AI antidiscrimination law. DOJ intervened in xAI's challenge to Colorado SB24-205, arguing the statute violates Equal Protection by compelling demographic adjustments—a posture that frames federal preemption of state AI regulation as an active enforcement priority (→ DOJ Intervenes in xAI Lawsuit to Block Colorado's AI Discrimination Law[1][2][3], DOJ Joins xAI Lawsuit to Block Colorado AI Anti-Discrimination Law[1][2][7]).
  • Colorado SB24-205 is under a TRO with its June 30 effective date in doubt. A federal magistrate issued a temporary restraining order on April 27; the Colorado AG has declined to defend enforcement pending legislative revision; and the legislature's session ended May 13—leaving successor legislation as the only viable path (→ Colorado’s Impending AI Law Thrown Into More Doubt By Court Ruling: What Will Happen Before June 30 Effective Date?).
  • The Musk v. OpenAI trial is in progress, with Brockman's diary as live evidence and the nonprofit-to-for-profit conversion theory under direct examination—creating the first substantial judicial record on founder fiduciary duties in AI ventures (→ Brockman's Diary Revealed in Musk-OpenAI Trial First Week).
  • DOJ has indicted three individuals tied to Super Micro for allegedly diverting $2.5 billion in AI servers to China, triggering parallel SEC review, investor class actions, and an independent investigation by Munger, Tolles & Olson—signaling heightened criminal enforcement of export controls on advanced semiconductor technology (→ DOJ export indictment triggers new probe of Super Micro’s controls).
  • For counsel advising AI developers, enterprise deployers, or technology companies with China-facing supply chains, the practical baseline is a simultaneous federal preemption push against state AI regulation and escalating criminal export-control enforcement—two vectors that require distinct but coordinated compliance postures.

Active questions and open splits.

  • Federal preemption scope for state AI regulation. The Colorado litigation will test whether First Amendment compulsion, Commerce Clause, and Equal Protection theories collectively disable state algorithmic-discrimination frameworks — and whether DOJ's intervention posture extends to other state AI statutes beyond Colorado.
  • Successor legislation viability after SB24-205. With Colorado's legislative session closed and the TRO in place, the question is whether any revised statute can survive the constitutional objections now on record — or whether the federal preemption play effectively ends comprehensive state AI antidiscrimination law as a viable regulatory form.
  • Founder fiduciary duties in AI venture conversions. The Musk v. OpenAI trial is generating the first substantial judicial record on whether departed board members can assert breach of fiduciary duty and contract claims arising from a nonprofit-to-for-profit conversion — with direct implications for how AI governance documents and founder agreements are drafted.
  • Export-control liability allocation in AI hardware supply chains. The Super Micro indictment raises unresolved questions about how far up the corporate hierarchy criminal and civil liability travels when a third-party intermediary is used — and what trade-compliance program adequacy looks like for companies with Taiwan and China-facing operations.
  • Agentic AI malpractice exposure and the governance standard. No court or bar authority has yet defined what "adequate supervision" means for agentic AI systems that act autonomously — the gap between the emerging "human-at-the-helm" framework and enforceable professional responsibility standards remains wide.
  • State vs. federal synthetic performer regimes. New York's June 2026 consent and disclosure requirements, California's parallel statutes, the pending federal NO FAKES Act, and the White House's preemption EO are on a collision course — brands and agencies face layered and potentially conflicting obligations with no harmonization mechanism in place.
  • Enterprise AI contract renegotiation triggers. As commodity LLMs undercut integrated platform pricing, the question of whether material-adverse-change clauses, benchmarking provisions, or competitive-alternatives language in existing AI platform contracts support renegotiation or exit is unresolved and client-facing.

What to watch.

  • Whether Colorado enacts successor legislation to SB24-205 and whether DOJ signals acceptance or renewed challenge — the outcome will define the template for federal treatment of state AI antidiscrimination law nationally.
  • Preliminary injunction ruling in the Colorado case, which will test whether the TRO's constitutional reasoning holds and whether the "reasonably knowable" compliance standard survives scrutiny.
  • Trial developments in Musk v. OpenAI — specifically, how the court treats the nonprofit founding documents and whether any ruling on fiduciary duty reaches the merits before settlement.
  • Super Micro independent investigation findings and whether DOJ expands the indictment to reach corporate officers — the first signal of how broadly criminal export-control enforcement will sweep in the AI hardware sector.
  • New York Department of Labor's model agency registration framework, due by June 2026, and any enforcement actions under the synthetic performer disclosure laws — the first test of how the consent-and-disclosure regime operates in practice.
  • EU AI Act labeling obligations taking effect August 2026 and whether they create compliance conflicts for brands already subject to New York's synthetic performer rules.
