AI Enterprise Adoption

Tracking how enterprises - law firms, finance, tech, and regulated industries - are restructuring around AI: hiring, capital deployment, workforce friction, and the new operating models replacing legacy ones.

20 entries in Tech Counsel Tracker

Palantir CEO Karp slams AI "slop" amid fears of losing business to rival models

Palantir CEO Alex Karp has publicly attacked low-quality AI outputs as "slop," positioning the company's AI Platform (AIP) as a secure, enterprise-grade alternative built on its Foundry data infrastructure. The criticism comes as Palantir faces investor concerns that it may lose market share to cheaper, faster standalone large language models from OpenAI and Anthropic—competitors that don't require Palantir's ontology-based data backbone.

OpenAI Winds Down Instant Checkout as ACP-Based Agentic Commerce Stalls

OpenAI's Instant Checkout feature, launched in September 2025 through a partnership with Shopify and Stripe, quietly shut down in March 2026 after failing to gain merchant adoption. The service, built on the Agentic Commerce Protocol (ACP), enabled direct purchases within ChatGPT but supported only a limited merchant base—fewer than 30 Shopify stores went live alongside platforms like Etsy and Glossier. The core problem: the protocol lacked flexibility for complex checkout scenarios involving loyalty programs, promotional codes, and real-time inventory management. OpenAI's pivot to merchant-led checkout infrastructure marked a significant retreat from its initial vision of seamless in-chat commerce.

Law Firms Urged to Educate Staff on AI Amid Client Pressures

Law firms are hemorrhaging money on artificial intelligence tools they don't understand and won't use, according to analysis published May 4, 2026, in Above the Law and Tech Law Crossroads. Firms facing client pressure to deploy AI are panic-buying software without first establishing internal competency—resulting in wasted spending, abandoned platforms, and disappointed clients. The core problem: decision-makers lack basic literacy on how AI actually works, what it can and cannot do, and which tools fit specific practice needs. The recommended fix is straightforward: mandatory education on AI fundamentals for lawyers, firm leadership, and business development staff before any vendor selection or client pitch.

Emanate launches AI agents for faster industrial materials quoting

Emanate, a San Francisco startup led by CEO Kiara Nirghin, has built AI agents designed to accelerate sales cycles in industrial materials—steel, aluminum, wire, pipe, and manufactured components. The platform automates quote generation, compressing timelines from 3–4 weeks to near-instant responses by connecting to customer ERP systems, historical sales data, emails, and PDFs. Implementation requires 8–12 weeks per customer to identify data sources and establish secure integrations, with ongoing refinement afterward. The company measures success on client revenue growth targets of 40% or higher, not merely cost reduction.

AI Disrupts Law Firm Billable Hour Model, Boosting Efficiency

Legal AI tools are reshaping law firm economics. Document review, drafting, and research are now 60–70% faster, with individual attorneys expected to save 190–240 billable hours annually. Thomson Reuters' 2025 Future of Professionals Report quantifies this as $20–32 billion in time savings across the U.S. market. Major clients—Meta, Zscaler, UBS—are already demanding "AI discounts" and refusing to pay for work automatable by machine. The pressure is immediate and client-driven.

Clio Report: 71% of Small Law Firms Use AI, But Revenue Growth Lags Larger Competitors

Clio's 2026 Legal Trends report exposes a widening performance gap between small law firms and their larger competitors despite widespread AI adoption. While 71% of solo practitioners and 75% of small firms now use AI tools, fewer than 33% have increased revenues—a sharp contrast to enterprise firms where nearly 60% report revenue growth tied to AI implementation.

Falcon Rappaport & Berkman Opens Newark AI-Native Law Office

Falcon Rappaport & Berkman has opened a dedicated Newark office at 3 Gateway Center designed as an AI-native incubator for the firm. The office will develop agentic AI tools to enhance client and attorney services across all practice areas, operating as the operational hub for the firm's artificial intelligence capabilities.

When enterprise AI finally works, it won’t look like AI

Enterprise organizations are abandoning the chatbot-first approach that dominated 2024–2025 in favor of embedded AI systems designed directly into operational workflows. Rather than prompt-based interfaces layered onto existing processes, leading companies—including those studied by McKinsey, Deloitte, and Microsoft—are fundamentally redesigning business operations around persistent, governed AI infrastructure. This represents a shift from "tools you use" to "systems your company becomes," where intelligence operates invisibly within core workflows instead of as a visible user-facing application. Anthropic and IBM are formalizing this architectural approach through guidance on context engineering and runtime governance, prioritizing auditability and constraint management over raw model capability.

AI Software Firms Shift from Per-User to Work-Based Pricing Models

Major AI software vendors are abandoning per-seat licensing in favor of consumption-based pricing tied to work output. Salesforce now charges for "agentic work units," while Workday bills based on "units of work" completed. OpenAI CEO Sam Altman has signaled the industry will shift toward "selling tokens"—the computational units underlying AI processing—positioning artificial intelligence as a utility priced like electricity or water.
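To make the structural difference concrete, here is a minimal sketch of the two billing models. All rates, unit names, and cap terms are hypothetical illustrations, not any vendor's actual pricing; the point is that per-seat cost is a function of headcount while consumption cost is a function of completed work, which is why undefined "work unit" measurement matters so much.

```python
# Illustrative sketch only — hypothetical rates and unit names, not
# Salesforce's, Workday's, or OpenAI's actual pricing schemes.
from typing import Optional

def per_seat_cost(seats: int, price_per_seat: float) -> float:
    """Classic licensing: cost scales with headcount, not usage."""
    return seats * price_per_seat

def consumption_cost(work_units: int, rate_per_unit: float,
                     monthly_cap: Optional[float] = None) -> float:
    """Work-based billing: cost scales with completed 'work units'.
    A negotiated cap clause, if present, bounds monthly spend."""
    cost = work_units * rate_per_unit
    return min(cost, monthly_cap) if monthly_cap is not None else cost

# A 500-seat org at $30/seat pays the same whether agents run or not:
seat_bill = per_seat_cost(500, 30.0)                      # 15000.0

# Under consumption pricing, a quiet month is cheap, a busy one is not:
quiet = consumption_cost(10_000, 0.50)                    # 5000.0
busy = consumption_cost(120_000, 0.50)                    # 60000.0

# A cost-cap term — one of the contract provisions still unstandardized:
capped = consumption_cost(120_000, 0.50, monthly_cap=40_000)  # 40000.0
```

The unresolved contractual questions flagged later in this briefing map directly onto the parameters above: what counts as one `work_unit`, who audits the meter, and whether a cap exists at all.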

Palantir raises 2026 revenue forecast to $7.2B on strong US demand

Palantir Technologies raised its full-year 2026 revenue guidance to $7.182–$7.198 billion, projecting 61% year-over-year growth. The upgrade follows fourth-quarter 2025 results that showed 70% overall revenue growth, with US commercial revenue climbing over 115% to a projected $3.144 billion and adjusted operating income of $4.126–$4.142 billion. The US government segment, Palantir's traditional anchor, has maintained consistent strength across consecutive quarters.

Unintentional AI Adoption Is Already Inside Your Company. The Only Question Is Whether You Know It.

Unauthorized AI tools have become endemic in corporate environments, with nearly half of all workers admitting to using unapproved platforms like ChatGPT and Claude at work. A 2025 Gartner survey found that 69% of organizations either suspect or have confirmed that employees are using prohibited generative AI tools, while research indicates the figure reaches 98% when accounting for all unsanctioned applications. The problem spans organizational hierarchies: 93% of executives report using unauthorized AI, with 69% of C-suite members and 66% of senior vice presidents unconcerned about the practice. Gen Z employees lead adoption at 85%, and notably, 68% of workers using ChatGPT at work deliberately conceal it from employers.

Deloitte CEO Reveals <30% of Enterprise AI Pilots Scale Successfully

Deloitte's latest research on enterprise AI deployment reveals a persistent scaling crisis: companies launch AI pilots in volume but operationalize fewer than 30 percent of them. MIT's NANDA initiative, drawing from 150 interviews, a 350-person survey, and analysis of 300 public deployments, found that 95 percent of generative AI pilots fail to deliver measurable financial returns or revenue acceleration. Other studies report similar outcomes—IDC data shows an 88 percent failure rate, with only 4 of every 33 proofs-of-concept reaching production. The gap is stark: enterprises are investing $30 billion to $40 billion annually in AI initiatives, yet the vast majority yield minimal returns because pilots succeed in controlled demonstrations but collapse when deployed into real workflows.

Enterprise AI Architectures Pose Escalating Security Risks

Enterprise organizations are deploying AI systems atop legacy architectures fundamentally incompatible with autonomous workloads, creating widespread security vulnerabilities. In April 2026, cloud platform Vercel disclosed a breach in which attackers stole customer data through an architectural gap rather than a software flaw. A Vercel employee had granted full-access permissions to a third-party AI productivity tool using their corporate Google account. When that tool's systems were compromised, attackers exploited the trust relationship to access Vercel's internal environment and steal a database later listed for sale on hacker forums for $2 million. The incident illustrates how inadequate identity and access controls become dangerous when autonomous AI agents operate with excessive privileges.

StrongSuit CEO Warns of AI Automation Risks in High-Stakes Litigation

Justin McCallon, CEO of StrongSuit, published commentary on Law360 arguing that AI-driven automation will reshape legal work—but only if it clears a uniquely high bar. Unlike most industries, litigation tolerates near-zero error rates. McCallon positioned StrongSuit's platform, which automates legal research, drafting, and document review, as engineered specifically for this constraint rather than general-purpose AI capability.

Wall Street Sell-Off Divides Software Stocks into AI Winners and Losers

Wall Street triggered a sharp sell-off in software stocks last week, driven by investor fears that AI tools—particularly agentic systems and code generation—will disrupt traditional licensing models and reduce demand for seats. The market rotation hit horizontal application software hardest while rewarding companies demonstrating AI-driven revenue. Underlying the rotation is a demand for evidence that hyperscaler AI capital expenditure, exceeding $470 billion this year, translates into actual returns. Software firms are now being sorted into two categories: those adapting to enterprise AI needs and those at risk of obsolescence.

1Password CTO Nancy Wang Outlines Dual AI Strategy: Risk Mitigation and Agent Security

1Password's Chief Technology Officer Nancy Wang has outlined the company's strategy for securing AI systems within enterprise environments, focusing on the unique risks that autonomous agents pose to credential management. The approach centers on three mechanisms: deploying on-device agents to monitor and flag risky AI model usage among developers, establishing deterministic authorization frameworks for AI agents, and creating security benchmarks designed specifically for autonomous systems. 1Password is executing this strategy in partnership with Anthropic and OpenAI, and has announced integrations with developer tools including Cursor, GitHub, and Vercel.
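"Deterministic authorization" is the load-bearing idea here, and it can be illustrated with a short sketch. The class and method names below are hypothetical, not 1Password's actual API; the point is that every agent action is resolved against an explicit allow-list, so the decision is a pure function of policy with default-deny, never a model's probabilistic judgment.

```python
# Hypothetical illustration of deterministic, default-deny authorization
# for an AI agent. Names are invented for this sketch — not any vendor's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Grant:
    agent_id: str
    action: str      # e.g. "read", "write"
    resource: str    # e.g. "vault/ci-tokens"

class AgentPolicy:
    def __init__(self, grants: set[Grant]):
        self._grants = grants

    def is_allowed(self, agent_id: str, action: str, resource: str) -> bool:
        # Deterministic: identical inputs always yield the identical
        # decision, and anything not explicitly granted is denied.
        return Grant(agent_id, action, resource) in self._grants

policy = AgentPolicy({
    Grant("deploy-bot", "read", "vault/ci-tokens"),
})

policy.is_allowed("deploy-bot", "read", "vault/ci-tokens")   # allowed
policy.is_allowed("deploy-bot", "write", "vault/ci-tokens")  # denied (default-deny)
```

The contrast with the breach pattern described elsewhere in this briefing is the design choice: a full-access OAuth grant to a third-party tool is the opposite of this model, because one credential implicitly authorizes everything.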

LawSnap Briefing Updated May 11, 2026

State of play.

  • AI vendor pricing is restructuring enterprise contracts in real time. Salesforce, Workday, and OpenAI are abandoning per-seat licensing for consumption-based models tied to work output — "agentic work units," "units of work," tokens — with measurement methodologies still largely undefined across the sector (→ AI Software Firms Shift from Per-User to Work-Based Pricing Models).
  • Palantir's integrated data-plus-AI thesis is under direct competitive pressure. CEO Alex Karp is publicly attacking commodity AI outputs as "slop" while investors question whether enterprises will pay Palantir's premium over cheaper standalone LLMs — even as the company raises full-year guidance to $7.2B on 61% projected growth (→ Palantir CEO Karp slams AI "slop" amid fears of losing business to rival models, Palantir raises 2026 revenue forecast to $7.2B on strong US demand).
  • The enterprise AI architectural shift is accelerating from chatbot-first to embedded infrastructure. McKinsey, Deloitte, and Microsoft research documents that organizations redesigning core processes around persistent, governed AI — rather than bolting tools onto legacy workflows — are the ones achieving scale; Anthropic and IBM are formalizing this through context engineering and runtime governance guidance (→ When enterprise AI finally works, it won’t look like AI).
  • Shadow AI adoption is endemic and governance frameworks are lagging. A 2025 Gartner survey found 69% of organizations suspect or have confirmed unsanctioned AI use; the figure reaches 98% when counting all applications; 93% of executives report using unauthorized AI themselves (→ Unintentional AI Adoption Is Already Inside Your Company. The Only Question Is Whether You Know It.).
  • For counsel advising enterprise clients, law firms, or AI vendors, the practical baseline is: consumption-based pricing is arriving before contract terms are standardized, embedded AI infrastructure creates audit-trail and accountability structures that existing governance frameworks do not yet address, and the Palantir debate crystallizes the build-vs.-buy and vendor-lock-in questions clients will be asking in the next procurement cycle.

Active questions and open splits.

  • Embedded AI governance: audit trails, decision lineage, and human oversight. The shift from user-facing chatbots to persistent, invisible infrastructure changes every assumption in existing AI governance frameworks — who is accountable when an embedded system makes an autonomous decision, what constitutes an adequate audit trail, and whether current compliance frameworks map onto systems that operate without a visible human-in-the-loop are all open (→ When enterprise AI finally works, it won’t look like AI).
  • Consumption-based pricing: measurement and cost-cap terms. The shift from per-seat to work-output billing is moving faster than contract standards. How "agentic work units" and "units of work" are defined, audited, and capped is unresolved — and vendors are setting terms before enterprise procurement teams have frameworks to push back (→ AI Software Firms Shift from Per-User to Work-Based Pricing Models).
  • Integrated data-plus-AI vs. commodity LLM: the Palantir question. Whether compliance-first, ontology-based platforms justify premium pricing over faster, cheaper generic LLM deployments is the question clients in regulated industries will be asking procurement and outside counsel. If the premium erodes, existing Palantir contracts face renegotiation pressure; if regulators tighten AI governance, Palantir's positioning becomes a competitive advantage (→ Palantir CEO Karp slams AI "slop" amid fears of losing business to rival models, Palantir raises 2026 revenue forecast to $7.2B on strong US demand).
  • Shadow AI governance: block, monitor, or channel. The data makes blocking unrealistic — 98% penetration including C-suite. Channeling requires governance infrastructure most organizations have not built, and one-third of employees are already sharing enterprise data through unsanctioned tools. Whether deliberate misuse constitutes a compliance failure or an employment-performance issue is unsettled (→ Unintentional AI Adoption Is Already Inside Your Company. The Only Question Is Whether You Know It.).
  • Law firm billing model under AI pressure. The performance paradox — firms capturing productivity gains while leaving pricing models unchanged — is documented across Am Law 100 and small-firm cohorts alike. Whether client demands for AI-efficiency discounts will force structural fee-arrangement changes, and whether firms that raise rates without demonstrating AI value face client attrition or malpractice exposure, is the open question for firm management (→ Clio Report: 71% of Small Law Firms Use AI, But Revenue Growth Lags Larger Competitors, AI Legal Ops Study Shows 14-Hour Weekly Savings Per Lawyer).
  • Leadership accountability for AI outcomes. Microsoft's research frames AI failure as a leadership problem, not a technology problem. Whether boards and executives face fiduciary or duty-of-care exposure for AI adoption failures — particularly where governance frameworks were not embedded at pilot inception — is an emerging question without settled doctrine (→ New Microsoft study: Leaders, not workers, are responsible for successful AI integration).
  • Sector-specific agent deployment: liability allocation as AI moves autonomous. Emanate's model — AI-generated quotes initially under human review, transitioning to fully autonomous operation as client trust builds — is the pattern across industrial AI deployments. The contractual and liability questions around that transition point, and who bears responsibility when autonomous outputs are wrong, are not yet standardized (→ Emanate launches AI agents for faster industrial materials quoting).

What to watch.

  • Whether Anthropic's joint venture with Blackstone and Goldman Sachs discloses governance terms, liability allocation, and Claude deployment contracts — these will become reference points for the next wave of AI-lab/enterprise deals.
  • Whether Palantir customer churn accelerates over the next two quarters as enterprises evaluate commodity LLM alternatives — and whether any renegotiation or migration disputes surface publicly.
  • Whether consumption-based pricing disputes surface in litigation or arbitration as enterprises discover that "agentic work unit" terms were never adequately defined at contracting.
  • Whether Anthropic's or IBM's context engineering and runtime governance guidance for embedded AI systems becomes a market-standard reference point for enterprise AI governance frameworks — and whether regulators adopt or reference it.
  • Whether additional major law firms follow Mayer Brown's mandatory AI training model or Goodwin's AI-native target — and whether bar associations begin issuing competency guidance that references specific adoption thresholds.
  • Whether any organization publishes a shadow-AI governance framework that becomes peer-standard — the current vacuum remains the most immediate compliance gap across the cluster.
