AI Enterprise Adoption

Tracking how enterprises in law, finance, tech, and regulated industries are restructuring around AI: hiring, capital deployment, workforce friction, and the new operating models replacing legacy ones.

21 entries in In-House Counsel Tracker

Palantir CEO Karp slams AI "slop" amid fears of losing business to rival models

Palantir CEO Alex Karp has publicly attacked low-quality AI outputs as "slop," positioning the company's AI Platform (AIP) as a secure, enterprise-grade alternative built on its Foundry data infrastructure. The criticism comes as Palantir faces investor concerns that it may lose market share to cheaper, faster standalone large language models from OpenAI and Anthropic—competitors that don't require Palantir's ontology-based data backbone.

Culture is where AI strategy goes to die. Here’s how to jump-start an AI-ready culture in 90 days

A 90-day cultural transformation framework has emerged as an alternative to mass workforce replacement during AI adoption, directly responding to IgniteTech CEO Eric Vaughan's controversial 2025 decision to terminate approximately 80% of his staff after employees resisted AI tools despite substantial training investment. Organizational researchers and business leaders have synthesized a three-phase approach—Diagnose, Rewire, Embed—designed to build AI-ready cultures without layoffs. The framework rests on a core finding: cultural misalignment, not technological incapacity, drives AI transformation failures. Writer's 2025 enterprise AI adoption report documents that nearly one-third of employees actively sabotage AI rollouts, with resistance particularly pronounced among technical staff and Gen Z workers (41% report active sabotage).

Unintentional AI Adoption Is Already Inside Your Company. The Only Question Is Whether You Know It.

Unauthorized AI tools have become endemic in corporate environments, with nearly half of all workers admitting to using unapproved platforms like ChatGPT and Claude at work. A 2025 Gartner survey found that 69% of organizations either suspect or have confirmed that employees are using prohibited generative AI tools, while research indicates the figure reaches 98% when accounting for all unsanctioned applications. The problem spans organizational hierarchies: 93% of executives report using unauthorized AI, with 69% of C-suite members and 66% of senior vice presidents unconcerned about the practice. Gen Z employees lead adoption at 85%, and notably, 68% of workers using ChatGPT at work deliberately conceal it from employers.

AI Software Firms Shift from Per-User to Work-Based Pricing Models

Major AI software vendors are abandoning per-seat licensing in favor of consumption-based pricing tied to work output. Salesforce now charges for "agentic work units," while Workday bills based on "units of work" completed. OpenAI CEO Sam Altman has signaled the industry will shift toward "selling tokens"—the computational units underlying AI processing—positioning artificial intelligence as a utility priced like electricity or water.

Ex-Tesla HR Exec Advises Class of 2026 on Thriving Amid AI Job Disruption

A former Tesla HR executive who scaled the automaker's workforce to 100,000 delivered a commencement address to California State University, San Bernardino's Class of 2026 outlining a five-point strategy for competing in an AI-disrupted labor market. Valerie, who previously led talent acquisition at Handshake, urged graduates to view degrees as "navigational foundations" rather than job guarantees, to partner strategically with AI tools rather than resist them, to emphasize emotional intelligence over automatable tasks, to prioritize in-person networking, and to adopt "back-casting"—working backward from 12-month career goals to identify necessary moves. The speech directly counters narratives that higher education has become obsolete, instead positioning human judgment and contextual empathy as enduring competitive advantages.

Law Firms Urged to Educate Staff on AI Amid Client Pressures

Law firms are hemorrhaging money on artificial intelligence tools they don't understand and won't use, according to analysis published May 4, 2026, in Above the Law and Tech Law Crossroads. Firms facing client pressure to deploy AI are panic-buying software without first establishing internal competency—resulting in wasted spending, abandoned platforms, and disappointed clients. The core problem: decision-makers lack basic literacy on how AI actually works, what it can and cannot do, and which tools fit specific practice needs. The recommended fix is straightforward: mandatory education on AI fundamentals for lawyers, firm leadership, and business development staff before any vendor selection or client pitch.

Clio Report: 71% of Small Law Firms Use AI, But Revenue Growth Lags Larger Competitors

Clio's 2026 Legal Trends report exposes a widening performance gap between small law firms and their larger competitors despite widespread AI adoption. While 71% of solo practitioners and 75% of small firms now use AI tools, fewer than 33% have increased revenues—a sharp contrast to enterprise firms where nearly 60% report revenue growth tied to AI implementation.

Workhuman launches AI tool Future Leaders to predict promotions 3-5 years ahead

Workhuman unveiled its Future Leaders AI tool on April 28, 2026, designed to identify high-potential employees for senior leadership roles three to five years before promotion. The tool analyzes patterns from large leadership datasets to surface overlooked talent and reverse-engineer promotion signals such as "strategic trust," in which employees are given valued responsibilities that indicate future advancement. Back-testing on 2020 data showed approximately 80% accuracy in predicting subsequent promotions. CEO Eric Mosley announced the product at Workhuman's annual conference in Orlando, Florida, emphasizing its role as a complement to human judgment rather than a replacement.

Artisan's "Fire Steve, Hire Ava" NYC subway ad sparks AI backlash

Artisan, an AI sales software company, launched a subway advertisement campaign in New York City that directly pits human workers against artificial intelligence. The ad features "Steve," a human employee texting "not coming in today sry," alongside "Ava," an AI agent claiming to book 12 meetings and research 1,269 prospects. The tagline reads: "Fire Steve. Hire Ava." The advertisement appeared May 7, 2026, and quickly went viral on social media, drawing sharp criticism for explicitly promoting human replacement. CEO and co-founder Jaspar Carmichael-Jack defended the campaign in a blog post titled "Stop hiring humans," arguing that Artisan's agents target repetitive, low-level sales tasks unsuitable for human workers and should free people from drudgery.

Emanate launches AI agents for faster industrial materials quoting

Emanate, a San Francisco startup led by CEO Kiara Nirghin, has built AI agents designed to accelerate sales cycles in industrial materials—steel, aluminum, wire, pipe, and manufactured components. The platform automates quote generation, compressing timelines from 3-4 weeks to near-instant responses by connecting to customer ERP systems, historical sales data, emails, and PDFs. Implementation requires 8-12 weeks per customer to identify data sources and establish secure integrations, with ongoing refinement afterward. The company measures success on client revenue growth targets of 40% or higher, not merely cost reduction.

SimplePractice CLO Uses AI Exercise to Combat Employee Resistance

Ali Hartley, Chief Legal Officer at SimplePractice, ran a 30-minute team exercise where employees used AI tools to design a cafe menu. The exercise was designed to shift her team's perception of AI from skepticism and fear to viewing it as a creative tool for innovation. The team included people with varying technical backgrounds—former software developers alongside employees with no prior ChatGPT experience.

AI Disrupts Law Firm Billable Hour Model, Boosting Efficiency

Legal AI tools are reshaping law firm economics. Document review, drafting, and research are now 60–70% faster, with individual attorneys expected to save 190–240 billable hours annually. Thomson Reuters' 2025 Future of Professionals Report quantifies this as $20–32 billion in time savings across the U.S. market. Major clients—Meta, Zscaler, UBS—are already demanding "AI discounts" and refusing to pay for work automatable by machine. The pressure is immediate and client-driven.

New Microsoft study: Leaders, not workers, are responsible for successful AI integration

Microsoft's Work Trends Index, based on surveys of 20,000 AI users across 10 countries and trillions of anonymized productivity signals, found that organizational factors—culture, manager support, and strategic alignment—have twice the impact of individual employee factors on successful AI integration. The research shows 58% of AI users are producing work they couldn't create a year ago, but that figure rises to 80% in organizations that have redesigned their operating models around AI.

Anthropic Forms $1.5B Joint Venture With Blackstone, Goldman, HF To Sell AI Services

Anthropic is launching a $1.5 billion joint venture with Blackstone, Hellman & Friedman, and Goldman Sachs to build an AI consulting and implementation firm targeting enterprise clients. The four founding partners are each committing capital—Anthropic, Blackstone, and Hellman & Friedman at roughly $300 million each, with Goldman Sachs contributing approximately $150 million—while a consortium of major asset managers including Apollo Global Management, General Atlantic, Leonard Green, GIC, and Sequoia Capital provides the remainder. The unnamed venture will embed Anthropic's Claude AI models directly into portfolio companies, develop standardized transformation playbooks, and integrate AI agents into existing business workflows.

Falcon Rappaport & Berkman Opens Newark AI-Native Law Office

Falcon Rappaport & Berkman has opened a dedicated Newark office at 3 Gateway Center designed as an AI-native incubator for the firm. The office will develop agentic AI tools to enhance client and attorney services across all practice areas, operating as the operational hub for the firm's artificial intelligence capabilities.

Fast Company op-ed blames corporate culture for AI rollout failures

Tanya Moore, Chief People Officer at West Monroe, argues that enterprise AI adoption is failing at scale despite massive investment. Despite $37 billion spent on AI in 2025, most deployments stall, showing low adoption rates, flat productivity gains, and absent returns—not because the technology doesn't work, but because companies treat AI as an IT implementation rather than a workforce transformation. The core problem: organizations automate broken processes instead of redesigning them, rely on one-time training without building internal champions, and skip the continuous learning cultures that enable experimentation.

Wall Street Sell-Off Divides Software Stocks into AI Winners and Losers

Wall Street triggered a sharp sell-off in software stocks last week, driven by investor fears that AI tools—particularly agentic systems and code generation—will disrupt traditional licensing models and reduce demand for seats. The rotation hit horizontal application software hardest while rewarding companies that can demonstrate AI-driven revenue. Investors now want evidence that hyperscaler AI capital expenditure, exceeding $470 billion this year, translates into actual returns. Software firms are being sorted into two categories: those adapting to enterprise AI needs and those at risk of obsolescence.

When enterprise AI finally works, it won’t look like AI

Enterprise organizations are abandoning the chatbot-first approach that dominated 2024-2025 in favor of embedded AI systems designed directly into operational workflows. Rather than prompt-based interfaces layered onto existing processes, leading companies—including those studied by McKinsey, Deloitte, and Microsoft—are fundamentally redesigning business operations around persistent, governed AI infrastructure. This represents a shift from "tools you use" to "systems your company becomes," where intelligence operates invisibly within core workflows instead of as a visible user-facing application. Anthropic and IBM are formalizing this architectural approach through guidance on context engineering and runtime governance, prioritizing auditability and constraint management over raw model capability.

Palantir raises 2026 revenue forecast to $7.2B on strong US demand

Palantir Technologies raised its full-year 2026 revenue guidance to $7.182–$7.198 billion, projecting 61% year-over-year growth. The upgrade follows fourth-quarter 2025 results that showed 70% overall revenue growth, with US commercial revenue climbing over 115% to a projected $3.144 billion and adjusted operating income of $4.126–$4.142 billion. The US government segment, Palantir's traditional anchor, has maintained consistent strength across consecutive quarters.

StrongSuit CEO Warns of AI Automation Risks in High-Stakes Litigation

Justin McCallon, CEO of StrongSuit, published commentary on Law360 arguing that AI-driven automation will reshape legal work—but only if it clears a uniquely high bar. Unlike most industries, litigation tolerates near-zero error rates. McCallon positioned StrongSuit's platform, which automates legal research, drafting, and document review, as engineered specifically for this constraint rather than general-purpose AI capability.

LawSnap Briefing Updated May 11, 2026

State of play.

  • AI vendor pricing is restructuring enterprise contracts in real time. Salesforce, Workday, and OpenAI are abandoning per-seat licensing for consumption-based models tied to work output — "agentic work units," "units of work," tokens — with measurement methodologies still largely undefined across the sector (→ AI Software Firms Shift from Per-User to Work-Based Pricing Models).
  • Palantir's integrated data-plus-AI thesis is under direct competitive pressure. CEO Alex Karp is publicly attacking commodity AI outputs as "slop" while investors question whether enterprises will pay Palantir's premium over cheaper standalone LLMs — even as the company raises full-year guidance to $7.2B on 61% projected growth (→ Palantir CEO Karp slams AI "slop" amid fears of losing business to rival models, Palantir raises 2026 revenue forecast to $7.2B on strong US demand).
  • The enterprise AI architectural shift is accelerating from chatbot-first to embedded infrastructure. McKinsey, Deloitte, and Microsoft research documents that organizations redesigning core processes around persistent, governed AI — rather than bolting tools onto legacy workflows — are the ones achieving scale; Anthropic and IBM are formalizing this through context engineering and runtime governance guidance (→ When enterprise AI finally works, it won’t look like AI).
  • Shadow AI adoption is endemic and governance frameworks are lagging. A 2025 Gartner survey found 69% of organizations suspect or have confirmed unsanctioned AI use; the figure reaches 98% when counting all applications; 93% of executives report using unauthorized AI themselves (→ Unintentional AI Adoption Is Already Inside Your Company. The Only Question Is Whether You Know It.).
  • For counsel advising enterprise clients, law firms, or AI vendors, the practical baseline: consumption-based pricing is arriving before contract terms are standardized; embedded AI infrastructure creates audit-trail and accountability structures that existing governance frameworks do not yet address; and the Palantir debate crystallizes the build-vs.-buy and vendor-lock-in questions clients will be asking in the next procurement cycle.


Active questions and open splits.

  • Embedded AI governance: audit trails, decision lineage, and human oversight. The shift from user-facing chatbots to persistent, invisible infrastructure changes every assumption in existing AI governance frameworks — who is accountable when an embedded system makes an autonomous decision, what constitutes an adequate audit trail, and whether current compliance frameworks map onto systems that operate without a visible human-in-the-loop are all open questions (→ When enterprise AI finally works, it won’t look like AI).
  • Consumption-based pricing: measurement and cost-cap terms. The shift from per-seat to work-output billing is moving faster than contract standards. How "agentic work units" and "units of work" are defined, audited, and capped is unresolved — and vendors are setting terms before enterprise procurement teams have frameworks to push back (→ AI Software Firms Shift from Per-User to Work-Based Pricing Models).
  • Integrated data-plus-AI vs. commodity LLM: the Palantir question. Whether compliance-first, ontology-based platforms justify premium pricing over faster, cheaper generic LLM deployments is the question clients in regulated industries will be asking procurement and outside counsel. If the premium erodes, existing Palantir contracts face renegotiation pressure; if regulators tighten AI governance, Palantir's positioning becomes a competitive advantage (→ Palantir CEO Karp slams AI "slop" amid fears of losing business to rival models, Palantir raises 2026 revenue forecast to $7.2B on strong US demand).
  • Shadow AI governance: block, monitor, or channel. The data makes blocking unrealistic — 98% penetration including C-suite. Channeling requires governance infrastructure most organizations have not built, and one-third of employees are already sharing enterprise data through unsanctioned tools. Whether deliberate misuse constitutes a compliance failure or an employment-performance issue is unsettled (→ Unintentional AI Adoption Is Already Inside Your Company. The Only Question Is Whether You Know It.).
  • Law firm billing model under AI pressure. The performance paradox — firms capturing productivity gains while leaving pricing models unchanged — is documented across Am Law 100 and small-firm cohorts alike. Whether client demands for AI-efficiency discounts will force structural fee-arrangement changes, and whether firms that raise rates without demonstrating AI value face client attrition or malpractice exposure, is the open question for firm management (→ Clio Report: 71% of Small Law Firms Use AI, But Revenue Growth Lags Larger Competitors, AI Legal Ops Study Shows 14-Hour Weekly Savings Per Lawyer).
  • Leadership accountability for AI outcomes. Microsoft's research frames AI failure as a leadership problem, not a technology problem. Whether boards and executives face fiduciary or duty-of-care exposure for AI adoption failures — particularly where governance frameworks were not embedded at pilot inception — is an emerging question without settled doctrine (→ New Microsoft study: Leaders, not workers, are responsible for successful AI integration).
  • Sector-specific agent deployment: liability allocation as AI moves autonomous. Emanate's model — AI-generated quotes initially under human review, transitioning to fully autonomous operation as client trust builds — is the pattern across industrial AI deployments. The contractual and liability questions around that transition point, and who bears responsibility when autonomous outputs are wrong, are not yet standardized (→ Emanate launches AI agents for faster industrial materials quoting).

What to watch.

  • Whether Anthropic's joint venture with Blackstone and Goldman Sachs discloses governance terms, liability allocation, and Claude deployment contracts — these will become reference points for the next wave of AI-lab/enterprise deals.
  • Whether Palantir customer churn accelerates over the next two quarters as enterprises evaluate commodity LLM alternatives — and whether any renegotiation or migration disputes surface publicly.
  • Whether consumption-based pricing disputes surface in litigation or arbitration as enterprises discover that "agentic work units" were never adequately defined at contracting.
  • Whether Anthropic's or IBM's context engineering and runtime governance guidance for embedded AI systems becomes a market-standard reference point for enterprise AI governance frameworks — and whether regulators adopt or reference it.
  • Whether additional major law firms follow Mayer Brown's mandatory AI training model or Goodwin's AI-native target — and whether bar associations begin issuing competency guidance that references specific adoption thresholds.
  • Whether any organization publishes a shadow-AI governance framework that becomes peer-standard — the current vacuum remains the most immediate compliance gap across the cluster.

Subscribe to AI Enterprise Adoption email updates

Primary sources. No fluff. Straight to your inbox.
