Updated 2026-05-13
Current through May 13, 2026

MSA-AI: How AI Changed Your MSA

By Adam David Long



At a Glance

What changed: Vendors are adding AI provisions through incorporated documents — AUPs, DPAs, AI addenda — not through MSA redlines. The MSA looks identical to last year. The terms governing AI features may not.
Who this affects: Any company using a SaaS vendor that has added AI features or an AI addendum since your last renewal. That is now most enterprise SaaS vendors.
The new document stack: MSA → Order Form → DPA → AUP → AI Services Addendum (often added mid-term, incorporated by reference, updatable without notice)
Top 3 AI-specific risks:
  1. AI outputs carved out of IP indemnification — the work your team publishes may have no coverage
  2. Compliance burden shift — you are the EU AI Act "deployer" for a model you cannot audit
  3. Training data clause — "improve the Services" may mean your data is training the vendor's model right now
What the standard MSA doesn't cover: AI output quality in the SLA; AI-specific regulatory compliance obligations; model change notice; indemnification for AI-generated content your team modifies before use
The renewal trap: AI addenda added mid-term carry forward at renewal. If you missed the AI addendum opt-out window, renewal locks in those terms for another year.
Where to start: Check whether your vendor's AI addendum is opt-in or opt-out for training data. Submit the opt-out request if it exists. Confirm receipt in writing. Do this before the next renewal.
Tags: msa · ai-provisions


How AI Changed Your MSA

The MSA you signed is a framework contract -- it sets the terms that govern everything that goes wrong across the life of a vendor relationship. Most in-house counsel have reviewed enough of them to know where the risk lives. AI changed that.

Vendors are not modifying the MSA itself to add AI provisions. They are updating the documents the MSA incorporates by reference: the Acceptable Use Policy, the Data Processing Addendum, a new AI Services Addendum. This means the MSA you negotiated last year may look identical today -- while the terms governing the AI features you are now actively using have been materially updated without your knowledge or consent.

This page maps the eight places where standard MSA patterns become significantly more dangerous when AI enters the picture.

If you're looking at a specific vendor agreement right now, start with the MSA Review Checklist. If you want to understand the full pattern behind any watchpoint, each one links to a dedicated explainer.



The AI Provisions Changing Your MSA

Every major SaaS vendor has added or is adding AI-specific terms to their agreements. Most are not modifying the MSA itself -- they're updating incorporated documents (the Acceptable Use Policy, a new AI Addendum, or the Data Processing Addendum). This means you can renew an MSA that looks identical to last year's and inherit AI terms you've never negotiated.

Three patterns from the 37-pattern library are firing at accelerated rates in AI-specific provisions.

1. Template Contamination: "Standard" AI Terms That Aren't

The pattern: Bad templates propagate at scale. When a new contract type emerges (like an AI Services Addendum), there's no established market standard. Vendors draft terms optimized for their position and present them as "our standard AI addendum." Because nobody has seen enough of these to know what's normal, the template goes unchallenged.

How it shows up: Your company receives an AI Addendum from a vendor. It's the first one you've seen from this vendor. You can't benchmark it because you haven't seen enough AI addenda from other vendors to know what "standard" looks like -- and if your company is also a vendor that has rolled out its own AI features, your customers are in the same position reviewing your addendum. The vendor's sales team says "everyone signs this." Maybe they do. That doesn't make it balanced.

What to watch for:

  • Data usage clauses that grant the vendor rights to use customer data for model training, improvement, or benchmarking
  • Output ownership language that's ambiguous about who owns AI-generated deliverables
  • Broad "AI-generated content" disclaimers that may exclude the core product functionality from warranty coverage

The move: Ask the vendor for a redline showing what changed from their pre-AI terms. If the AI Addendum is new, ask which provisions are standard across their customer base and which are negotiable. Collect AI addenda from multiple vendors -- cross-vendor comparison is the fastest way to identify outliers.

2. Verification Impossibility: Warranties You Can't Check

The pattern: The vendor warrants something that neither party can practically verify. In 100% of cataloged cases, this pattern appears alongside the Illusory Protection pattern — when you can't verify the warranty, the remedy is structurally unreachable.

How it shows up in AI provisions:

"Vendor warrants that the AI Features will produce outputs that are commercially reasonable and materially consistent with the Documentation."

What does "commercially reasonable" mean for a probabilistic system? The model's outputs change with every update. The documentation describes capabilities at a point in time. The vendor can't warrant accuracy because the system is inherently non-deterministic. You can't verify the warranty because you can't see inside the model.

What to watch for:

  • Accuracy warranties on AI outputs (what's the benchmark? who measures?)
  • "Consistent with documentation" when the documentation is a marketing webpage that changes quarterly
  • Bias or fairness representations without defined metrics or testing protocols

The move: Replace vague AI warranties with measurable commitments: defined accuracy thresholds on specific use cases, with a testing protocol both parties agree to, and a remedy that triggers automatically when the threshold isn't met. If the vendor won't commit to measurables, the warranty is decorative.

3. Compliance Burden Shift: Their Black Box, Your Liability

The pattern: The regulatory compliance burden falls on the party that doesn't control the product. In AI provisions, this is the most commercially dangerous pattern because AI regulation is expanding faster than contract cycles.

How it shows up:

"Customer is responsible for ensuring that Customer's use of the AI Features complies with all applicable laws and regulations, including without limitation laws governing automated decision-making, data protection, and artificial intelligence."

The vendor built the black box. The vendor trained the model. The vendor chose the training data. But you're liable for the outputs. If the AI feature produces a biased hiring recommendation, a hallucinated legal citation, or a privacy violation -- that's your problem, not the vendor's.

What makes this urgent: The EU AI Act and a growing wave of state legislation across the U.S. are creating affirmative compliance obligations for "deployers" of AI systems — the companies that use AI tools to make or support consequential decisions. That's your company when you're the customer. The compliance burden shift clause means you bear compliance risk for a system you can't audit, can't modify, and may not fully understand. And if you're also a vendor — rolling out AI features and sending your own AI addendum to customers — you're on the other side of this same clause: your addendum shifts that burden to your customers. For current status of state AI deployment laws, see the LawSnap State AI Legislation Tracker (coming soon).

What to watch for:

  • Unilateral compliance obligations on the customer for AI-specific regulations
  • Absence of vendor cooperation obligations (will the vendor provide documentation needed for your compliance assessment?)
  • No commitment to algorithmic impact assessments or bias testing
  • Indemnification exclusions for "AI-generated outputs" -- check whether the core product functionality now falls into this exclusion

The move: Add vendor cooperation obligations: the vendor must provide technical documentation sufficient for the customer's regulatory compliance assessment. Require notice of material model changes. Negotiate shared responsibility for AI-specific regulatory compliance -- the vendor controls the system, so pure customer-side liability is not commercially reasonable regardless of what the template says.

The Compound Risk

These three patterns don't operate in isolation. In practice, they compound:

  1. Template Contamination means the AI Addendum arrives as a take-it-or-leave-it document
  2. Verification Impossibility means you can't check whether the AI warranty is being met
  3. Compliance Burden Shift means when something goes wrong, it's your problem

The result: your company accepts AI terms it can't benchmark, can't verify, and can't defend against when regulators come calling. And if you're also a vendor sending AI addenda to your own customers, your counterparts are working through the same analysis in reverse. This is why the AI provisions are the most important section to negotiate in any MSA renewal happening right now -- not because they're the largest financial exposure, but because they're the least understood.
