If you run customer operations for a mid-to-large enterprise, you have spent years buying software on a per-seat model. You pay a monthly or annual fee per agent license. The vendor's revenue grows when you hire more agents. Your contract renews based on headcount. The whole model assumes that humans are the unit of delivery.
AI-native customer service vendors charge differently. They charge per resolution — per successfully completed interaction. No seat. No license. You pay when the AI solves a customer's problem, and you don't pay when it doesn't.
This pricing shift sounds simple. It isn't. It changes your budget model, your risk allocation, your vendor relationship, and your definition of "success" in ways that operations leaders need to understand before they're sitting across from a vendor in a contract negotiation. This post is a practical walk-through of how per-resolution pricing works, where the traps are, and how to think about it if you're considering an AI customer service deployment.
What "resolution" actually means
The first thing to nail down in any per-resolution contract is the definition of a resolved interaction. This is where most of the commercial risk lives, and vendors with weak products will try to define it broadly.
A resolution is not: a conversation that ended. A resolution is not: a customer who didn't immediately complain again. A resolution is not: a ticket that was closed.
A resolution, properly defined, is: a customer interaction in which the customer's stated problem was fully addressed, without requiring escalation to a human agent, and without the customer re-contacting about the same issue within a defined window (typically 24-72 hours).
That last clause — the re-contact window — is the most important. It's what prevents a vendor from gaming the metric by closing every conversation and calling it resolved. If a customer's problem wasn't actually solved, they call back. If the contract doesn't include a re-contact window, the vendor gets paid regardless.
Push for these specifics:
- Resolution definition in writing, with the re-contact window explicitly stated
- Exclusions for escalations: any conversation that required human escalation should not count as a billable resolution
- Partial resolution handling: if a customer contacts about three issues and two are resolved, what's billable?
- Audit rights: you should be able to sample resolved conversations independently to verify they meet the definition
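The contract definition above is concrete enough to express as a filter you could run against exported conversation data during an audit. This is a minimal sketch: the record fields (`escalated_to_human`, `issue_id`, `closed_at`, `opened_at`) and the 48-hour window are illustrative assumptions, not any vendor's actual export schema.

```python
from datetime import datetime, timedelta

# Assumed contract term: re-contact window from the 24-72 hour range above.
RECONTACT_WINDOW = timedelta(hours=48)

def is_billable(conversation: dict, later_contacts: list[dict]) -> bool:
    """Apply the contract definition of a billable resolution:
    no human escalation, and no re-contact about the same issue
    inside the re-contact window."""
    # Escalation exclusion: escalated conversations never bill.
    if conversation["escalated_to_human"]:
        return False
    window_end = conversation["closed_at"] + RECONTACT_WINDOW
    # Re-contact check: same issue, reopened inside the window.
    for contact in later_contacts:
        same_issue = contact["issue_id"] == conversation["issue_id"]
        if same_issue and contact["opened_at"] <= window_end:
            return False
    return True
```

Sampling resolved conversations through a check like this is how you exercise the audit rights clause in practice: the vendor's billed count and your independently computed count should match.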
The budget model shift
Per-seat pricing is predictable. You know your headcount, you know your per-seat price, you can forecast your annual spend to within a few percent.
Per-resolution pricing is variable. Your spend scales with volume. In a bad month — a product recall, a billing error, a service outage — your contact volume spikes and your AI vendor bill spikes with it. In a quiet month, your bill drops.
This variability is not inherently bad. It means your AI costs track your business reality rather than your headcount decisions. But it requires a different budgeting approach:
Volume forecasting becomes a finance input. If you can forecast contact volume reasonably accurately, you can model your AI spend with similar accuracy. Build your per-resolution pricing into your volume model, not your headcount model.
Set a volume cap in your contract. Most AI customer service vendors will accept a monthly volume cap — a ceiling above which the per-resolution rate steps down, or above which you have the right to route new contacts to a human queue instead of the AI. This protects you from a spike event turning into an invoice surprise.
Negotiate tiered pricing. Per-resolution price should drop as volume increases. Push for explicit tiers: 0-5,000 resolutions at rate A, 5,001-20,000 at rate B, 20,001+ at rate C. This is standard practice in other consumption-based software categories.
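When you model tiered pricing, pin down whether the tiers are graduated (each band billed at its own rate, like income tax brackets) or whole-volume (one rate applied to everything once you cross a threshold); the invoices differ meaningfully. A sketch of the graduated reading, with placeholder rates standing in for rate A, B, and C — the dollar amounts are illustrative, not market benchmarks:

```python
# Tier boundaries from the text; rates are placeholder assumptions.
TIERS = [
    (5_000, 2.00),         # 0-5,000 resolutions at rate A
    (20_000, 1.50),        # 5,001-20,000 at rate B
    (float("inf"), 1.00),  # 20,001+ at rate C
]

def monthly_bill(resolutions: int) -> float:
    """Graduated tiering: each band of volume bills at that band's rate."""
    bill, prev_cap = 0.0, 0
    for cap, rate in TIERS:
        band = min(resolutions, cap) - prev_cap
        if band <= 0:
            break
        bill += band * rate
        prev_cap = cap
    return bill

# 25,000 resolutions: 5,000 at $2 + 15,000 at $1.50 + 5,000 at $1 = $37,500
```

Under the whole-volume reading, the same 25,000 resolutions would bill at rate C throughout, which creates cliff effects near the thresholds — worth ruling out explicitly in the contract language.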
"The right mental model for per-resolution pricing isn't 'am I paying less per ticket than I pay my agents?' It's 'am I paying for outcomes, and are those outcomes actually happening?' Those are different questions."
The automation rate question
Per-resolution pricing only works if the AI actually resolves a meaningful percentage of your incoming volume. If the AI escalates constantly, handles little beyond triage and routing, and barely touches the actual conversation work, you're paying per-resolution prices for a small slice of contacts while your human team still handles most of the volume.
Before you commit to a per-resolution model, you need honest answers about expected automation rate — the percentage of contacts the AI will handle end-to-end without human escalation.
Vendors will quote automation rates from their best-performing deployments. These are almost never representative of what you'll see in your first six months. A realistic ramp looks like:
- Months 1-3: Training and integration. The AI is learning your policies, your product, your customer base. Automation rates of 30-50% are realistic for a well-scoped initial deployment.
- Months 4-8: Improvement loop. The AI is learning from escalations, from customer satisfaction signals, from your team's feedback. Automation rates typically reach 60-75% for standard contact types.
- Months 9-18: Steady state. For a mature deployment with well-defined resolution criteria, 80%+ is achievable for the contact types you've explicitly trained for.
These ranges vary enormously by industry and contact type. Simple, rule-based interactions — "what's my order status?", "what are your opening hours?", "how do I reset my password?" — automate at very high rates quickly. Complex, policy-heavy interactions — "I want to cancel and get a refund for three months of service" — take longer and have lower ultimate automation rates.
Structure your contract so the per-resolution fees scale appropriately with the actual automation rate you're achieving, and include mutual review points (every quarter is reasonable) where you can renegotiate the rate or the scope based on actual performance.
Where vendors try to hide costs
Per-resolution pricing is often presented as the only cost. It isn't. Watch for these:
Implementation fees. Some vendors charge significant upfront implementation fees — $50,000 to $200,000 or more for enterprise deployments — to cover API integration, knowledge base ingestion, and initial training. This is legitimate work that costs real money, but it needs to be in your total cost of ownership model. Push back on implementation fees that aren't capped, and make sure they're tied to defined deliverables.
Knowledge base maintenance fees. Your AI needs up-to-date information about your products, policies, and procedures to give accurate answers. Keeping that knowledge base current is work — either the vendor charges for it, or you own the process. Understand which model you're buying. If you're owning it, budget internal time accordingly.
Human escalation handling. When the AI can't resolve an interaction, it escalates. What happens at escalation? Some vendors charge a lower per-escalation fee; some include escalations in the resolution fee (which creates an incentive to over-escalate); some hand off to your human team with no additional charge. The clean model is escalations included at zero cost — the vendor's incentive should be to resolve, not to escalate.
Training data fees. As your deployment matures, improving automation rates requires retraining the model on new interaction patterns. Some vendors charge for retraining cycles; others include them. Know what you're getting.
Integration fees for new channels. If you launch on WhatsApp today and want to add voice next year, you'll probably pay for the integration. Get a sense of what channel expansions cost before you're in the middle of a roadmap conversation.
Aligning incentives: what good per-resolution pricing should do
The core promise of per-resolution pricing is incentive alignment: the vendor only makes money when the AI actually helps your customers. This is true, but the alignment is only as strong as your resolution definition.
A vendor with a loose resolution definition — one that counts every closed conversation as resolved — has misaligned incentives. They make money on interactions that didn't actually help anyone.
A vendor with a tight resolution definition — one that requires confirmed customer satisfaction and no re-contact within 48 hours — has genuinely aligned incentives. They make money when your customers are actually helped. This is what you're looking for.
The tell is how the vendor reacts when you push on the re-contact window and the escalation exclusion. A confident vendor with a strong product will accept tight definitions. A vendor with a weaker product will resist them, offer soft language about "reasonable resolution," and try to keep the definition fuzzy. Take the reaction as signal about the underlying product quality.
Building the business case
If you're trying to get budget approval for an AI-native customer service deployment, here's the framework that works:
Baseline your current cost per contact. Add up your fully loaded agent cost (salary, benefits, management, training, attrition) and divide by your annual contact volume. This is your current cost per contact. Be honest — most operations leaders underestimate this number when they only count wages.
Model the AI cost per contact. Take your expected contact volume, multiply by your expected automation rate (use a conservative 60% for year-one budget modeling) to get your resolution volume, then multiply by the per-resolution price. The remainder of your contacts still go to human agents — keep those costs in the model.
Add implementation and maintenance costs. Spread these over 24 months for an apples-to-apples comparison.
Calculate the net savings. For most enterprises with volume above 50,000 contacts per month, the math works. The higher your current cost per contact (BPO-heavy operations are often $12-20 per contact), the better the economics.
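The four steps above reduce to a few lines of arithmetic. This is a worked sketch of the framework, and every input is a placeholder assumption chosen to sit inside the ranges mentioned in this post — substitute your own baseline and quoted rates:

```python
# --- Inputs (all illustrative assumptions) ---
contacts_per_month = 60_000        # above the 50,000/month threshold
cost_per_contact = 8.00            # fully loaded baseline (step 1)
automation_rate = 0.60             # conservative year-one rate (step 2)
price_per_resolution = 1.50        # quoted per-resolution rate
implementation_fee = 120_000       # one-off, spread over 24 months (step 3)

# --- Model (steps 2-4) ---
resolutions = contacts_per_month * automation_rate
ai_fees = resolutions * price_per_resolution
residual_human = (contacts_per_month - resolutions) * cost_per_contact
monthly_ai_total = ai_fees + residual_human + implementation_fee / 24

baseline = contacts_per_month * cost_per_contact
net_monthly_savings = baseline - monthly_ai_total

print(f"net monthly savings: ${net_monthly_savings:,.0f}")
```

Note how much of the AI-scenario cost is the residual human queue: at a 60% automation rate, the 40% of contacts still handled by agents dominates the per-resolution fees, which is why the automation-rate question earlier in this post matters more to the business case than the headline per-resolution price.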
Don't stop at cost savings. The second-order benefit is quality consistency. AI doesn't have bad days. It doesn't give one customer a different answer than it gave the previous customer. For contact centers struggling with agent variance, consistency has measurable business value that doesn't show up in cost per contact.
A note on risk allocation
The last thing worth understanding about per-resolution pricing: it doesn't eliminate your risk. It changes where the risk sits.
With per-seat pricing, your risk is headcount: if contact volume spikes, you're understaffed. With per-resolution pricing, your risk is product quality: if the AI performs poorly, you pay for contacts that didn't get resolved, and your human team still handles the overflow. The risk you're buying with AI is not "will someone be available?" but "will the AI actually work?"
This is why pilot terms matter. Any serious AI customer service vendor should offer a pilot — 60 to 90 days, defined scope, measurable outcomes — before you commit to an enterprise contract. Use the pilot to validate the automation rate, test the resolution definition, and check that the re-contact window math actually works in your business context.
If the vendor won't offer a real pilot with real stakes, that tells you something.
We're happy to talk through per-resolution pricing in the context of a MENA enterprise deployment — the numbers look different from US SaaS norms. Reach us at hello@orbitcx.ai.