Grant Gross
Senior Writer

Vendor pricing experiments leave CIOs’ AI costs in flux

Feature
Sep 1, 2025 | 6 mins
Artificial Intelligence | Budgeting | Technology Industry

Some AI providers are evolving pricing rates or models every month as competition, a race for market dominance, and high computing costs create competing pressures.

[Image: AI vendor pricing. Credit: Rob Schultz / Shutterstock]

AI vendors are experimenting with pricing rates and models, creating cost uncertainty for enterprise CIOs deploying the technology, according to industry observers.

While many AI vendors have moved to hybrid pricing models that combine subscriptions with use- or outcome-based pricing, these strategies are not set in stone, says Brian Clark, president of go-to-market initiatives at billing software vendor Chargebee.

In some cases, AI vendors are changing their pricing rates or models every few weeks, Clark says, applying the agile development philosophy to pricing through agile product monetization practices such as targeted pricing by persona and by product.

For example, Clark notes that CRM powerhouse Salesforce has offered multiple pricing models for its Agentforce AI product line.

“The top performing companies are changing pricing more than once every 30 days, and that’s agile product pricing,” he says. “Agile monetization has to be agile and flexible as the different variables change, as enterprise adoption changes, and as LLM costs change.”

In a recent survey, Chargebee found a trend toward hybrid pricing, with 43% of AI and other software vendors mixing subscription with use-based pricing, and another 8% combining subscription with outcome-based pricing. About 16% of vendors still offer subscription-only pricing, according to the survey.
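The subscription-plus-usage hybrid that most vendors in the survey have adopted reduces to a simple cost function: a fixed base fee with metered overage beyond an included allowance. The sketch below illustrates the mechanics; the rates, allowances, and plan shape are hypothetical, not any vendor’s actual prices.

```python
# Hypothetical hybrid (subscription + usage) pricing sketch.
# All rates and tiers are illustrative, not real vendor pricing.

def monthly_cost(base_fee: float, included_calls: int,
                 per_call_rate: float, calls_used: int) -> float:
    """Subscription base fee plus metered overage beyond included usage."""
    overage = max(0, calls_used - included_calls)
    return base_fee + overage * per_call_rate

# Under this sketch, 120,000 calls on a plan that includes 100,000:
cost = monthly_cost(base_fee=500.0, included_calls=100_000,
                    per_call_rate=0.002, calls_used=120_000)
# 500 + 20,000 * 0.002 = 540.0
```

The budgeting difficulty the article describes lives in the last argument: `calls_used` is the one variable the CIO cannot fix in advance, and the one the vendor can reprice between billing cycles.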

Competing market forces

AI vendors are facing conflicting pricing pressures, as strong competition and races to become category leaders butt up against efforts to generate profits, Clark says.

“Everyone starts when they want to challenge the big players,” he adds. “They start with a license, and they undercut the market. Then they realize their margins are closed, and they need to go to usage to protect the margins.”

The result is frequent price oscillations, Clark says. “If you’re a legacy player, you need to lower the price to compete,” he adds. “If you’re a new player, you’re going to come and undercut prices, but then you raise your price because you’re losing too much money.”

If all this sounds like a huge headache for CIOs, that’s because it is, some observers say. AI customers want certainty, but AI vendors are still figuring out what it costs them to deliver results, says Rebecca Wettemann, CEO of IT analyst firm Valoir.

Many AI vendors rely on third-party LLMs that charge them for every API call, making it difficult to predict their internal costs, she says. The race for market dominance is also a factor, as Clark has suggested.

“We’re seeing fluctuating pricing models as vendors try to capture more share of the nascent agentic AI market, because vendors want to capture agent share and are betting that high switching costs associated with rebuilding an agent on another vendor’s platform will make them sticky,” Wettemann says.

Traditional per-seat pricing models common in the SaaS space don’t make as much sense for AI, she adds.

“In an ideal world, AI means the number of seats goes down or organizations accomplish more with the same number of seats,” Wettemann says. “That math doesn’t work for vendors who are now selling fewer seats and paying more for LLM calls.”

Advice for CIOs

The best strategy for both AI vendors and users is predictable pricing focused on value, not seats or LLM transactions. But outcomes or value can be difficult to define and agree on. As a result, many AI vendors are moving toward more flexible wallet models that don’t force customers to commit to a usage pattern but still allow vendors to lock in some pricing predictability.

Wettemann advises CIOs to watch for unexpected pricing increases and to look for flexible spending credits that they can lock in now to take advantage of vendors’ agentic growth strategies. “Remember, the software vendors that don’t have their own LLMs have to pass on, at some point, any pricing increases from the LLM provider to their customers to break even, and those increases are likely coming,” she says.

The current pricing fluctuations are confusing and disappointing, especially when long-awaited AI models are rolled out only at higher price tiers, adds Kevin Carlson, fractional CTO, CISO, and AI strategy leader at IT leader placement firm TechCXO.

CIOs should avoid vendor lock-in, and they should strive to understand the different pricing models vendors offer, he advises. Some AI users have turned to FinOps providers to help track and manage their AI spending.

In addition, CIOs should set budget limits when employees are working with use-based AI tools because API use can drive huge, unexpected costs, Carlson says.

“Require software developers using a model for coding to be an active participant in the process and provide approval along the way,” he adds. “Giving blanket approval for a model to operate unsupervised can lead to a situation where a model continues to troubleshoot a single line of code for hours, without success, all the while charging for each identical API call.”
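Carlson’s failure mode, a model re-billing the same unsuccessful call for hours, can be caught programmatically. The minimal sketch below combines a spend cap with a duplicate-prompt circuit breaker; the class, its thresholds, and the idea of hashing prompts to detect repeats are illustrative assumptions, not any vendor’s actual tooling.

```python
# Minimal sketch of a spend cap plus duplicate-call breaker for
# usage-billed AI APIs. Names, limits, and rates are hypothetical;
# wire `check()` in front of whatever real client call you make.
import hashlib

class BudgetGuard:
    def __init__(self, max_spend: float, max_repeats: int = 3):
        self.max_spend = max_spend      # hard dollar cap per session
        self.max_repeats = max_repeats  # identical-prompt tolerance
        self.spent = 0.0
        self.seen: dict[str, int] = {}

    def check(self, prompt: str, cost_per_call: float) -> None:
        """Raise before a call that busts the budget or loops on one prompt."""
        key = hashlib.sha256(prompt.encode()).hexdigest()
        self.seen[key] = self.seen.get(key, 0) + 1
        if self.seen[key] > self.max_repeats:
            raise RuntimeError("Identical prompt repeated; require human review")
        if self.spent + cost_per_call > self.max_spend:
            raise RuntimeError("Budget limit reached; require approval")
        self.spent += cost_per_call
```

Raising an exception rather than silently skipping the call forces exactly the human checkpoint Carlson recommends: the developer must approve before spending resumes.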

Software vendor Appfire uses a variety of AI products and has generally focused on contracts to limit price variations, and it has alerts in place when employees interact with use-based AI tools, says Ed Frederici, CTO there. The company has observed recent pricing variability and experimentation from its AI vendors.

“We’re very cautious with consumption-based models,” Frederici says. “If I write an inefficient algorithm or maybe put a lot of feature sets behind AI, I can drive up my cost of goods sold. I’ve seen people make mistakes in one of the cloud environments where overnight they’ll see tens or hundreds of thousands of dollars in unexpected costs.”

Frederici recommends that AI customers use similar cost controls for use-based AI tools as they do with their cloud computing providers. A good cloud consumption strategy can serve as a blueprint for AI consumption, he says.

Over the long term, Frederici is optimistic about AI pricing, expecting competition to drive prices down. But in the near term, customer costs may fluctuate, with healthy competition on one hand conflicting with the high computing and energy costs associated with delivering AI outcomes.

“The more competition, the more you have to be differentiated in some way, but you’re all facing the same price pressure just due to the computational power needed,” he says. “Ultimately, it’s a race to the bottom on pricing for AI, but the question is, ‘How narrow can that margin get before it’s not feasible for you to deliver AI feature sets?’”

Grant Gross, a senior writer at CIO, is a long-time IT journalist who has focused on AI, enterprise technology, and tech policy. He previously served as Washington, D.C., correspondent and later senior editor at IDG News Service. Earlier in his career, he was managing editor at Linux.com and news editor at tech careers site Techies.com. As a tech policy expert, he has appeared on C-SPAN and the giant NTN24 Spanish-language cable news network. In the distant past, he worked as a reporter and editor at newspapers in Minnesota and the Dakotas. A finalist for Best Range of Work by a Single Author for both the Eddie Awards and the Neal Awards, Grant was recently recognized with an ASBPE Regional Silver award for his article “Agentic AI: Decisive, operational AI arrives in business.”