
Your Wins Are Lying To You
Good companies study their losses. Win–loss reviews, pipeline analysis, postmortems on stalled deals. The assumption is simple: losses teach you where the problems are. If

Every pricing conversation eventually lands in the same place. Charge what a buyer is willing to pay. It sounds simple. If you can figure that

Most companies assume that adding more features makes their product more compelling. On paper, that logic holds. More capability should increase value. More value should

There’s a common argument floating around about AI agents: Agents will replace workers. Fewer workers mean fewer seats. Fewer seats break per-seat pricing. It sounds logical, and

When companies lose a deal, the explanation is almost always the same. “We lost on price.” Price becomes the default explanation. It’s convenient. It’s simple.

To understand what credits really do, it helps to separate three ideas that pricing conversations often blur together. A value metric is how buyers measure

Companies think they have a pricing problem. Usually it’s a value literacy problem. Value literacy is your ability to help a buyer articulate their own

ARR had a good run. But it’s over. Annual Recurring Revenue became the dominant metric in SaaS for a simple reason. It worked. When marginal

I keep disagreeing with my friend Steven Forth about credits. I’ve learned that when I disagree with Steven, it means I don’t understand the issue

If you spend enough time talking about pricing AI, someone will eventually say this: “Per-user pricing makes no sense for AI.” Conceptually, they are right. Practically,