Impact Pricing Podcast

#390 - Disrupting Pricing with AI: Insights from Steven Forth

Steven Forth is a Partner in Ibbaka, a strategic pricing advisory firm. He was CEO of LeveragePoint Innovations Inc., a SaaS business designed to help companies create and capture value. Steven is what I consider one of the great pricing thinkers in our industry.

In this episode, Steven talks about AI and how it is impacting the world of pricing. He also shares some of the improvements we could expect from AI infrastructures in the near future.

 

Why you have to check out today’s podcast:

  • Find out how the emergence of AI improves and disrupts the pricing profession and the trade as a whole
  • Learn how to extract the best and most comprehensive solutions from AI tools
  • Get an idea of how the “big three” cloud providers might price their AI services given their current pricing models

“Many billions of dollars are being invested in AI last year, next year, this year. The overall investment is going to be probably in the neighborhood of $300 billion in 2023. So, if we’re investing that much money, we’d better get some value back. And the companies investing that money need to be able to price the value they’re creating.”

Steven Forth

Topics Covered:

02:10 – The questions that need to be answered about the impact of AI on pricing

04:06 – Examples of an existing value proposition that AI is improving

07:10 – Examples of an existing value proposition that AI is disrupting

10:44 – Is the emergence of AI a challenge to the pricing profession?

12:08 – Can OpenAI soon generate value models that are better than what experts create?

15:58 – Why and how large language models such as ChatGPT are taking over

18:06 – How to guide an AI in giving you comprehensive answers

19:34 – What the pricing of major AI infrastructures looks like

21:02 – How Amazon, Google, and Microsoft might price AI given their present pricing models

23:28 – Will the AI infrastructures of Amazon, Google, and Microsoft have different strengths?

24:55 – Differences in AIs: Would different AIs give different answers to the same questions?

26:19 – Could AI effectively learn pricing from human pricing experts?

28:02 – How AIs could start making outcome-based pricing more practical

32:03 – Connect with Steven Forth

 

Key Takeaways: 

“People who are skilled in the art [of trade] understand how to come up with pricing for disruptive innovation.” – Steven Forth

“Understanding the limitations of these large language models, which GPT is an example of, is also important. And we can come to that. But let’s not forget that the limitations today are not the limitations in six months.” – Steven Forth

“That, I think, is actually one of the emerging skills: To be able to structure a sequence of questions that will guide an AI in giving you meaningful answers.” – Steven Forth

 

People / Resources Mentioned:

Connect with Steven Forth:

Connect with Mark Stiving:   

 

Full Interview Transcript

(Note: This transcript was created with an AI transcription service. Please forgive any transcription or grammatical errors. We probably sounded better in real life.)

Steven Forth

Many billions of dollars are being invested in AI, you know, last year, next year, this year. The overall investment is going to be probably in the neighborhood of $300 billion in 2023. So, if we’re investing that much money, we’d better get some value back. And the companies investing that money need to be able to price the value they’re creating.

Mark Stiving

Today’s podcast is sponsored by Jennings Executive Search. I had a great conversation with John Jennings about the skills needed in different pricing roles. He and I think a lot alike. If you’re looking for a new pricing role, or if you’re trying to hire just the right pricing person, I strongly suggest you reach out to Jennings Executive Search. They specialize in placing pricing people. Say that three times fast.

Mark Stiving

Welcome to Impact Pricing, the podcast where we discuss pricing, value, and the artificially intelligent relationship between them. I’m Mark Stiving, and our guest today is Steven Forth. Here are three things you’d want to know about Steven before we start. And this is the last time I’m going to say three things for Steven. First, he is a partner in Ibbaka. It’s a strategic pricing advisory firm. He is one of my favorite pricing thinkers, and he graciously accepted my invitation to be a regular guest on the podcast. So, you’re probably going to hear his voice a little more often. And I love it when I disagree with Steven. So welcome, Steven.

Steven Forth

Mark, I’m excited to be here again.

Mark Stiving

It’s going to be fun. I always enjoy talking with you. You’re smart, you’re very well read. You stay calm when I disagree with you. And I always learn and tweak my own thinking after talking with you. So, thanks for agreeing to talk with us more often.

Steven Forth

Likewise. I appreciate the exchanges and the candor.

Mark Stiving

Cool. So, let’s talk about artificial intelligence today. And first off, I’m just going to open it up to you. What about artificial intelligence do you care about in pricing?

Steven Forth

I think we have to break that into a number of smaller questions. So, the first question I think we need to be concerned with is: How are AIs going to add value to existing applications? Which is probably the place to start. The second is: Are AIs going to create disruptive entrants for existing categories? The third is: Are there totally new categories that AI is going to enable? And then finally, because AI is becoming so important to so many different applications, how we price the underlying infrastructure and tools and models is going to have lots of ramifications. I think that short term, those are the questions we need to be asking. And there are many billions of dollars being invested in AI, you know, last year, next year, this year. The overall investment is going to be probably in the neighborhood of $300 billion in 2023. So, if we’re investing that much money, we’d better get some value back. And the companies investing that money need to be able to price the value they’re creating. So, I think this is going to be one of the big questions in pricing in 2023: How do we monetize, how do we price all this additional value that we’re creating? Which of course assumes that we’re creating value in the first place. But that’s, I guess, going to be part of the conversation.

Mark Stiving

I love the first one. The biggest problem that I see in AI today, I don’t see listed in your four questions. So, that’ll be fascinating. But let’s jump into the first one which says “existing applications”. And I think in the world of pricing, we tend to see AI being used nowadays to predict a customer’s willingness to pay or price point. And I think there’s some good examples where that works exceptionally well.

I’ve had people on the podcast who are trying to price VRBO-type products, right? And so, there’s tons and tons of data about what demand looks like at different points in time and what might drive demand. And so, AI just feels like a perfect solution to that type of application. Or Hertz rental cars, right? I want to know how much I’m going to charge for a rental car. There’s tons of data that talks about demand, and then we can read or gather all sorts of data sets on the weather, whether it’s a holiday, what airline fares or traffic are doing. So, there’s tons of things that we can tie in to say, “Yes, I can make these predictions on customers’ willingness to pay.”
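As a rough illustration of the kind of model Mark is describing, here is a minimal sketch of predicting a rate from historical demand drivers. It assumes scikit-learn and NumPy, the data is synthetic, and the feature names (holiday flag, inbound flight volume, temperature) are hypothetical. Note that it learns only from historical rates, which is exactly the limitation Mark raises next.

```python
# A minimal sketch of demand-based rate prediction, in the spirit of the
# rental-car / vacation-rental examples above. All data is synthetic and
# the feature names are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical demand drivers: day of week, holiday flag, inbound flight volume, temperature.
day_of_week = rng.integers(0, 7, n)
is_holiday = rng.integers(0, 2, n)
flight_volume = rng.normal(100, 20, n)
temperature = rng.normal(20, 8, n)

# Synthetic "observed" nightly rate that the historical data would contain.
rate = (
    80
    + 15 * is_holiday
    + 10 * (day_of_week >= 5)   # weekend premium
    + 0.3 * flight_volume
    + 0.5 * temperature
    + rng.normal(0, 5, n)       # noise
)

X = np.column_stack([day_of_week, is_holiday, flight_volume, temperature])
X_train, X_test, y_train, y_test = train_test_split(X, rate, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))

# Predict a rate for a holiday Saturday with heavy inbound traffic and warm weather.
print("Suggested rate:", round(model.predict([[5, 1, 140, 28]])[0], 2))
```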

Steven Forth

Yeah. And basically, that’s doing what we’ve been doing for the last 20 years better with a new technology.

Mark Stiving

Yeah, I agree completely. The thing that bugs me about every one of those examples is it’s based on looking at historical demand and it’s not based on value to the customer.

Steven Forth

Yeah, and I’m going to take that. That’s true. And I think that pricing, the dynamic pricing engines are highly questionable longer term.

By the way, one of my favorite questions to ask the dynamic pricing folks is “Do you price your own software using dynamic pricing?” But I was actually asking, I think, a slightly larger question, Mark, which is, AI is being used to improve the value of many different types of applications, not just pricing applications.

Mark Stiving

To make sure we’re on the same page, though, was that your point four, as in how are we going to price AI?

Steven Forth

No. I think that my point is that there’s also a whole whack of infrastructure that is used to support the AIs. So, my first one is let’s just improve existing applications. Could be a pricing application, but it could be a CRM. And in that case, I think it’s not a difficult pricing problem. We figure out how much additional value the AI creates and it doesn’t add any new value drivers. All it does is improve the existing value drivers. I think that’s actually, you know, we know how to deal with that. We know how to price that. We just need to get better at it.

Mark Stiving

Right. If we know how to price anything, we know how to price that. But most people don’t understand how to price anything.

Steven Forth

Yes, but I think that’s a separate problem, though.

The second group – and let’s give concrete examples of each one of these – what you just described, the pricing software solutions, that’s a great example of an existing value proposition that machine learning is improving. The second one is an existing value proposition that machine learning is disrupting. So here is an example: we could use DALL·E. So, DALL·E 2, which is an image generation application, one of several, where you enter prompts and it will come up with images for you. And then you can have it do multiple images so you can adjust them, you can edit them and so on. This has the potential to disrupt both people who make a living doing images – so, graphic artists – and also people who have large catalogs of content. And I forget how much DALL·E charges per image, but I think it’s around $0.20. Now, you probably have to generate about 100 images and go through a bunch of different iterations. But still, 100 times $0.20 is still not very much money.

Mark Stiving

My math says $20, but go ahead.

Steven Forth

Yeah, and I pity a graphic artist who only gets paid $20 an image.

Mark Stiving

And if I wanted to grab an image off of a web page to use on my material, my content, it’s more than 20 bucks.

Steven Forth

That’s right. I don’t know why you would do that. Why don’t you just use DALL·E?

Mark Stiving

Guess I would have to start.

Steven Forth

So, this is highly disruptive, right? Of the existing graphic design and the existing image catalog people. It must be very annoying for the existing image catalog people, because in many cases, their image catalogs were part of the input into the model that is now disrupting them. And this is going to lead to no end of lawsuits. But the cat is out of the bag. I mean, this stuff exists, it’s being used. I think there’s more than a billion images being generated by these systems, not just DALL·E. There are others as well. This is today. We’re not talking about three years from now. We’re talking about what’s already happened.

So, that’s a disruptive example. And you could argue, some people have argued, that ChatGPT, which is another OpenAI solution, could disrupt a great deal of Wikipedia and even Google search. One of the use cases for Google search is I want to find out about something. ChatGPT is often a better way to do that than a Google search. And ChatGPT today is free, just as Google search is. However, you and I can go and use the GPT-3.5 model that ChatGPT is based on. So, you can license that model and you can build your own applications inside your website. And now we’re getting to that fourth category: how do you charge for the models? Which is another question. But then the third category is stuff that we have never thought of: What are the whole new categories of things that AI is going to make possible?
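To make that concrete, here is a minimal sketch of what building on a licensed GPT-3.5 model can look like, assuming the openai Python SDK (v1+ client) and an API key in the environment; the prompts are purely illustrative. Calls like this are billed per token, which is the consumption-style pricing that comes up later in the conversation.

```python
# A minimal sketch of calling a licensed GPT-3.5 model from your own
# application. Assumes the openai Python SDK (v1+ client) and an
# OPENAI_API_KEY in the environment. The prompt is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a concise research assistant."},
        {"role": "user", "content": "Summarize what value-based pricing means."},
    ],
)

print(response.choices[0].message.content)
```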

Mark Stiving

So, before we go to new categories, let’s talk about disruption for a second. Is there even a challenge in pricing this as disruption? Because it seems to me that this is the same as new technologies displacing other technologies. And I’ve lived through this a lot.

Steven Forth

I think people skilled in the trade understand how to price disruptive technologies. Basically, they have new value drivers, often for new target, new market segments. So again, we know how to do that. I don’t think it’s a challenge to the pricing profession because people who are skilled in the art understand how to come up with pricing for disruptive innovation.

Mark Stiving

I just have to tell you, while we were talking, I typed in “Who is Steven Forth?” It says, “I’m sorry, but I don’t have any information on a person named Steven Forth.” So, then I said, “Well, who’s Mark Stiving?” And it said the exact same thing. “I don’t have any information on a person named Mark Stiving.”

Steven Forth

And I couldn’t even find Tom Nagel.

Mark Stiving

Oh, really?

Steven Forth

Well, it finds the philosopher, not the pricing guru.

You know, understanding the limitations of these large language models, which GPT is an example of, is also important. And we can come to that. But let’s not forget that the limitations today are not the limitations in six months.

Mark Stiving

Absolutely. This is going to continue to get better and better. There is no doubt. And I hadn’t thought of using ChatGPT personally as a search engine when I want information. I’m going to just start doing that.

Steven Forth

Well, it’s interesting. It’s one of many uses for ChatGPT. But for this first class, using AI to improve existing applications, we know how to price that. That’s not really a challenge. The second one, the disruptive one, is a bit more difficult. But again, we basically know how to do that.

But here’s a thought for you, Mark. Ibbaka makes part of its money by designing value models and pricing models. Could one of these systems design value models? And could it design pricing models? Now, before we answer that, let’s remember that GPT itself has been used to debug code and even to generate code. So, at some point, if OpenAI or one of its competitors is able to scrape enough value models, I am confident that they will be able to generate value models that are better than what 95% of value model experts are able to create.

Mark Stiving

Okay. I’m going to disagree with that. Now, 100 years from now. You’re right. There’s no doubt. The question is, five years from now. I don’t think there’s any chance. And I think the reason I say that is I think there are very few people in the world that can create a value model today. I mean, I talked to so many people who don’t understand the value of their product. It’s incredible. And in order for AI to learn, or a machine learning to learn, it’s got to have some type of accurate data that says “This is truth”. And yet, most companies aren’t using what you and I would consider the truth. So, it’s just going to go learn bad stuff.

Steven Forth

Well, perfect. Right now, if you just let your large language model building stuff loose on open data sources, it would not learn how to build a value model. It probably would not even learn how to build a good pricing model. But let’s just imagine that Ibbaka and LeveragePoint and DecisionLink and a bunch of the major consulting companies, maybe Simon-Kucher and Deloitte and a few others, said “We are willing to pool all of the value models we have created.” I know there are legal reasons why we can’t do that, but let’s just have a thought experiment. I suspect that there is a critical mass of value models in the world that one of these systems could process and then be able to generate value models. So, it’s not a technology problem. It’s a data privacy and data access problem. And those are going to get solved in most areas over the next decade, I believe.

Now, fortunately, you and I have lots of gray hair. And we make a living talking as well as doing value models, so maybe it’s not too much of a concern for us. But you know, at some point, I am confident, and maybe it’s the version of ChatGPT that’s based on GPT-7 as opposed to this year, when they’re going to come out with GPT-4, but at some point, AIs will be able to generate reasonably good value models and reasonably good pricing models.

Mark Stiving                        

It will be fun and interesting to watch, no doubt. In my mind, it’s a lot like when you think about, “Are computers going to take over the world?” or “Are we going to have the singularity?” And there are just so many things that we don’t know about how we think and what we do to be able to actually say, “Yes, that’s really going to happen.” And so, when I think about value models, I personally have a hard time extracting the value from a buyer. To understand how they think. And the idea that a computer is going to be able to do that on a data set just feels really hard to me.

Steven Forth

Yeah, but it was probably really hard for us to imagine some of the things that ChatGPT can do. ChatGPT has been out since November 30th, I think. And, you know, I had conversations about it in the dog park. I had conversations with it over Christmas dinner. My granddaughter and I are using ChatGPT to write part of the dialog for a novel that we’re coauthoring.

The world changed, you know, and I know that these large language models have been around for probably a decade, and they’ve been getting better and better. But I think they’ve reached a tipping point. And there are two parts to that tipping point. One is just the sophistication of the models themselves. I think ChatGPT has around 3.5 billion parameters. NVIDIA has a model that I think has more than 5 billion, close to 6 billion parameters. So, they’ve reached a critical mass. And the brilliant thing about ChatGPT is the interface. If most of us had to interact with a large language model directly, we would quickly go, “Oh, I don’t have time to do this. This is something for an engineer to do.” But the ChatGPT interface is as simple as it gets. It’s as simple as the Google search interface.

Mark Stiving

I got to admit, it’s really powerful. There is no doubt. I typed in “What is value-based pricing?” And by the way, if you don’t have a dash between value and based, it gets confused. But when you put the dash there, then it knows what it’s thinking about. Let me read the first of three paragraphs to you.

It says, “Value-based pricing is a pricing strategy that’s based on the perceived value that a customer places on a product or service. This means that the price of a product or service is determined by how much value the customer believes it will provide them rather than by the cost of producing the product or market rate for similar products.” I could have written that paragraph.

Steven Forth

Not bad, right?

Mark Stiving

Yeah, that was amazing. They didn’t say anything that’s not true in that paragraph.

Steven Forth

Yeah. And when you talk about pricing, unfortunately, it gets weaker rapidly. But the thing with using ChatGPT or any of these systems is not to ask one question. It’s to ask a structured sequence of questions. And you know, there’s one guy who actually got ChatGPT to invent a new language. He was able to do that because, clearly, the guy knows far more about linguistics than I ever aspire to. So, he could ask this really intelligent sequence of questions. And that, I think, is actually one of the emerging skills: to be able to structure a sequence of questions that will guide an AI in giving you meaningful answers.
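As a sketch of what that “structured sequence of questions” can look like in practice, here is a minimal prompt chain in which each answer is fed back as context for the next, more specific question. It assumes the openai Python SDK (v1+ client), and the pricing-flavored prompts are hypothetical.

```python
# A minimal sketch of the "structured sequence of questions" idea: each
# answer is appended to the conversation as context for the next, more
# specific, question. Assumes the openai Python SDK (v1+ client).
from openai import OpenAI

client = OpenAI()

questions = [
    "List the main value drivers a B2B SaaS buyer might care about.",
    "For each driver you listed, suggest how it could be quantified.",
    "Turn those quantified drivers into a draft value model, one line per driver.",
]

messages = [{"role": "system", "content": "You are a pricing analyst."}]
for q in questions:
    messages.append({"role": "user", "content": q})
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # carry the answer forward
    print(f"Q: {q}\nA: {answer}\n")
```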

Mark Stiving

That’s cool.

We’re going to have to jump. But instead of jumping into new categories, can we talk about pricing of the A.I. infrastructure? Because we might run out of time and that one seems much more interesting and difficult.

Steven Forth

Well, unfortunately, I don’t know that it’s difficult, but it’s worth looking at what is happening. The big infrastructure providers are a finite group: Amazon, Microsoft Azure, Google, Salesforce, IBM Watson. There’s a relatively short list of companies that provide major AI infrastructure. And the big legacy players, the Amazons and, you know, the Googles, tend to price on some form of consumption. So, it’s a very old-fashioned approach to pricing. So, I think one of the really interesting things that we can watch over the next 2 to 3 years is how does that pricing evolve. How does Amazon differentiate its pricing from Microsoft? From Meta? Are the new entrants like OpenAI going to price differently? Because right now, the pricing is very conventional and it has nothing to do with the value that’s being provided.

Mark Stiving

Let’s take a step back from A.I. for just a second, and let’s just talk about AWS or Azure.

These guys do essentially consumption-based, usage-based pricing. Or you might even think it’s probably cost-plus pricing in a lot of ways. And different applications running on AWS have very different value propositions. Some are probably squeaking by or losing money and some are probably making millions of dollars a second. And yet, they price in what we would say is not value-based pricing, it’s just consumption-based. And I think this is a problem for every infrastructure-type product until you take a step back and say, “Instead of selling infrastructure, I’m going to sell you a solution.” And so, I don’t see how that’s going to be different with AI, is it?
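A small worked example of Mark’s point: under consumption-based pricing, two customers with identical usage get identical bills even if the value they capture differs by orders of magnitude. The rates and numbers below are hypothetical.

```python
# A worked example of consumption pricing ignoring value. The rate and
# customer figures are hypothetical.
RATE_PER_COMPUTE_HOUR = 0.10  # hypothetical infrastructure rate

customers = {
    "barely_profitable_app": {"compute_hours": 50_000, "value_created": 8_000},
    "wildly_profitable_app": {"compute_hours": 50_000, "value_created": 5_000_000},
}

for name, c in customers.items():
    bill = c["compute_hours"] * RATE_PER_COMPUTE_HOUR
    capture = bill / c["value_created"]
    print(f"{name}: bill=${bill:,.0f}, value=${c['value_created']:,}, "
          f"provider captures {capture:.1%} of the value created")
```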

Steven Forth

I think that’s one of the great open questions that we should, you know, come back in six months and a year and two years and keep on asking. It’s certainly part of my research over the next few years: to look in detail at the differences between how, say, Amazon, Google, and Microsoft, to pick the big three, are going to be pricing this stuff. And to see if that changes, because this is going to be a very important input into many other solutions. Very few companies have the skill or the money or the desire to build out these large language models. You know, Ibbaka and, you know, your company, we are never going to invest $500 million in building a large language model for pricing. I’m pretty sure.

Mark Stiving

I’m putting my next hundred million into an airplane. Just thought I’d point that out.

Steven Forth

Okay. Well, you could put it into a large language model that could tell you how to build the airplane.

Mark Stiving

There we go.

Steven Forth

But anyway, Mark, I think you’re asking the right question. I don’t think the answer is clear yet, but I think it’s something that we need to track carefully and understand the differences as well between how Microsoft versus Amazon versus Google versus the other companies decide to price these infrastructure services, and also decide how to price models, because, you know, models are also rapidly becoming part of the infrastructure.

Mark Stiving

So, do you think there will be different AI strengths? If you think of these different companies, will these different companies have different strengths in AI, or have they all learned everything?

Steven Forth

No. I think there will be differentiation. And if you compare AWS to Azure, they’re priced differently and they’re packaged differently. And at a high level, they look the same. But when you get down into the details, they’re not, in terms of their packaging and pricing. And the same thing is going to be true for AIs. Look at NVIDIA. NVIDIA – which is one of the leaders in AI – who would have guessed that a graphics chip company would end up being a dominant player in AI? Only someone smarter than me. But they have one of the world’s most powerful large language models. It’s bigger than GPT, although I know what OpenAI’s answer to that is: “It’s not just the size of the model, it’s how well it’s tuned.” So, there’s not a linear relationship between the size of the model and the power of the model.

But yeah, the tunings are going to be different by different companies. Different companies are going to tune these models in different ways and some are going to be better at some things than others. And that’s great because that’s an opportunity for differentiation. And with differentiation, comes the opportunity for value-based pricing.

Mark Stiving

Absolutely. In my mind – and I’m not an expert in these areas, I dabble in them – but I think someone can take an application running on AWS and move it to Azure. What I’m curious about is could I take a model that’s running on ChatGPT, or ask a question to ChatGPT, and then ask the same question to NVIDIA and get dramatically different answers because they have very different – I used the word strengths, and I’m not sure that that’s the right word – but data sets or capabilities? There’s something there that says, “Hey, this is very different than that.”

Steven Forth

Isn’t it going to be fun to see? I suspect the answer to your question is that for common knowledge, whatever that is, you know, you’ll get the same answer. But the more precise the question is, or the more complex the question is, the more you’ll start to get rapidly diverging answers. And even within a single model, because remember, these models are probabilistic. They’re Bayesian networks, so there’s probability involved. You don’t get the same answer twice. You notice in ChatGPT, you have that little button, right? “Ask me again” or whatever it says, “ask again.”

Mark Stiving

Regenerate response.

Steven Forth

Oh, regenerate response. You know, you’ll get a different response because there’s probability at work.
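A minimal sketch of the point about probability: with a sampling temperature above zero, repeated calls to the same model with the same prompt can return different answers, which is what the Regenerate response button exploits. It assumes the openai Python SDK (v1+ client); the prompt is illustrative.

```python
# A minimal sketch of why "Regenerate response" gives different answers:
# with temperature > 0 the model samples from a probability distribution
# over tokens, so repeated calls can diverge; temperature=0 is close to
# deterministic. Assumes the openai Python SDK (v1+ client).
from openai import OpenAI

client = OpenAI()
prompt = [{"role": "user", "content": "Give a one-sentence definition of value-based pricing."}]

for temperature in (0.0, 1.0):
    print(f"--- temperature={temperature} ---")
    for _ in range(2):
        out = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=prompt,
            temperature=temperature,
        )
        print(out.choices[0].message.content)
```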

Mark Stiving

Interesting. And so, I could imagine ChatGPT hires you or me, and our job is to read pricing content and say whether it’s accurate or not. So, this is real, this is not real. And so then, ChatGPT becomes a pricing expert because they’ve got that knowledge as opposed to just taking our pricing knowledge that’s out there, half of which is total BS.

Steven Forth

It’s an interesting question. The problem is that those sorts of human-trained systems are not necessarily all that scalable. And these companies are looking for massively scalable solutions.

Mark Stiving

Yeah, but some place we have to find the truth.

Steven Forth

Yeah, I agree. You know, an interesting story here is the difference between AlphaGo and MuZero. AlphaGo is the initial AI developed by DeepMind that was able to play Go better than any human. And the way AlphaGo was trained, AlphaGo was trained on games by Go experts. They imported tens of thousands of Go games played by humans, looked at the winning strategies that humans used, turned those into patterns, and improved on them. That’s not what MuZero did, though. So, MuZero, which is a successor to AlphaGo, trained against itself. And “mu” means nothing in Japanese. So, it’s the Buddhist concept of emptiness. And the fact is that MuZero learned faster and is a more powerful system than AlphaGo, by competing with itself.

Mark Stiving

The difference is, in a game like Go, you know the outcome. There was a win, or there was a loss. But in the world of pricing, I don’t know if you won or if you lost. I don’t know if you made a stupid decision or not a stupid decision. You could have a great product and made a stupid pricing decision, and yet you’re still a successful company.

Steven Forth

Yeah. And that though, comes to the other question we haven’t asked ourselves yet, which is how will all of this impact outcome-based pricing? And in the previous podcast, you were having a conversation with – I can’t remember his name now – but the guy is coming up with a new pricing software platform. And he was saying that his dream is to get to outcome-based pricing. And you were saying that the problem with that is that outcomes have many different causal factors. And that’s true. However, these AI platforms, especially the ones that are based around Judea Pearl’s ideas, are actually pretty good at teasing those causal chains apart.

And, you know, I’m not talking about something in the future. This is already done in health care. In health care, in health economics and outcomes research, or HEOR, we’re already pretty good at teasing apart the different causal strands. Those technologies, together with that approach, together with new artificial intelligences, are actually going to make outcome-based pricing practical. And I think that, again, this could happen much faster than we expect.
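To make the outcome-based idea concrete, here is a minimal sketch of the fee mechanics once an upstream attribution analysis, HEOR-style or otherwise, has estimated how much of the measured outcome the product actually drove. The function name, rates, and numbers are all hypothetical.

```python
# A minimal sketch of outcome-based pricing mechanics, assuming some upstream
# causal/attribution model has already estimated what share of the measured
# outcome the product drove. All numbers are hypothetical.
def outcome_based_fee(measured_uplift: float,
                      attributed_share: float,
                      provider_take: float = 0.20,
                      base_fee: float = 10_000.0) -> float:
    """Base fee plus a percentage of the uplift attributed to the product."""
    attributed_uplift = measured_uplift * attributed_share
    return base_fee + provider_take * attributed_uplift

# Customer saw $2M in incremental outcome; the attribution model credits 35%
# of it to the product; the provider takes 20% of that attributed value.
print(f"${outcome_based_fee(2_000_000, 0.35):,.0f}")  # -> $150,000
```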

Mark Stiving

It would be interesting to see. By the way, his name was Nikhil Kotcharlakota. And I messed it up when I said it in the podcast as well. So sorry, Nikhil.

So, I think that if you trusted the model, you could say, “Yes, it did that”, right? So, let’s talk about your health care example. I am not an expert in this field either, but I would bet that it’s statistically saying this is what drove that outcome.

Steven Forth

The interaction of these things.

Mark Stiving

Fine, but it was statistically driven, statistically proven. And you could take any one individual and you could say, “Odds are really good that this is what drove that outcome.” But it doesn’t mean it was what drove that outcome. And so, I think from an n = many perspective, you’re absolutely right. From an n = 1 perspective, it’s really hard to say “This is the answer to that problem”.

Steven Forth

Yeah. And now I think we’re starting to drift off into a deep philosophical conversation that we probably don’t need to have right now around the nature of causality, and whether it is inherently probabilistic. I have to say I resisted these statistical models of language for many years. I’ve been involved in language models since I was about 22. My first investment was actually into a company that was building a language model many decades ago, and for many years I was distressed by the fact that probabilistic models work better than any other kind of model. But over the last few years, I’ve just decided, “You know what? That ship has sailed. I lost that argument.” You know, people like Hinton have definitely demonstrated that these probabilistic models are more powerful than other types of models, like rules-based models. So, let’s just get with the program and move on.

Mark Stiving

And the great thing about probabilistic models is they’re more accurate more often than non-probabilistic models. It doesn’t mean they’re right. It means they’re more accurate more often.

Steven Forth

Yeah. But that’s true of you and me, too.

Mark Stiving

Wait, wait. We’re not always right.

Steven Forth

Don’t tell anyone.

Mark Stiving

Steven, we’re going to have to wrap this up. I’ve loved it. Great conversation. We’re going to do this again very soon. And thanks for the time. If anybody wants to contact you, how do you want them to do that?

Steven Forth

Best way is by email, [email protected] and I am very easy to find on LinkedIn.

Mark Stiving

Alright, and thank you all for listening. Would you please leave us a rating and a review? And if you want, just send me an email telling me how much you like or dislike Steven and I won’t tell him it was from you. That’d be fun.

And if you have any questions or comments about this podcast or pricing in general, feel free to email me at [email protected]. Now go make an impact.

Mark Stiving

Thanks again to Jennings Executive Search for sponsoring our podcast. If you’re looking to hire someone in pricing, I suggest you contact someone who knows pricing people. Contact Jennings Executive Search.

 

Tags: Accelerate Your Subscription Business, ask a pricing expert, pricing metrics, pricing strategy
