Impact Pricing Podcast

#768: Synthetic Data in Pricing: Trust It, Test It, or Ignore It? with Steven Forth

Steven Forth is a pricing strategist and AI innovator with decades of experience building value-based pricing models. 

As the founder of Value IQ, he blends rigorous pricing theory with emerging AI applications—often pushing the boundaries of how pricing professionals think about data, modeling, and buyer behavior.

In this episode, Mark and Steven step into another live debate, aka an ‘intellectual challenge’, about AI-generated synthetic data, with real pushback, not polite agreement.

They challenge whether synthetic data is a breakthrough for pricing or just smarter-looking “fake data” that distances us from buyers.

What unfolds is an unscripted stress test of the idea itself, and it ends with a surprisingly human conclusion you should definitely listen to.

Why you have to check out today’s podcast:

  • What synthetic data actually is—and how it differs from simply “making up numbers.”
  • Where synthetic data becomes dangerous, especially when assumptions about buyer behavior go untested.
  • Why even the most advanced AI modeling cannot replace direct conversations with buyers.

————————————————————————————————————————

Private invitation to access ValueIQ (for Impact Pricing listeners)
Use ValueIQ to quickly analyze pricing pages, spot positioning gaps, and pressure-test pricing decisions.
Activate access here.

————————————————————————————————————————

Go out and talk to buyers and understand their buying process.

– Steven Forth

Topics Covered:

00:00 – Why synthetic data is suddenly a pricing topic. Steven introduces Value IQ and the idea behind AI-generated pricing intelligence. The setup: why synthetic data is gaining attention—and why Mark is skeptical from the start.

03:45 – What is synthetic data (without the buzzwords)? A plain-language definition of synthetic data and how it differs from CRM or ERP history. Why backward-looking data limits pricing strategy.

06:30 – The “fake data” objection. Mark challenges the idea head-on: Isn’t this just inventing numbers? A sharp exchange on statistical misuse, p-values, and the danger of generating data that simply confirms what you want to see.

09:30 – Interpolation vs. extrapolation in pricing models. Why most pricing data isn’t normally distributed. Discussion of fat tails, clustering, segmentation signals, and what synthetic data might distort—or reveal.

12:30 – The three types of synthetic data. Steven outlines three practical applications. (1) AI-generated buyer simulations. (2) Stress-testing value and pricing models. (3) Modeling competitive and economic scenarios. This is where the conversation moves from theory to use cases.

16:30 – Can AI predict buyer behavior? Mark pushes the core issue: pricing changes behavior. So how can synthetic data anticipate it? A discussion about assumptions, validation, and ground truth.

20:00 – A practical example: AI-driven Van Westendorp studies. A concrete scenario: simulate 100 real buyers, test pricing sensitivity, validate with actual survey data, and refine the model. A tangible way to experiment responsibly.

23:30 – The risk: Are we moving further from real buyers? The philosophical tension of the episode. Does synthetic data create insight—or another buffer between pricing teams and customers?

26:30 – The surprisingly human conclusion. After 25 minutes of AI debate, Steven’s final advice is simple and grounded: talk to buyers and understand their buying process.

29:00 – Closing thoughts and where to connect. How to reach Steven and Mark—and a final reminder that AI is a tool, not a substitute for customer insight.

Key Takeaways:

“Synthetic data is data that is generated for you by your AI.” – Steven Forth

“With synthetic data, you can explore scenarios that do not yet exist or parts of the market you do not yet touch.” – Steven Forth

Resources and People Mentioned:

  • Craig Zawada – Former McKinsey partner, co-creator of the pocket price waterfall; now Chief Strategy Officer at PROS 
  • Benoit Mandelbrot – Referenced in the discussion about fat-tailed distributions and why pricing data is often not normally distributed.
  • Pocket Price Waterfall – A pricing analytics framework originally developed at McKinsey.
  • Van Westendorp Price Sensitivity Meter – Used as a practical example of how synthetic data could simulate buyer responses.
  • Conjoint Analysis – Discussed as a potential future application for synthetic respondents.
  • Bayesian Updating / Bayesian Statistics – Mentioned as a way to iteratively improve models by aligning synthetic data with real-world results.
  • Interpolation vs. Extrapolation – Statistical concepts debated in the context of synthetic modeling.
  • Normal vs. Fat-Tailed Distributions – Discussion on why pricing data often violates normal distribution assumptions.

Connect with Steven Forth:

Connect with Mark Stiving:

 

Full Interview Transcript:

(Note: This transcript was created with an AI transcription service. Please forgive any transcription or grammatical errors. We probably sounded better in real life.)

Steven Forth

Go out and talk to buyers and understand their buying process. 

[Intro]

Advertisement

Today’s podcast is sponsored by Jennings Executive Search. I had a great conversation with Jon Jennings about the skills needed in different pricing roles. He and I think a lot alike. If you’re looking for a new pricing role, or if you’re trying to hire just the right pricing person, I strongly suggest you reach out to Jennings Executive Search. They specialize in placing pricing people. Say that three times fast.

Mark Stiving

Welcome to Impact Pricing, the podcast where we discuss pricing, value, and the synthetic relationship between them. I’m Mark Stiving. I run bootcamps to help companies get paid more.

Our guest today, once again, is the brilliant Steven Forth.

You don’t need to know anything about Steven except that Value IQ has made a really generous offer to listeners of Impact Pricing.

So Steven, I’m gonna let you describe that if you don’t mind. 

Steven Forth

Happy to, Mark, before we dive in. 

So Value IQ is a value intelligence platform comprised of a number of different agents.

One of the agents that we have taken to market initially is a pricing intelligence agent, and what it does is it’ll take a pricing model and do a pretty sophisticated analysis of what’s good about it, what’s bad about it. 

It applies the Mansard 14-factor analysis. It creates a pricing SWOT (strengths, weaknesses, opportunities, and threats) and compares the pricing to the competitive alternatives.

So the way this works is it’s priced using credits, something I’m sure we will discuss again on this podcast this year. Every month you get 200 free credits, unless you sign up through Impact Pricing, in which case you get 300 free credits every month.

A standard analysis costs 30 credits. So if you sign up through Impact Pricing, you could do 10 of those each month at no charge with no credit card. A full analysis costs 120 credits.

So you could get two of those each month and have some credits left over to do some standard analyses or whatever else you might wanna do with them.

So we’re doing this in appreciation of the role that Mark plays in the pricing community and the importance of the Impact Pricing Podcast. 

And we’re delighted to partner with Impact Pricing on this.

Mark Stiving

Alright, thank you, Steven. And I have to say, you don’t know this at all, Steven, but someone has used Value IQ. I’ve never used it myself ’cause I don’t have a reason to, right?

But someone who used your Value IQ sent me the report so that I could see it. And of course I don’t know his business that well and so I couldn’t really evaluate how good it was.

Then I put it into my GPT, the one that knows me really well and thinks, thinks the way I do. 

And I said, tell me what do you think of this? 

And it actually really liked it. Of course it had some comments here and there, but in general it thought it was really well done. That’s a very detailed report and nicely done. So.

Steven Forth

The thing that intrigued me, and just one last thought: we sent this to Craig Zawada, who many people who listen to this podcast will know.

He’s one of the people who came up with the classic ‘pocket price waterfall’ when he was at McKinsey. He’s now the Chief Strategy Officer at PROS.

And what Craig did, much to our surprise, is he ran it through the pricing page for a guide company up in the mountains where he lives.

I hadn’t contemplated that anyone would use this to analyze the pricing of a guide company that takes people deep into the mountains.

But Craig liked it enough that he actually sent the report to the owner of the guide company. 

So we’re not saying that this is a supported use case, because it really is designed for B2B SaaS and for agentic AI companies.

But you know, Craig, who knows more about pricing than the average bear, thought highly enough of the report to send it to the owner of a guide company, which is kind of interesting.

Mark Stiving

Yeah, that says a lot. That says a lot. So, um, like all AI, my advice would be get it, read it, and then interpret it yourself because sometimes AI is not perfect, but boy, I’ll tell you what, it was very insightful, very interesting. 

Steven Forth

Yeah. One of the things we’re trying to train it to do is not to recommend actions. It’s hard, though, ’cause the AIs seem to spontaneously want to recommend actions. So we’re training it not to do that.

It’s really just a report on, this is how well your pricing works. It’s up to you to figure out what you should do about the report. 

Or you can call Mark and have him tell you what you can do. 

Mark Stiving

So, okay, let’s jump into the topic. Today we are gonna talk about synthetic data, and speaking of telling people what to do, one of your forecasts for the year, Steven, was that synthetic data was gonna become, I don’t remember how you quantified it, but much more popular, much more useful in the world of pricing.

And you said, ‘Hey, let’s do a podcast on synthetic data.’

I’ve spent almost zero time thinking about synthetic data, so over the last week, my chat and I have had many, many conversations about it so that I could form an intelligent opinion and thought process.

So, we’re about to dive deep into synthetic data. 

Now I’m gonna make a general comment before we do synthetic data. I think you could generally say Steven is overly optimistic and I am overly pessimistic about almost every one of these topics.

So we’re just gonna set the table with that. Now, Steven, I’m gonna, I’m gonna force you to go slowly if you don’t mind. Because I want everybody to get the concepts that we’re talking about and where they’re used and why they’re used. 

So let’s start with this. Can you really simply define synthetic data and why I care? 

Steven Forth

Yeah. So let’s start with what synthetic data is. Generally speaking, synthetic data is data that is generated for you by your AI.

So rather than taking your CRM data or your ERP data, or your CPQ data, or whatever data sources you normally use to analyze your pricing, it creates a, let’s call it fake, data set, a data set that is deeper and richer than any data you are able to get from the real world.

And all of us who have been involved in pricing, I’m sure, have been frustrated at one time or another, either by the lack of critical data when we are trying to evaluate and design pricing or by just how dirty and inconsistent data can be, and synthetic data offers us a path forward.

The other problem with most of the data that we use in pricing is that it’s firmly backward looking. Trying to design your pricing just using the data available from your current systems, whether it’s your usage data, your pipeline data, or your pricing analytics data, tends to keep you firmly backward looking.

And with synthetic data, you can do things like explore scenarios that do not yet exist or parts of the market you do not yet touch. 

So it allows you to really expand out what you’re doing and how you’re thinking about pricing and come up with more robust future proof pricing models. 

Mark Stiving

Okay? You have no idea how much I dislike this, Steven.

So let’s start with the following. I just wanna throw out a scenario that jumps to mind that I hate. Mm-hmm. So I’m a statistician, I’m running an experiment. My p-values don’t come out well enough.

And so I go to my AI and I say, I need some more data. Make me more data that looks like this. 

And so they run normal distributions around all the different data things that I’ve got.

And now I just, you know, 10x the amount of data, I run the exact same report, and my p-values now look great.
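To make Mark’s objection concrete, here is a minimal sketch (in Python, with made-up numbers that are not from the episode) of how “make me more data that looks like this” inflates statistical confidence without adding any real information:

```python
# Sketch of the misuse: mimicking existing data 10x shrinks the p-value
# even though nothing new has been learned about buyers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two small groups (e.g., willingness to pay at two price points) with a
# modest true difference; the honest test is usually inconclusive.
group_a = rng.normal(100, 15, size=25)
group_b = rng.normal(104, 15, size=25)
print("real-data p-value:", stats.ttest_ind(group_a, group_b).pvalue)

def fake_10x(x):
    """'Synthetic' augmentation that just resamples the data with noise."""
    resampled = rng.choice(x, size=10 * len(x), replace=True)
    return resampled + rng.normal(0, 1, size=resampled.size)

print("p-value after 10x fake data:",
      stats.ttest_ind(fake_10x(group_a), fake_10x(group_b)).pvalue)
# The second p-value is typically far smaller, purely because the sample
# size was inflated, not because any new information about buyers exists.
```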

Steven Forth

But I think you did two things wrong there, right? Well, actually, a number of things wrong there.

Mark Stiving

Please. 

Steven Forth

The first is you asked for more data that looks like this, and that’s the wrong way to go about generating synthetic data.

There was another thing, but I lost it in my mental shuffle. 

Mark Stiving

That’s okay. Let’s go with that one. 

Steven Forth

But like any tool, synthetic data generation can be used poorly and it can be used to reinforce what you’re currently thinking. 

Actually, I think generative AIs have a bias towards doing that. You know, somehow they wanna be liked.

Mark Stiving

They do. 

Steven Forth

And so if they can sense what you’re looking for, they’ll try to give it to you. So you need to put protection in against that. 

But that becomes part of the skill of the practitioner, is understanding how to go about generating synthetic data and how to go about testing it against the real world. 

So, yeah. You can use any tool badly.

Mark Stiving

By the way. I agree with that statement completely. I’m trying to get to where I can buy into what we can do with this. Right?

By the way, I have an idea or thought on how we can use it, but I don’t wanna give that to you yet. So before we go there, I wanna, you know, go back to statistics for a second.

There’s a huge difference between interpolation and extrapolation. 

Steven Forth

Oh, I’m sorry, I remembered the other thing I wanted to comment on. It’s normal distributions.

One of the biggest mistakes many of us make is assuming that the distributions we work with are normal.

Mark Stiving

Fair.

Steven Forth

And in fact, most of the data distributions that I’ve looked at for pricing data are not normal. Most of them skew one direction or another. 

Sometimes they’re multimodal, which is fantastic when that happens because that tells you an enormous amount.

And to channel Benoit Mandelbrot, many of them have fat tails. And again, this is something you have to be careful of when you’re generating synthetic data. The AI, left to itself, will probably wanna generate a normal distribution, because we spend so much time incorrectly talking about normal distributions. But, okay, so that was the other thing.
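As a rough illustration of the fat-tail point, here is a minimal sketch (made-up deal-size data, not from the episode) of how a normal assumption understates how often extreme values actually occur:

```python
# Right-skewed, heavy-tailed "deal sizes" vs. what a normal model predicts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
deals = rng.lognormal(mean=10, sigma=1.0, size=20_000)  # skewed, fat-tailed

mu, sd = deals.mean(), deals.std()
threshold = mu + 3 * sd  # a "3-sigma" deal under a normal mental model

normal_tail = stats.norm.sf(threshold, loc=mu, scale=sd)  # normal prediction
empirical_tail = (deals > threshold).mean()               # what actually happens

print(f"normal model says P(deal > mu+3sd) = {normal_tail:.4%}")
print(f"observed frequency                 = {empirical_tail:.4%}")
# The observed tail is several times larger: synthetic data generated under
# a normal assumption would miss exactly these high-value deals.
```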

Mark Stiving

And I don’t, I actually don’t care what distribution you choose around the data. The point is we’re making data that looks like our data. That was my point. 

But now let’s talk about the difference between interpolation and extrapolation. Right. In statistics, we often think interpolation is fine except for multimodal distributions, which you just pointed out.

It kind of causes a problem, but in most cases it seems to be acceptable. And as soon as we start saying we’re gonna go to extrapolation, meaning I’m gonna choose or create data or make assumptions around data that’s outside the distribution of the data that I have today, how could you possibly do that with synthetic data and make it make sense?

Steven Forth

First, I wanna push back on interpolation being okay, because what I’ve seen, and we’ve all been doing interpolation for as long as we can remember, right? 

What I’ve seen in a lot of pricing, data analytics, when people have done interpolation, is that they’ve wiped out clustering in the data. 

And clusters are one of the things that we’re often looking for in pricing work because they can signal market segmentations or different buying behaviors and so on.

So I don’t think we should be cavalier about interpolation. 
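A minimal sketch of that clustering concern, using invented willingness-to-pay numbers: generating synthetic data from a single fitted distribution (much like interpolating across the whole range) puts observations into the gap between segments that no real buyers occupy.

```python
# Bimodal buyer data vs. naive synthetic data from one fitted normal.
import numpy as np

rng = np.random.default_rng(1)

# Two buyer segments with distinct willingness to pay.
segment_lo = rng.normal(40, 5, size=500)
segment_hi = rng.normal(120, 10, size=500)
real = np.concatenate([segment_lo, segment_hi])

# Naive synthetic data: one normal fitted to the pooled data.
naive = rng.normal(real.mean(), real.std(), size=real.size)

def share_in_valley(x, lo=60, hi=100):
    """Fraction of observations between the two real clusters."""
    return ((x > lo) & (x < hi)).mean()

print("real data in the 60-100 valley:   ", share_in_valley(real))   # small
print("naive synthetic in the 60-100 gap:", share_in_valley(naive))  # large
# The naive generator puts a lot of mass where almost no real buyers sit,
# which is exactly the segmentation signal a pricing analysis needs to see.
```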

Extrapolation, yes, it has less certainty than interpolation. Even interpolation, we have to be careful, given my previous caveat. I think what we want to be doing here is two things. One is understanding more dimensions in the data. 

So one of the critical things that we do when we are creating synthetic data is we can add dimensions, but the second thing we need to be doing is then whenever we can, taking that data and measuring it back against the ground truth.

Now, the ground truth is not necessarily your original dataset. The ground truth could be your predictions. How well does this actually predict things? 

Or it could be, how well does it correspond with information we’re getting from other sources, like direct discussions with customers. 

And by the way, in your earlier email on this, I think you caught the Achilles heel of synthetic data, but also of any data-centric approach to pricing, which is that it can take the focus away from the customer. That’s the real Achilles heel, I think, of any approach using synthetic data.

But I think we can demonstrate that synthetic data does give good models that we can verify, that allow us to make effective decisions. 

So it’s not just a question of, you know, pressing the button, getting lots of synthetic data and jumping on that. 

You have to do things like call out your assumptions, test different assumptions, go back to the ground truth, but not just the ground truth in the original data, but ground truth from other sources.

And one of the things that generative AI is quite good at is bringing these things together to give you a holistic view. 

Mark Stiving

Okay. I have three more things I wanna bring up and then I’m gonna ask you the question that says, give this to me in a simple way. 

So the next thing, and you kind of brought this up to me, the biggest problem with synthetic data is we are making assumptions about buyer behavior.

And in truth, when I make a decision about pricing, I am changing buyer behavior.

When I change my pricing model, I’m changing the way my buyers react to my pricing model. 

And so I don’t get how synthetic data is gonna help us to understand how our buyers change their behavior based on the prices or the pricing model that we choose to use.

Steven Forth

So let’s just talk about the types of synthetic data before we try to answer that question. So in my world, there are actually three different flavors of synthetic data that we want to be using in pricing work. 

So one is simulations of people and one can just use the available tools like ChatGPT, or Claude, or whatever your favorite poison is. One can make decent models of people and how they make decisions. 

We could take your new book, put that in as a context document, put in some other research into decision making and pricing, connect it to a person’s LinkedIn profile, connect it to a person’s other social profiles, and then we can see if that gives a good model of how that person would make decisions and how they would respond to different pricing models. 

And the quality of these synthetic humans is getting better and better all the time. And there are a number of companies out there. I linked to one in my substack on this whose job it is to create personas, to create buyer personas, to create user personas. 

And you know, if you look at the research, these things are, they are good enough to give you some initial insights.

They’re not going to replace working with real people, or the sort of exploratory conversations you can have when you’re gathering qualitative data.

But they actually work surprisingly well and think of the conversations some people are having with their GPT or whatever chatbots they use.

These chatbots are getting pretty good at having human-like conversations. They would pass most Turing tests. 

So I think that that is part of the answer to that.

The other form of synthetic data I use a lot is, so, as you know, I like to work with value models and pricing models, both of which are systems of equations.

Systems of equations are full of variables, and you have to explore how those models behave and interact across wide ranges.

You need to understand the behavior of the different models in different scenarios. You mentioned assumptions earlier, so yes, we need to clarify our assumptions, but we can also treat an assumption as a scenario, and we can explore different scenarios. If I make this assumption, I’ll get this result. If I make a different assumption, I’ll get a different result. I wanna understand those.
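As a hedged illustration of this second bucket, here is a toy pricing model treated as a small system of equations, swept across assumptions-as-scenarios; the model form, parameter ranges, and numbers are all invented for the sketch, not anything from Value IQ or the episode:

```python
# Treat each assumption as a scenario and sweep the model across them.
from itertools import product

def annual_profit(price, churn, cac, value_multiple):
    """Toy subscription model: acquisition responds to the price/value ratio."""
    perceived_value = price * value_multiple              # assumption, not a law
    new_customers = max(0.0, 1_000 * (perceived_value - price) / perceived_value)
    retained = new_customers * (1 - churn)
    return retained * price - new_customers * cac

scenarios = product(
    [99, 149, 199],     # price points to test
    [0.10, 0.25],       # annual churn assumptions
    [300, 600],         # customer acquisition cost assumptions
    [1.5, 3.0],         # perceived value per dollar of price (assumption)
)

for price, churn, cac, vm in scenarios:
    print(f"price={price:>3} churn={churn:.2f} cac={cac:>3} value_x={vm:.1f}"
          f" -> profit={annual_profit(price, churn, cac, vm):>12,.0f}")
# The point is to see which assumptions the pricing decision is actually
# sensitive to, not to predict the future.
```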

And then the third bucket of synthetic data that I tend to use is external things. A competitive response. 

If I change my pricing in this way, this competitor is likely to make the following response. 

And there’s a number of ways you can do that. Also, some pricing models are sensitive to things like interest rates or the price of oil or labor rates.

You know, there’s lots of possibilities here, depending on what you’re pricing and generating synthetic data that allows you to explore different scenarios is a really, really powerful way to stress test your models against different scenarios. 

So, three buckets of synthetic data: when the AI is trying to mimic an individual or an organization; when the AI is being used to explore different variables in your models and how they interact;

And the third being when the AI is being used to explore different things that could happen in the real world outside of your pricing model, and to see what impact that will have.

We should have been doing this, and sometimes did do it in the past, but let’s face it, it was too much work and nobody was willing to pay us enough money to do this to the depth that we wanted to.
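And a minimal sketch of the third bucket: stress-testing a price against synthetic external scenarios such as competitor responses and cost shocks. Every probability and figure below is an illustrative assumption, not data from the episode.

```python
# Monte Carlo over synthetic external scenarios: competitor moves and cost shocks.
import numpy as np

rng = np.random.default_rng(7)
n_scenarios = 10_000

our_price = 120.0
base_volume = 1_000

# Synthetic competitor responses: hold price, partial match, or full match.
competitor_cut = rng.choice([0.00, 0.05, 0.10], size=n_scenarios,
                            p=[0.5, 0.3, 0.2])
# Synthetic input-cost shock (e.g., labor or cloud costs), roughly +/-10%.
cost_shock = rng.normal(0.0, 0.05, size=n_scenarios)

unit_cost = 60.0 * (1 + cost_shock)
# Assumed demand response: we lose share in proportion to the competitor's cut.
volume = base_volume * (1 - 2.0 * competitor_cut)
margin = (our_price - unit_cost) * volume

print("median margin:", round(float(np.median(margin))))
print("5th percentile (bad scenarios):", round(float(np.percentile(margin, 5))))
# Not a forecast; it shows how exposed the pricing decision is to responses
# and shocks that never appear in historical data.
```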

Mark Stiving

Okay. I’m gonna address all three of your buckets, if I may. 

So the first bucket I am not sure I buy yet, but I’m sure that if I saw it enough, maybe I would. I work with enough companies that sell B2B, and I think about their customers and their customers’ customers.

I don’t know how that would’ve been in a standard simulation right now.

Maybe if I was selling a CRM that would be in a standard simulation. 

Most things that we’re selling are not as common as a CRM, and so it’s hard to say, yeah, that’s gonna be in this simulation of people. 

The second bucket you bring up I actually like a lot, and lemme tell you the one time that I use, I’m gonna say the words, fake data, and you can tell me if I’m missing this or getting it right. I used to teach market research at university, and one of the things that I would always teach is: after you’ve crafted the survey that you wanna send out, before you send it, make up fake data. Then run your analytics and see what you get. Not that I care about the answers; I care that the analytics worked for you. Right.

Did you ask the question in the right way? 

Steven Forth

Yeah. 

Mark Stiving

Right. Does everything work? 

And so to me, that’s a beautiful usage for synthetic data, for fake data, and it sounds to me like that’s what you mean with your number two, right? You’re trying to see how the models work together, and so we’re really using it to test models and play with models. Totally, totally in favor of that.
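Mark’s classroom exercise can be sketched in a few lines: generate fake survey responses and run the planned analysis end to end, purely to confirm that the questions and the analytics fit together. The question names and ranges below are invented.

```python
# Generate fake survey answers, then run the intended analysis as a dry run.
import numpy as np

rng = np.random.default_rng(3)
n_respondents = 200

# Fake answers to the four Van Westendorp questions, in dollars.
fake_responses = {
    "too_cheap":     rng.uniform(20, 60, n_respondents),
    "cheap":         rng.uniform(40, 90, n_respondents),
    "expensive":     rng.uniform(80, 150, n_respondents),
    "too_expensive": rng.uniform(120, 250, n_respondents),
}

# The analysis we plan to run on real data: summaries plus a sanity check.
for name, values in fake_responses.items():
    print(f"{name:>14}: median={np.median(values):7.2f}")

ordered = np.mean(
    (fake_responses["too_cheap"] < fake_responses["cheap"]) &
    (fake_responses["cheap"] < fake_responses["expensive"]) &
    (fake_responses["expensive"] < fake_responses["too_expensive"])
)
print("share of logically ordered responses:", round(float(ordered), 2))
# We don't care what the numbers say; we care that the pipeline runs and
# that the checks would catch badly worded or mis-ordered questions.
```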

The third one is where we talk about external data. And even if we set aside number one and call real-life customer data external data for a second, right?

One of the things I really like about that thought process, and I hadn’t even thought about it until you started talking about this, is that there are these models for Bayesian updating that we could do, Bayesian statistics, to figure out: hey, maybe I don’t know what the right model is, or I don’t know what the right data is, but I can keep manipulating this synthetic data until it actually matches the real world that we’re seeing.

And that’s pretty powerful. Yeah. I think that makes a lot of sense. 
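One simple way to picture the Bayesian-updating idea Mark raises is a conjugate Beta-Binomial update: start from a weak prior seeded by synthetic buyers and let real responses pull the estimate. The counts and the prior weight below are illustrative assumptions, not figures from the episode.

```python
# Beta-Binomial sketch: synthetic buyers set a weak prior, real answers update it.
# Beta(a, b) is the conjugate prior for a yes/no acceptance rate at a price point.

synthetic_yes, synthetic_no = 32, 68      # what the simulated buyers "said"
prior_strength = 0.2                      # trust synthetic data only weakly

a = 1 + prior_strength * synthetic_yes    # Beta prior parameters
b = 1 + prior_strength * synthetic_no

real_yes, real_no = 9, 11                 # a small batch of real responses
a_post, b_post = a + real_yes, b + real_no

prior_mean = a / (a + b)
posterior_mean = a_post / (a_post + b_post)
print(f"prior acceptance estimate:    {prior_mean:.2f}")
print(f"posterior after real answers: {posterior_mean:.2f}")
# Each new batch of real-world data pulls the estimate away from the synthetic
# starting point; if they keep diverging, the synthetic generator is wrong.
```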

Steven Forth

Yeah. And I think that’s why, in this process that I’m suggesting, I suggest a flow where you generate the data, you analyze it, you validate it against what I’m calling the ground truth or the real-world data, and then you improve your generated data. That’s a cycle that you flow through, which I’ve given the cute name ‘GAVI’ because it reminds me of a grape variety. And, you know, how good are these synthetic people?

I think it depends on the provider and you know how deep you go into it and what questions you wanna ask. 

But you know, remember the, the now old quote, ’cause it’s from two years ago, that the current version you’re using is the worst version you will ever use. 

And I also want to introduce a bit of a crazy idea here.

Increasingly the people that are evaluating your software are other AIs. So a lot of the buying processes are being taken over by an AI, and I think that AIs will be able to do an okay job of mimicking other AIs. 

I know that all sounds a bit circular and crazy, but I, I, I really do think that we’re underestimating the importance of AI in the buying process. And, you know, one of the synthetic users you want to create is another AI. 

Mark Stiving

Okay. By the way, my answers to your three buckets were the other two things I wanted to chat about anyway.

So here’s what I want you to do. If you could give us a relatively simple scenario that says, here’s what you can go do with synthetic data, put it in the pricing realm.

I don’t care if you wanna talk about value models, pricing models, anything, but keep it as simple as possible so that we all walk away saying, oh, I can go do that. 

Steven Forth

Yeah. Okay. I wish you’d given me this challenge before. Okay, so a minimally simple one. 

I’m gonna start with the synthetic users. So identify 100 people that you want to sell to. Take a bunch of information about how people make buying decisions. Take their LinkedIn profile, put those together, and ask what would make this person decide to buy my product. 

Mark Stiving

So you mean a hundred real people that actually exist? 

Steven Forth

Yeah. 

Mark Stiving

Okay.

Steven Forth

Yeah. And you could even push this to do a Van Westendorp study on those hundred people and, you know, ask your four Van Westendorp questions.

And then if you have access to those people, you could actually get them to do a Van Westendorp study. It’s really hard to get people to do studies these days, but anyway, compare the results and see how accurate they are. And that’s a fairly straightforward thing that you could do without using anything fancier than what you are already using, whether that’s Gemini or Claude or Perplexity or ChatGPT or Cohere or Mistral, and so on.

Mark Stiving

Okay, so I’m gonna repeat what you said relatively quickly if that’s okay. So you just said, I’m gonna have my AI do a Van Westendorp, make up the answers to Van Westendorp on a hundred people that I actually know, and then I’m gonna go out and actually ask those hundred people. 

And so I’ve got the truth.

I’ve got made up data and now I can take the truth and put it back into my AI and say, Hey, here’s what the real answers were. 

And now we can go choose another hundred people and the next hundred people, we’re probably gonna be closer to the right answers because we did it in the first set. And so we’re essentially building an AI that can predict answers to Van Westendorp based on a LinkedIn profile.
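A minimal sketch of the validation step in that loop: compare the AI’s made-up Van Westendorp answers with the real answers from the same people, question by question, and look for systematic bias to feed back. The arrays below are placeholders, not real data.

```python
# Compare synthetic vs. real Van Westendorp answers for the same respondents.
import numpy as np

questions = ["too_cheap", "cheap", "expensive", "too_expensive"]

# Placeholder data: one row per respondent, one column per question.
synthetic = np.array([[35, 60, 110, 180]] * 50, dtype=float)
real = synthetic + np.random.default_rng(5).normal(0, 15, synthetic.shape)

for i, q in enumerate(questions):
    err = synthetic[:, i] - real[:, i]
    print(f"{q:>14}: mean abs error = {np.mean(np.abs(err)):6.1f}, "
          f"bias = {np.mean(err):+6.1f}")
# Systematic bias on a question (e.g., the AI always guesses 'too expensive'
# too low) is what you feed back before picking the next hundred people.
```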

Steven Forth

Yes. 

Mark Stiving

Okay. That’s pretty darn smart. 

Steven Forth

I suspect you can do this with conjoint as well. I was actually on a call with someone from Conjointly, which is my preferred tool for doing conjoint analysis, and I asked them if they could do this, and I had a lot of trouble getting them to understand what I was asking, but I am reasonably sure.

Conjointly is a very smart company and the guy who runs it is super sharp. I suspect that within 12 months Conjointly will offer me synthetic users to run against conjoint studies. And I think Quora, do I have the right company? One of the survey companies is already doing this.

Of course, the other thing that the survey companies are spending a lot of money on right now is being able to filter out bots that are answering surveys.

I wanna do the opposite. I actually want my bots to be able to take surveys, so I can explore people I don’t necessarily have access to, markets I don’t normally talk to.

I think that this synthetic data gives us a way to expand the number of different scenarios we can imagine and explore, and we can find opportunities in that.

Mark Stiving

Okay, although you haven’t convinced me yet, Steven, here’s what I will say. 

Steven Forth

How I’m gonna convince you is I’m gonna take this further. There’s this fellow, Mark Stiving, you may know him. I’m gonna create my synthetic Mark Stiving. And by the way, Mark Stiving is already creating his synthetic Mark Stiving in the way he’s training his ChatGPT agents.

Mark Stiving

Yeah, so, yes and no, right? So I use ChatGPT for everything pricing related, and I don’t use it for anything that’s not pricing related. And so you have to go to Perplexity if you wanna know my purchase behaviors or my tech support issues. But Perplexity would have all that.

Steven Forth

Yeah. An aside: one of the things that I try to do each month is I take all of my prompts and I run them back through my AI and say, you know, what did I explore this month, and what should I have explored, given what I did explore?

Mark Stiving

Nice. 

Steven Forth

I wish it was somewhat easier to do that than it is, if anyone from one of the big AI companies is listening, that’s some functionality I would pay extra to have.

Mark Stiving

Nice. Okay. I wanna pull an analogy with AI, right? I dunno if you remember this, but a few years ago, I really didn’t wanna learn AI and you personally forced me to do this.

So now I’m, yes, exactly. So now I’m deeply into all of this stuff, although I’m, you know, I don’t think anybody’s an expert, but I’m way too deep into it.

Steven Forth

Yeah. And if you’re an expert today, you won’t be tomorrow. 

Mark Stiving

Yes, yes. But here’s my viewpoint on AI, and that is it does a million different things. And so you have to step back and say, what’s the three things it’s gonna do for me? Or the 10 things it could do for me? 

And so you just do one at a time. You pick one up and say, oh my God, I can’t believe it does this. Oh my God, I can’t believe it helps me with this. So there is no answer to AI.

And so I would pull the exact same analogy for synthetic data, as in what we just described. I could see that story, but that, I mean, that was hard for me to get to that story. 

And now to just say synthetic data, I’m still with, I don’t need fake data. I need something that matters, right? I need something that’s real. 

And so I think it’s gonna be very similar to where it’s, what’s the use case? Show me I can use synthetic data for that use case and now I can go do it. 

Steven Forth

Yeah. I’m tempted to say something here about causal reasoning, but I’ll save that for when we have AI reasoning.

Yeah, and this is, I think one of the reasons Mark, we need to keep on having these conversations because you and I have quite different views on these things and we think quite differently. And by batting our heads together, we actually get to a useful and interesting place. 

Mark Stiving

Yep. Well, if you create a set of use cases for synthetic data, I would publish them on my content.

Steven Forth

Yeah, I will. I’ll work on that. The next one I’m working on actually takes the email that you sent me and tries to answer some of your points. I wanted to wait till we had this conversation before I finished working on that Substack, but I will, because I think you have some very provocative points. And one is that, oh my God, with all this synthetic data, more data, we’re just gonna get further and further from the actual buyer and the actual user.

Well, this is another buffer between them. 

The last thing pricing people need is another buffer that takes them away from users. I think there’s a lot of truth to that. I don’t think you can do good pricing if you’re completely data centric, or even completely model centric. You need to talk to real people.

Mark Stiving

Yeah. I think that’s absolutely a hundred percent true in B2B. Maybe in B2C you can get away with just data.

But in B2B, we have to go talk to people.

Steven Forth

Yeah. I think even in B2C you need to be doing interviews. You need, anyway.

Mark Stiving

As always, this has been absolutely fascinating. I love these conversations. 

Let’s ask the final question anyway. What is the one piece of pricing advice you’re gonna give our listeners today that you think could have an impact on their business? 

Steven Forth

Go out and talk to buyers and understand their buying process. 

Mark Stiving

Nice. I actually love that answer ’cause it’s directly related to the next book I’m writing, so excellent answer. Steven, if anybody wants to contact you, how can they do that nowadays, since you’re not at Ibbaka anymore?

Steven Forth

So the best email to reach me is [email protected]

And of course through LinkedIn and I’m actually now going and trying to make sure I read every message that people send me on LinkedIn. 

I realize now that I’m doing this, that I was probably missing about a third of the messages people were sending me.

So if you sent me a message on LinkedIn and I didn’t respond, I apologize and I am starting to do better. 

Mark Stiving

Nice. And to our listeners, if you have any questions or comments about the podcast or if you wanna get paid more for the value you deliver, feel free to email me, [email protected]. Now go make an impact.

Advertisement

Thanks again to Jennings Executive Search for sponsoring our podcast. If you’re looking to hire someone in pricing, I suggest you contact someone who knows pricing people. Contact Jennings Executive Search.

[Outro]

Tags: Accelerate Your Subscription Business, ask a pricing expert, pricing metrics, pricing strategy
