Our client asked us to help them predict the future.
A tall order, for sure. But at PeopleFish, we hear this kind of thing all the time.
When we launch our $10 subscription next month, how many subscribers can we expect?
When we increase prices 10% in Q3, what will happen to sales?
How will mothers respond to our new baby product when we change the packaging?
Yes, we know — these aren’t requests for a crystal ball consultation. But they boil down to this: What’s going to happen in the future to my product, my business, or my industry if my company takes a certain action?
This particular client was pretty explicit. Their product was set to launch in a month. They’d settled on a $19.99 price point, but one of their analysts strongly disagreed, arguing they should launch at $24.99.
The question was simple: How much will they sell if they launch at $19.99? And how much will they sell if they launch at $24.99?
In three simple steps, here’s how we answered their question.
First, we helped them define a target market.
This isn’t easy, and it’s hugely dependent on our clients’ intuition and experience with their sales so far. If their product is a brand new concept, something consumers have never seen before, defining a target market typically requires an entire market research project in and of itself.
But this particular client had done their homework, even though the product was something consumers had never seen before. Two years of R&D and focus group studies made sure of that. They knew who their target market was, and they passed that information on to us.
The client had crafted persona narratives: four hypothetical customers, each with their own reasons for buying. In some ways, the four were similar; in others, completely different. But as quantitative researchers, we needed more concrete parameters: bullet-point characteristics for each persona that, while leaving out some details about their personalities, would let us target their market online.
Male. Middle-aged. High-income. Unmarried. These were the parameters we used to decide who to survey.
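To give a sense of how parameters like these translate into who actually gets surveyed, here’s a minimal sketch in Python. The field names and the income cutoff are hypothetical, and in practice this logic lives in the survey platform’s screening questions rather than in code like this.

```python
# Minimal sketch of a respondent screener. Field names and the income
# cutoff are hypothetical; real screening happens inside the survey platform.

def qualifies(respondent: dict) -> bool:
    """Return True if a respondent matches the target-market parameters."""
    return (
        respondent.get("gender") == "male"
        and 35 <= respondent.get("age", 0) <= 54               # "middle-aged"
        and respondent.get("household_income", 0) >= 100_000   # "high-income" (assumed cutoff)
        and respondent.get("marital_status") == "unmarried"
    )

# Example: screen a small hypothetical panel
panel = [
    {"gender": "male", "age": 42, "household_income": 120_000, "marital_status": "unmarried"},
    {"gender": "female", "age": 47, "household_income": 150_000, "marital_status": "married"},
]
print(sum(qualifies(r) for r in panel), "of", len(panel), "respondents qualify")
```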
Second, we edited and programmed their survey instrument.
This is routine. We do it every day. We’re quick, thorough, and professional, and we stay up to date on how the newest survey research technology can enhance findings for our clients.
But behind this efficiency are years of experience with survey research, testing platforms and learning what is and isn’t possible, and what questions must and must not be asked.
For example, say you want to know (like our client) how consumers will respond to different price points. You could ask them directly: present both prices, and ask respondents how much they’d be willing to buy at each one. Voila?
Not exactly. The fact is, this yields bad data. The questions themselves are fine, but how you present them biases the way respondents answer. For example, if they see the lower price first, they’ll almost certainly underestimate their willingness to buy at the higher, second price. If they see the higher price first, vice versa. Additionally, respondents are likely to underestimate their actual willingness to pay in general, subconsciously believing that expressing a high willingness to pay might drive up the product’s real-world price (face it, we all know how companies use our survey responses).
We account for these biases in various ways: branching, A/B testing, question randomization. Then there are validated tools, like the Van Westendorp price sensitivity meter, with entire books dedicated to exploring their usefulness in predicting real-world behavior.
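As one small illustration of the order-effect fix, here’s a sketch of how randomizing which price a respondent sees first spreads the anchoring bias evenly across the sample instead of letting it skew one price point. The responses and the intent scale below are made up; this is not our client’s instrument.

```python
import random
from statistics import mean

PRICES = (19.99, 24.99)  # the two candidate price points from this post

def price_presentation_order() -> list[float]:
    """Randomize which price a respondent sees first, so anchoring
    affects both arms of the test equally on average."""
    order = list(PRICES)
    random.shuffle(order)
    return order

# Made-up responses: (price shown first, stated purchase intent at $24.99 on a 1-5 scale)
responses = [
    (19.99, 2), (19.99, 3), (19.99, 2), (19.99, 3),
    (24.99, 4), (24.99, 3), (24.99, 4), (24.99, 3),
]

# If intent at $24.99 is consistently lower when $19.99 was shown first,
# that's the anchoring effect; randomization lets you measure it and average it out.
low_first = mean(i for first, i in responses if first == 19.99)
high_first = mean(i for first, i in responses if first == 24.99)
print(f"Mean stated intent at $24.99: {low_first:.2f} when $19.99 came first, "
      f"{high_first:.2f} when $24.99 came first")
```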
That said, survey design is an art. And it becomes more complex as the stakes get higher, when even small shifts in price or product specifications carry large-scale revenue implications.
Third, we helped them analyze their survey data.
This is tough. Survey data doesn’t speak on its own. And accounting for the biases and quirks of each individual instrument (e.g., our confidence in the effectiveness of our client’s in-survey product pitch) is something learned only from experience, by watching how our predictions line up with how our clients’ customers respond in the real world later on.
In this client’s case, our analysis involved multiple demographic cuts, based on small differences in respondents’ relationship status and household income quintile. The wide scale of this product’s launch meant even tiny shifts in price could have major implications for net revenue, and we wanted our recommendation to capture every bit of it.
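To make that concrete, here’s a rough sketch of the arithmetic behind a demographic cut and a price comparison, using fabricated numbers rather than our client’s data: estimate the share of each segment that says it would buy at each price, then translate that into expected revenue per 1,000 target-market consumers.

```python
import pandas as pd

# Fabricated example responses, not the client's data.
# would_buy_* is 1 if the respondent said they'd buy at that price.
df = pd.DataFrame({
    "income_quintile":  [5, 5, 4, 4, 5, 3, 4, 5],
    "would_buy_19_99":  [1, 1, 1, 0, 1, 1, 1, 1],
    "would_buy_24_99":  [1, 0, 1, 0, 1, 0, 1, 1],
})

# Purchase rates by demographic cut (here, household income quintile)
print(df.groupby("income_quintile")[["would_buy_19_99", "would_buy_24_99"]].mean())

# Expected revenue per 1,000 target-market consumers at each price point
buy_rates = df[["would_buy_19_99", "would_buy_24_99"]].mean()
prices = pd.Series({"would_buy_19_99": 19.99, "would_buy_24_99": 24.99})
revenue_per_1000 = buy_rates * 1000 * prices
print(revenue_per_1000.rename({"would_buy_19_99": "$19.99", "would_buy_24_99": "$24.99"}))
```

The point isn’t the code; once the cuts are defined, the revenue comparison is simple multiplication, and the hard part is trusting the purchase-intent numbers that feed it.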
Ultimately, our client determined that the higher price point was more than likely to boost total revenue. In fact, it was nearly certain, given both what our client told us about their target market and what we knew from the 600 survey responses we analyzed. The product launched at the higher price, and our client beat their revenue expectations.
Ok. Why did you read this?
If you’ve made it this far, I’m guessing you have some interest in market research, or some product you’d like to test this way. If so, let’s connect. PeopleFish is ready to talk about your market research needs in as much or as little detail as you’d like — no econometrics degree needed.
Market research can make or break product launches, and even the tiniest of findings can have huge implications for the success of a sales and marketing effort. It’s never the wrong time to investigate your customers’ behavior with a little more rigor.