Unintended
Founder Newsletter | Issue 16
My house is approaching 90 years old, and what I’ve learned in my years of living there is that old houses hold a good deal of surprises.
You might go to replace a bathroom vanity, like I have, and find out that, hey, 90 years ago, the builders didn’t tile behind the vanity, and so your fairly quick, inexpensive vanity swap turns into a project you hadn’t planned for.
Welcome to unintended consequences.
In testing, especially price testing, this happens a lot. (In our line of work it’s called “interaction effect.”) And while there might be some similarities in terms of feeling a bit frustrated when there’s no metaphorical tile behind the metaphorical vanity, curiosity is going to get you a lot further in your “renovation.”
I was reminded of this phenomenon when a customer was trying to make sense of a price test. This customer sells a few different categories of products, and, in one of those categories, raised the price on a bundle (which happened to be the volume driver).
Sales for that bundle completely tanked, but the price change led to the brand selling more SKUs in that category overall. (What the…? was more or less the reaction.)
And it’s not just them: I’ve seen tests where brands raised prices and saw conversion rates go up. I’ve seen tests where brands raised prices and conversion rates stayed the same, but basket composition changed dramatically.
These unintended and seemingly unrelated knock-on effects are, in fact, very related. Drew has written before about the value of having stories. They seem extra valuable here, because figuring out the relationships at play requires making sense of your customer: what motivates (and demotivates) them to buy your product, and what alternatives they have. That’s a lot to untangle, and a story can certainly help, especially since you don’t need to know the answer definitively; you can take the “confusing” learning, reshape the story, and continue testing against it.
To illustrate with the bundle example above:
A story you could shape from the test could be that the bundle price ended up too high, but the brand appeal was still strong enough that a customer was willing to trade down into other SKUs, including single-product SKUs. In other words, you might conclude that the brand didn’t have pricing power on that bundle.
But that might not be correct, and, even if it was, I wouldn’t draw that conclusion from that test.
If we were to extend our house-project metaphor, the confirmation on that conclusion is behind a wall. So another story you could shape from the test is that the gap between the higher bundle price and the regular prices of the other SKUs encouraged customers to trade down: the perceived value of the bundle dropped, because the cost savings attached to it shrank with the price increase. What might have happened if the brand had also raised the prices on the other SKUs in the category?
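To put some (entirely made-up) numbers on that second story, here is a minimal sketch of how a bundle’s savings shrink when only the bundle price moves. The prices and bundle size below are hypothetical, not from the customer’s actual test:

```python
# Hypothetical illustration: how raising only the bundle price erodes
# the bundle's perceived savings versus buying the SKUs individually.
# All numbers are made up for the sake of the sketch.

single_price = 10.00  # assumed price of one standalone SKU
bundle_size = 3       # assumed number of SKUs in the bundle

def bundle_savings(bundle_price):
    """Dollar savings of the bundle vs. buying the same SKUs one by one."""
    return bundle_size * single_price - bundle_price

# Before the test: a $25 bundle saves $5 against three singles.
print(bundle_savings(25.00))  # 5.0

# After the increase: a $29 bundle saves only $1, so trading down
# to single SKUs costs the customer almost nothing extra.
print(bundle_savings(29.00))  # 1.0
```

With the savings nearly gone, the trade-down story becomes plausible without needing to invoke weak brand pricing power at all.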
You can get confirmation on whether that story is true—you can get confirmation on any of the stories you might hypothesize based on this test—but you don’t want to leap to that conclusion.
There are layers to these questions, and if you want to understand the relationships between price and product, and between price and brand, you need to design your tests to include those questions. That requires figuring out how the products in your catalogue are related in your customer’s mind. (And that might just take more than one test.)
Otherwise, you might end up with a vanity that doesn’t quite cover the gap in the tiles or a much more expensive bathroom. Not that I would know or anything.