Curiosity
Founders Newsletter | Issue 44
We’re about a year into this newsletter now, and one of the first ones we shipped was called “Skepticism.”
In it, Drew wrote:
People test because they desire to grow their business; that comes via increasing profit or revenue. And because that’s the end goal, it’s not uncommon for brands to ignore how that revenue or profit grows, so long as it does. This can be a mistake—especially in testing—because the intermediate steps (the “how”) matter a lot.
We spent a lot of that newsletter talking about the value of the intermediate: how instrumenting tests so that the "intermediate" is actually measured creates a great testing program. We talked about how curiosity is central to all of that.
What we didn't talk about, though, is how easy it is to feel like you're being curious simply because you have so much data at your fingertips. That easy access to data breeds complacency: it creates an illusion of understanding, which, in turn, unintentionally makes us less curious.
That’s a lot to unpack, but let’s try.
This whole idea was triggered for me earlier this week by an article Drew sent me about a concept called "default blind." In it, the author compared software businesses (which are akin to ecommerce businesses, given the disintermediation between you and the customer) with brick-and-mortar businesses, suggesting the latter have an advantage: if something doesn't square with your existing understanding, you can just ask a customer about their behavior. You can hear from them directly.
Is it easier, in fact, to get quick feedback and good data in an “analog” world of retail, brick-and-mortar shopping than with digital ecommerce?
Dashboards make life easy mostly because they aggregate data and map it to the thing someone cares about (continuing the thread from above, that might be profit). So a main dashboard for a company that cares about profit would likely show profit. It would also likely show some of the inputs to profit. But there's a natural limit to any dashboard's value, which is its inability to explain the underlying dynamics.
Explaining those dynamics would require a hypothesis, a highly flexible data set, and an open-ended set of inputs that could be maneuvered at will. That is what would make a dashboard truly valuable.
What happens, though, if one of the inputs is missed? Or if one of those inputs changes (i.e., it's no longer as closely tied to delivering profit as it once was)? We'd probably agree that the dashboard has lost its value to those stakeholders.
The point here is that digital instrumentation, once set up, has a way of luring you in: you can see the connections between the numbers because they once were true, you can likely construct a logical-enough explanation for their continued truth in your head, and revalidating those connections takes a lot of time (that may or may not "pay off").
This happened at Intelligems recently. We launched a feature and conveniently attributed a positive change in one KPI on our dashboard to that feature. However, when we drilled down one level, we realized that was not, in fact, the case. Three levels deeper and we had our answer: a related change had an unexpected, outsized impact. Now that's interesting!
I think, though, that this is true in more than just software. It's true anywhere you are disintermediated from the customer (even if that's just via scale), which means the most important tool isn't ever the tooling, but how you decide to use it. Whether it's watching screen recordings, looking at data, sending a survey, or doing the old-school thing and talking to real customers, the value of that direct input cannot be overstated.
Curiosity—or, as Drew framed it, skepticism—ends up being the most valuable piece. You just have to remember it’s needed in the first place.