Trust

Founder Newsletter | Issue 18

Four years ago, after we had started Intelligems but before we had any product, I had these mockups in Sketch that I used to show people during discovery calls. We would ask: if we were to help you optimize your prices, which of these options seems most appealing to you?

Option 1 was a mock of a fairly traditional A/B testing UI that we adapted for prices: a list of products, with a column of specified prices for each test group. Option 2 was more of a Mad Libs-style UI, where you’d complete a sentence that went something like “I want to maximize {profit, revenue, volume} for {selected products}, but don’t lower the price below {X} and don’t raise the price above {Y}.”
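For concreteness, the Option 2 sentence maps pretty naturally onto a small constraint object. This is a hypothetical sketch; the names and structure are mine, not anything Intelligems shipped:

```python
from dataclasses import dataclass

@dataclass
class PricingGoal:
    """One filled-in 'Mad Libs' sentence: objective, scope, and guardrails."""
    objective: str          # "profit", "revenue", or "volume"
    product_ids: list[str]  # "{selected products}"
    min_price: float        # "don't lower the price below {X}"
    max_price: float        # "don't raise the price above {Y}"

    def clamp(self, proposed: float) -> float:
        """Keep any price the engine proposes inside the guardrails."""
        return min(max(proposed, self.min_price), self.max_price)

goal = PricingGoal("profit", ["sku-123"], min_price=19.0, max_price=29.0)
print(goal.clamp(34.0))  # a proposal above the ceiling gets pulled back to 29.0
```

The point of the shape is that the user owns the strategy (the fields), while execution (what price to actually try next) stays with the system.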

At the time, we were looking for feedback on how to build the product—and, critically, how much control to give the user. 

In every call where I showed these mocks, the immediate and obvious choice was Option 1. Why? The first mock gave the user complete control; they’d have to establish a strategy, then execute on it. The second mock gave the user control over the strategy, but Intelligems would handle the execution. For something as sensitive as prices, the choice was clear.

I share this story as an anecdote illustrating that ecommerce, and DTC even more specifically, has a complicated history with “black boxes.”

While we’ve fully accepted that Meta’s algorithms pick winning creative and targeting combinations better than we can, we still have a propensity to second-guess (or at least look for the reasoning behind) product recommendation engines, and a widespread aversion to, say, personalized offers and dynamic pricing.

The current moment in this industry is interesting in part because of the incredible optimism and adoption around LLMs, and in part because of the question of how to establish trust around decision making.

Black boxes—those things that ecommerce has a mixed relationship with—are becoming more “gray boxes,” because models are getting more transparent about how they make decisions. They aren’t fully transparent (we don’t know the weights, for example), but you can “talk” to an LLM the same way you talk to a human. The best ones talk to you! And that discussion, if you will, can help build trust in the model around certain tasks.

You can’t ask your legacy machine learning-powered product recommendation engine why it showed Product X to Customer A, though. And that, I think, is what has stunted its growth.

At Via, a ridesharing company where Drew and I worked on dynamic pricing, we would get these emails from our CEO asking us why a specific customer got charged a specific amount. Those questions were hard to answer in the beginning. So, we built a dashboard. We could use it to point to the fact that it was 1) raining and 2) there was traffic, and that, based on those variables, the price came out to $Y. It was defensible and showed the reasoning. It built trust in the pricing model.
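The value of the dashboard was attribution: each input visibly contributed a piece of the final number. A minimal sketch of that idea (the function, the condition names, and the amounts are illustrative, not Via’s actual pricing model):

```python
def explain_price(base_fare: float, conditions: dict[str, float]) -> tuple[float, str]:
    """Return the final price plus a line-by-line reasoning trail.

    Each named condition (e.g. rain, traffic) contributes a surcharge,
    so the total is defensible rather than a black-box number.
    """
    total = base_fare
    lines = [f"base fare: ${base_fare:.2f}"]
    for name, surcharge in conditions.items():
        total += surcharge
        lines.append(f"+ {name}: ${surcharge:.2f}")
    lines.append(f"= total: ${total:.2f}")
    return total, "\n".join(lines)

price, reasoning = explain_price(8.00, {"rain": 1.50, "traffic": 2.25})
print(reasoning)  # shows base fare, each surcharge, and the $11.75 total
```

Nothing here is sophisticated; the point is that the breakdown, not the model, is what answers the CEO’s email.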

When I think about where we’ve been in ecommerce, this is the piece that’s been missing. And it feels like, very quickly now, we’re closing the gap.

Four years ago, it would have been very difficult to solve this problem for our madlibs-style UI. Had we been able to, though, I wonder if that would have been a more competitive choice. 

It might not matter, though, because it feels like our entire industry is going to get there together, all at once. Very, very soon.