Founder Newsletter | Issue 15
When Intelligems first launched, I insisted that our product shouldn’t report on any metrics besides purchases and their derivatives (AOV, profit per visitor, revenue per visitor, etc.). We were only enabling price testing, so it didn’t seem relevant to me whether a price change got more people to add to cart, or more people to look at a certain page. If the metric didn’t map to whether the customer pulled the trigger, I didn’t care about it.
I’ve been challenged on this by customers more times than I can remember over the last four years, and I've sort of changed my stance. I do see the value in further “up funnel” metrics, especially as it relates to our Content Testing and Personalizations offerings.
While I still view a purchase as the signal that matters for a price test, it’s not the only signal that matters for other tests. And with us recently rolling out a feature to build and track custom metrics, I thought I’d spend a little time talking about how I see more granular, up-funnel metrics being useful for ecommerce brands and for experimentation.
I’ve talked here in a previous newsletter about how the job of an ecommerce site is sales. It needs to convince a customer to part with their money to buy a product they likely have not seen in person and will need to wait several days for. While up-funnel metrics may not pay the bills in and of themselves, they can help you understand the details of how to sell and influence the customer’s intent. You can quantitatively measure certain granular behaviors that you believe will make customers get over that hump to purchase.

I heard a story from a customer this week that highlights the value.
This particular customer was running a series of tests on their PDP (product detail page) to help inform a redesign.
When they kicked off this redesign, they began by trying to deeply understand why customers purchase their product. They looked at surveys and qualitative data and saw that customers deeply cared about the product’s ingredient list and also enjoyed its visual design. They wanted their redesign to lean into these value props, thinking that it would ultimately sell more customers.
So, before even building a test, they started tracking a couple of different behaviors with custom events:
- Whether someone scrolled through the image gallery and saw more pictures of the product
- Whether someone clicked a link to view the ingredient list
Basically, they worked to match qualitative data points (from conversations and surveys) to quantitative feedback (from on-site behavior). They could see how good a job the website was doing at “selling” and influencing customers’ intent.
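For concreteness, here’s a rough sketch of what that kind of tracking can look like on the storefront side. The event names, DOM selectors, and the trackCustomEvent helper below are hypothetical stand-ins, not Intelligems’ actual API; the real call depends on the tool you’re using.

```typescript
// Hypothetical stand-in for whatever custom-event call your testing or analytics tool exposes.
function trackCustomEvent(name: string, properties: Record<string, unknown> = {}): void {
  window.dispatchEvent(new CustomEvent("custom-metric", { detail: { name, properties } }));
}

// Fire an event the first time a shopper interacts with the PDP image gallery.
let galleryViewed = false;
document.querySelector(".pdp-gallery")?.addEventListener("click", () => {
  if (!galleryViewed) {
    galleryViewed = true;
    trackCustomEvent("viewed_image_gallery");
  }
});

// Fire an event when a shopper opens the ingredient list.
document.querySelector("#ingredients-link")?.addEventListener("click", () => {
  trackCustomEvent("viewed_ingredient_list");
});
```

The point is simply that each behavior you care about becomes a named event you can count, and later filter on.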
From there, they built their tests to move those metrics.
The first test had a new design that gave a “peek” of extra images and tempted a customer to look through the gallery. On the surface, it did not drive more profit or revenue. But they were able to learn more by using custom events as a filter. They could see that they did, in fact, get a lot more customers to engage with the image gallery—this UX worked. But, they could also see that customers who viewed the gallery were not actually more likely to purchase. The extra engagement distracted customers from actually making the purchase.
On the flip side, though, this analysis showed them that the customers who viewed the ingredient list were more likely to purchase. They ran several follow-up tests with new UX that made the ingredient list easier to access, and these did improve conversion, revenue, and profit. They also made new UX that provided the extra imagery without needing to scroll.
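To make the “filter” idea concrete: the segmented read-out described above boils down to comparing conversion rates between visitors who fired a given event and those who didn’t. A minimal sketch, with illustrative field names rather than any tool’s export format:

```typescript
// Illustrative visitor record; field names are assumptions, not an export format.
interface Visitor {
  converted: boolean;   // did this visitor purchase?
  events: Set<string>;  // custom events this visitor fired
}

// Compare conversion rates for visitors who did vs. did not fire a given event.
function conversionRateBySegment(visitors: Visitor[], eventName: string) {
  const rate = (group: Visitor[]) =>
    group.length === 0 ? 0 : group.filter((v) => v.converted).length / group.length;

  const withEvent = visitors.filter((v) => v.events.has(eventName));
  const withoutEvent = visitors.filter((v) => !v.events.has(eventName));

  return { withEvent: rate(withEvent), withoutEvent: rate(withoutEvent) };
}

// e.g. conversionRateBySegment(visitors, "viewed_ingredient_list")
```

It’s a simple cut, correlation rather than causation, but it’s enough to decide which behavior is worth building the next test around.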
In a normal test, this level of insight would be missing. But because they were tracking these more in-depth behaviors, they were able to learn about their customers and how they behave. They took bets on which behaviors mattered, got fast feedback, and rolled the learnings into several new tests. It let them be more specific in their hypotheses (we’re going to get customers to do X, which will lead to Y), which in turn let them be more specific in their findings.
When we began discussing the idea of supporting custom events and custom metrics, this was exactly how I envisioned them being used, and it was how I began to understand the value these events could deliver: custom metrics help you correlate behavior with purchases, so you can see which actions seem to matter for convincing people to buy (and we are still ultimately in the business of convincing and selling :) ), and then figure out how to influence those behaviors and create more customer intent.
These types of metrics can act as a diagnostic tool, a kind of leading indicator, pointing you to where to iterate and giving you a tight feedback loop on whether you’re moving the needle, since they occur far more frequently than purchases at the bottom of the funnel. So it was really cool to see a customer using them in exactly this way.
Getting there, though (for me from a product standpoint, and for our customer from an experimentation standpoint), required having a story about the customer and how to sell to them: understanding the value propositions to share, the objections the customer may have, the things they may need to be educated on, and so on.
I don’t think my stance that the bottom line needs to be measured in every test will ever change, but I do see now that there are a lot of cases where tracking behavioral events can be quite valuable.