Budget
Founder Newsletter | Issue 17
One of the funny things about experimentation programs is that they’re not under the same scrutiny as the individual things being tested in them.
Was changing X or Y successful? Easy to answer: run the test (in Intelligems), check the data. Is our overall experimentation program successful? I sometimes hear our customers ask that question, but not nearly as often.
At the highest level, it’s super easy to answer - go look at the “lift” from all of your tests (the extra profit per visitor you found), sum that up, and multiply it by your traffic.
It gets a little more instructive when you break that down one step further into a formula like this:
Impact from your testing program = Impact from a winning test × Test win rate × # of tests
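To make that formula concrete before we pull it apart, here’s a back-of-the-envelope sketch in a few lines of Python. Every number is made up, purely to show how the pieces interact:

```python
# Back-of-the-envelope impact estimate for a testing program.
# All numbers below are hypothetical, purely for illustration.

avg_lift_per_winner = 0.05   # a winning test adds $0.05 of profit per visitor
win_rate = 0.33              # roughly 1 in 3 tests produces a winner
tests_per_year = 12          # one test per month
annual_traffic = 2_400_000   # visitors per year (200k sessions/month)

# Impact = impact from a winning test * win rate * # of tests,
# then scaled by the traffic that sees the rolled-out winners.
impact_per_visitor = avg_lift_per_winner * win_rate * tests_per_year
annual_impact = impact_per_visitor * annual_traffic

print(f"Expected extra profit per visitor: ${impact_per_visitor:.2f}")
print(f"Expected annual impact: ${annual_impact:,.0f}")
```

With those (made-up) inputs, the program is worth roughly $0.20 of extra profit per visitor, or about $475,000 a year.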
Now we’ve got some levers to pull on to make our program more effective:
Impact from a winning test: Reasonably within your control. Are you testing things that actually could make an impact if they work? (“Big Swings” as Adam would say) Are you prioritizing your roadmap in terms of potential impact? Do you have strong hypotheses?
Win rate: There’s an upper bound to your control here. You actually don’t want to be winning too much (above 50%), as that probably means you’re testing stuff you could just “roll out” based on gut. And as you get more optimized, this number naturally comes down - it’s harder to find improvements.
# of tests: This is far more within your control, and it’s where I want to dig deeper.
There is a natural limit on how many (good) tests you can run. Assuming you’re testing responsibly (getting to significance and letting tests run long enough), the amount of traffic to your site is going to be the limiting factor. It’s not really in your control; your existing customer base + advertising budget control this.
One way I like to look at your traffic, though, is as your budget for learning and testing. If you’re responsible for testing, you need to make sure that you are maximizing that budget in the same way the paid media folks are trying to maximize the return from their ad budget.
Step 1 is making sure that you actually exhaust the budget. Don’t let it go unspent and untested. If you’re getting 200,000 sessions per month, do your best to get some learning on each of those sessions. Do that and you’ll be WAY ahead of most testing programs. (At a certain scale, you want more than one test running at a time and should use our mutually exclusive test feature to do that. That’s a topic for another Saturday...).
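To put rough numbers on that budget, a standard two-proportion sample-size calculation tells you how many sessions a single well-powered test consumes, and therefore roughly how many tests a year of traffic can fund. This is just a sketch: it assumes scipy is available, a two-variant test, a 3% baseline conversion rate, and a 10% relative lift as the smallest effect you’d care to detect. Swap in your own numbers.

```python
# Rough "testing budget" math: how many well-powered A/B tests can a
# given amount of monthly traffic fund? The baseline rate, detectable
# lift, power, and alpha below are illustrative assumptions.
from scipy.stats import norm

def visitors_needed(baseline_cr, relative_lift, alpha=0.05, power=0.8):
    """Approximate visitors per variant for a two-sided two-proportion z-test."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p2 - p1) ** 2

monthly_sessions = 200_000
per_variant = visitors_needed(baseline_cr=0.03, relative_lift=0.10)
per_test = 2 * per_variant          # two variants: control + treatment
tests_per_year = (monthly_sessions * 12) / per_test

print(f"Visitors needed per test: {per_test:,.0f}")
print(f"Fully powered tests per year: {tests_per_year:.1f}")
```

With those assumptions, 200,000 sessions a month funds roughly two fully powered tests at a time, which is exactly where mutually exclusive tests start to matter.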
To do this, you need to plan. You need a roadmap that’s already prioritized and has your stakeholders bought in, so there’s no debate as you go to launch. That way, you know what your next test is and have it drafted as you’re ending the last one. No dead time, no wasted budget.
I know Adam is more the “analogy guy,” but the NBA playoffs are in full swing. I can’t help but make the comparison to basketball analytics, and how the OKC Thunder have optimized their game. For those who don’t follow, the Thunder’s style of play is built around getting more shots per game than their opponents and, within those shots, emphasizing taking higher-percentage ones.
So, if you think about the 48 minutes in a game as the “budget”: the Thunder can’t control the amount of time the game is played. They also can’t really control their shooting percentage; some nights you shoot well, others not so much. But what they can control is (a) how many shots they get up, and (b) the quality (expected value) of those shots. Their focus is on getting to as many good shots as possible, so that they can maximize that budget.[1]
They create those extra good shots by forcing their opponents into turnovers, a category they lead the league in; every turnover is an extra possession and shot for the Thunder. Their defense also forces bad shots from the opponent (teams make fewer shots against them than against any other team in the league), which further widens the gap in budget/shots on goal.[2]
It is a competitive advantage that has led them to the top seed in the Western Conference.
Testing programs can operate the same way.
Maximizing the amount of time you run tests is akin to the Thunder creating more shots. Rolling out those winners is akin to taking better shots. Combine them, and you 1) learn faster and 2) benefit from those learnings faster.
Budget, then, is a compounding mechanism in that the more frequently you test, the more frequently you can roll out winning tests. In fact, even when you lose, the learnings from those losses turn into more informed tests, faster, creating more opportunities for bigger impact wins. (Maybe this is the forcing turnovers part of the analogy?)
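Here’s a toy model of that compounding, assuming (again, purely for illustration) that each winning test adds a fixed lift to profit per visitor and keeps paying out for the rest of the year once it’s rolled out:

```python
# Toy model of how testing cadence compounds: each winning test, once
# rolled out, keeps adding lift for every remaining month of the year.
# All parameters are hypothetical.

def annual_impact(tests_per_year, win_rate=0.33, lift_per_winner=0.05,
                  monthly_visitors=200_000, months=12):
    months_between_tests = months / tests_per_year
    total = 0.0
    for i in range(int(tests_per_year)):
        rollout_month = (i + 1) * months_between_tests
        remaining_months = months - rollout_month
        # expected lift from this test * traffic that still sees it this year
        total += win_rate * lift_per_winner * monthly_visitors * remaining_months
    return total

for cadence in (4, 8, 12):
    print(f"{cadence:>2} tests/year -> ~${annual_impact(cadence):,.0f} within the year")
```

In this sketch, tripling the cadence more than triples the in-year return, because earlier winners get more months of traffic behind them.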
The value that accrues from maximizing your traffic utilization is often underappreciated, and it can become the key to making an experimentation program successful.
[1] Unfortunately for all of us who are not Thunder fans, SGA getting to the line for free throws is one of the highest-probability shots there is…
[2] I’m not sure what the equivalent of “defense” is for CRO programs or how to extend that analogy… open to suggestions.