Forecast

Founder Newsletter | Issue 19

The other week in this newsletter, I posed a rhetorical question ("Is our overall experimentation program successful?") that served as a jumping-off point for the main topic: that your website traffic can be viewed as your "budget" for learning and testing.

The idea here is that, if you’re responsible for testing, the fastest path to the most value is to make sure that you’re maximizing that budget in the same way the paid media folks are trying to maximize the return from their ad budget. The first step, though, is to use it all. 

And while I spent that newsletter talking about how that turns into value, I didn’t explicitly answer that rhetorical question: Is our overall experimentation program successful? So, let’s do that.

As I mentioned in my last newsletter, there are three components to generating value in an experimentation program: 

  • Number of tests

  • Win rate

  • Impact from a winning test

So, how do you determine success? You set a forecast. 

A forecast does two things: 1) it sets an objective, quantifiable measure of success, and 2) it helps you decide where your program fits in your larger list of priorities and, therefore, how much to invest in it, whether that's time, people, research, design, development, or software (of course).

Each of these components of an experimentation program is fairly forecastable: the number of tests (as we discussed in the previous newsletter) has an upper bound dictated by your traffic volume, while win rate and impact each tend to fall within a bracketed range and, beyond that, are within your control.

If you’re new to testing, you might end up with a higher average impact from winning tests and a higher win rate (because you have more to optimize). But you might run fewer tests within your budget (because you haven’t yet built the muscle). For each of these, you’ll need to pull some benchmarks to set your forecast, since you won’t have much in the way of data.

If you’re deep into an experimentation program, you’ll probably end up with lower average impact from winning tests and a lower win rate (because you have less to optimize). But you will likely run more tests within your budget (because you’ve proven you have the muscle). In this case, you can build a forecast based on your own historical data.

While the latter might yield a tighter forecast, the former is just as helpful.

Putting a dollar value on your experimentation program gives you a composite metric that 1) sets a goal and 2) measures your progress toward it.
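
To make that concrete, here's a minimal sketch of the math. Every number in it is a hypothetical placeholder, not a benchmark from this newsletter; swap in your own benchmarks (if you're new to testing) or historicals (if you're established).

```python
# A minimal sketch of the forecast math. All numbers are hypothetical
# placeholders; substitute your own benchmarks or historical data.

tests_per_year = 24                 # bounded by your traffic "budget"
win_rate = 0.25                     # share of tests that produce a winner
avg_lift_per_win = 0.03             # average conversion lift from a winning test
annual_revenue_tested = 5_000_000   # revenue flowing through the tested experience

# Naive composite: wins x average lift x the revenue those lifts apply to.
# This assumes lifts are additive and persist for the year, which is a
# simplification, but it's enough to set a goal and track progress against it.
forecast_value = tests_per_year * win_rate * avg_lift_per_win * annual_revenue_tested
print(f"Forecast annual value: ${forecast_value:,.0f}")  # -> $900,000
```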

The closer you are to forecast, the more successful your program. It’s easy to understand, quantifies the results, and helps build momentum.