Running A/B marketing tests
– [Instructor] As you scale up your online marketing, you're going to want to start A/B testing your efforts. An A/B test is really just testing one variation of, say, a landing page or an advertisement against the currently running option. You can even A/B test emails, such as the subject line or the content of the message. Or you might A/B test an advertisement where you try a different call to action, say, Buy Now versus Buy Today. But the best A/B tests have only one variation, which makes the results easy to interpret. If you want to do multivariate testing, you'll want to use a specialized platform for that.
Now one problem with A/B tests is picking a winner that isn't statistically significant. Statistical significance basically means the result is unlikely to have happened by chance. To show you what I mean, let's look at an example of the result from an advertising test. I've pulled up an A/B significance test calculator on kissmetrics.com. You can run a Google search for a variety of statistical significance calculators, or you can even download tools to run these tests yourself.
Let's say that the number of visits on the first advertisement was 600, which yielded 90 conversions. And in the second test, we had 900 visitors and 140 conversions. Well, at first glance, Version B looks pretty good. It got 140 conversions, and that's a 15.56% conversion rate. So would we then expect that Version B outperformed Version A, meaning we can get rid of Version A and start running all future advertisements using Version B? Well, you'll see as I scroll down that the results don't indicate that the outcome is statistically significant.
You see, there's only a 62% certainty that the changes in Variation B are actually going to improve the conversion rate. Now if I change these numbers slightly, let's say we had 160 conversions, we now have a 92% certainty. In this case, kissmetrics is saying it's still questionable whether the results are statistically significant, but certainty above 90% is pretty good. If I up this to 180, you're going to see that kissmetrics says that the A/B test is statistically significant.
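If you're curious what a calculator like this is doing under the hood, the certainty figures above are consistent with a one-sided two-proportion z-test. Here's a minimal Python sketch of that calculation; the function name `ab_certainty` is my own, not from any particular tool, and real calculators may differ in details such as one- versus two-sided testing or continuity corrections.

```python
from math import erf, sqrt

def ab_certainty(visits_a, conv_a, visits_b, conv_b):
    """Estimate the certainty that Version B truly beats Version A,
    using a one-sided two-proportion z-test."""
    rate_a = conv_a / visits_a
    rate_b = conv_b / visits_b
    # Pool the conversion rates to estimate the standard error under
    # the null hypothesis that A and B perform the same.
    pooled = (conv_a + conv_b) / (visits_a + visits_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (rate_b - rate_a) / se
    # Normal CDF of z: probability the observed lift isn't just chance.
    return 0.5 * (1 + erf(z / sqrt(2)))

# The numbers from the example: 600 visits / 90 conversions vs.
# 900 visits / 140, 160, and 180 conversions.
print(round(ab_certainty(600, 90, 900, 140) * 100))  # → 62
print(round(ab_certainty(600, 90, 900, 160) * 100))  # → 92
print(round(ab_certainty(600, 90, 900, 180) * 100))  # → 99
```

Note how the certainty climbs from 62% to 92% to 99% as Version B's conversion count rises, matching the progression we saw in the calculator.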
Now each tool sets its own threshold for what it considers statistically significant, but 90% certainty is pretty good when it comes to most marketing decisions. If you don't see a certainty value in the neighborhood of 90%, then you need to run the advertisement longer to gather more data. If running it longer still returns the same result and it's not yet statistically significant, then it's likely not worth making the change. Consider adding A/B testing to your online marketing workflow as a way to incrementally increase your performance.