Notes from the field is a series featuring best practices in optimization from Optimizely Experimentation's 3-star Solutions Partners.
This article was written by Conversion, a featured 3-star Optimizely Experimentation Solutions Partner, and was previously published on Conversion's blog under the title Spotting patterns - the difference between making and losing money in A/B testing.
Misinterpreting the patterns in your A/B test results can lead you to make changes to your site that actually harm your conversion rate. There is more to analyzing A/B test results than reading off the conversion rate change at a single step of your funnel. Often, the pattern of the results reveals greater insights than the individual numbers. At Conversion.com, we have run and analyzed hundreds of A/B and multivariate tests.
In our experience, results generally fall into one of five distinct patterns:
- The Big Winner
- The Big Loser
- The Steep Drop-off
- The Slow Rise
- The Noisy Signal
In this article we will cover:
- Why you should track and analyze results across every step of your funnel
- What the five most common data patterns mean
- What action you should take when you encounter one of these patterns
Before we identify the five patterns, it is important to note that you should track and analyze results at every step of your funnel when you A/B test, not just the step that immediately follows your test page. By building a detailed pattern of results, you gain visibility that extends beyond conversions on the next page.
Detailed pattern tracking lets you pinpoint the exact step in your funnel where visitors abandon the process and how that differs for each variation of the page that you are testing. This can help to answer the more important question of why they are abandoning the funnel. If the answer to this is not obvious, running user tests or watching recorded user sessions of your test variations can help you develop these insights and design a successful follow-up test.
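To make this concrete, here is a minimal sketch of this kind of per-step analysis in Python. The funnel steps and visitor counts are hypothetical; substitute the steps and numbers from your own analytics:

```python
# Hypothetical funnel: visitors reaching each step, for the original
# page (control) and the new version (variation).
FUNNEL = ["Test page", "Basket", "Checkout", "Payment", "Order confirmation"]

control   = [10000, 4000, 2000, 1200, 1000]
variation = [10000, 4200, 2100, 1250, 1045]

def relative_lift(control_count, variation_count):
    """Relative change in visitors reaching a step, versus control."""
    return (variation_count - control_count) / control_count

# Compare every step after the test page, not just the first one.
for step, c, v in zip(FUNNEL[1:], control[1:], variation[1:]):
    print(f"{step:18} control={c:5} variation={v:5} lift={relative_lift(c, v):+.1%}")
```

Reading the lift at every step, rather than only at the first, is what makes the five patterns below visible.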
The Big Winner
This is the type of test result that we all love. Your new version of a page converts X% more visitors to the next step than the original, and this X% increase continues uniformly all the way to Order Confirmation.
For example, suppose we see 10% more visitors reaching each step of the funnel.
This pattern tells us that the new version of the test page successfully encourages 10% more visitors who see the experiment to reach the next step of the funnel, and from there onwards they convert at the same rate as everyone else. The overall result is a 10% increase in sales, so we would probably implement this new version permanently.
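As a quick sanity check, the arithmetic of this pattern is simple (the counts below are hypothetical):

```python
# Hypothetical "Big Winner": 10% more visitors reach every step after
# the test page, so completed orders rise by the same 10%.
control   = [4000, 2000, 1000]   # visitors reaching each post-test step
variation = [4400, 2200, 1100]

order_lift = (variation[-1] - control[-1]) / control[-1]
print(f"Lift in completed orders: {order_lift:+.0%}")  # +10%
```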
The Big Loser
The opposite of “The Big Winner”: in this pattern, each step shows a roughly equal decrease in conversion rate. The new version of the test page causes 10% fewer visitors to reach the next step, and from then on those visitors convert at the same rate throughout the funnel. The overall result is a 10% decrease in sales, so you would not want to implement this new version of the page.
All is not lost, however. Often an unsuccessful test can be more insightful than a straightforward winner, as the negative result forces you to re-evaluate your initial hypothesis and understand what went wrong. The good news is, you may have stumbled upon a key conversion barrier for your audience. Addressing this barrier in the next test could lead to the positive result you’ve been looking for.
The Steep Drop-off
“We increased click-throughs by 307%!” You’ve probably seen sensational headlines like this being thrown around by people in the optimization industry. Hopefully, like us, you have developed a strong sense of cynicism when you see these kinds of claims. The first question I always ask is, “But how much did sales increase by?” Chances are, if the reported result fails to mention an impact on final sales, then what they actually saw in their test was the pattern that we have dubbed “The Steep Drop-off.”
Test results that follow this pattern show a large increase in the conversion rate in the first step after the experiment page, but this improvement quickly fades away in the later steps until finally there is little to no improvement on the order confirmation page.
This pattern catches many people off-guard because the large improvement to the step following the experiment feels like a positive result. However, this pattern of results often shows that the new version of the page is pushing through a large number of visitors who have no real intent to purchase. This is illustrated by the sudden drops in conversion rate improvement on each successive page, as unqualified extra traffic abandons the funnel.
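To illustrate, here is a hypothetical set of numbers that follows this pattern (the counts are invented for the example):

```python
# Hypothetical "Steep Drop-off": a large click-through gain that fades
# to almost nothing by order confirmation.
steps = ["Next step", "Checkout", "Order confirmation"]
control   = [2000, 1000, 500]
variation = [2600, 1100, 510]

for step, c, v in zip(steps, control, variation):
    print(f"{step:18} lift={(v - c) / c:+.0%}")
# Next step +30%, Checkout +10%, Order confirmation +2%: most of the
# extra clicks came from unqualified traffic that abandoned the funnel.
```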
As with all tests, whether this result can be deemed a success depends on the specifics of the site you are testing and what you hope to achieve. If there are clear opportunities to improve the next steps of the funnel such that a portion of extra traffic can be converted, then it could make sense to address those opportunities first and then re-run this test. If these extra visitors are clicking through by mistake or because they are being misled in some way, then you may find it difficult to convert them later no matter what changes you make. Instead, you could be alienating potential customers by delivering a poor customer experience. You will also be adding a lot of noise to the results of any tests you run on the later pages, due to the extra visitors on those pages who are unlikely ever to purchase.
The Slow Rise
The “Slow Rise” pattern is almost the reverse of the “Steep Drop-off” in that we actually see a drop in conversions on the page after the experiment, but an overall increase in conversions to the order confirmation page.
It may seem counter-intuitive to regard this pattern as a positive, because of the initial drop in conversions on the page following the experiment. Arguably, though, this type of result is as good as, if not better than, “The Big Winner.” Here the new version of the test page is creating what is known as a qualifying effect. Visitors who would otherwise have abandoned at a later step in the funnel are leaving at the first step instead. Those visitors who do continue past the test page, on the other hand, are more qualified and convert at a much higher rate. This explains the positive result to Order Confirmation.
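A hypothetical set of numbers makes the qualifying effect visible: fewer visitors proceed past the test page, but those who do convert at a noticeably higher rate downstream.

```python
# Hypothetical "Slow Rise": 10% fewer visitors proceed, yet orders rise
# 10%, because the remaining traffic is better qualified.
control_next, control_orders = 4000, 1000
variation_next, variation_orders = 3600, 1100

print(f"Next-step lift: {(variation_next - control_next) / control_next:+.0%}")        # -10%
print(f"Order lift:     {(variation_orders - control_orders) / control_orders:+.0%}")  # +10%
print(f"Downstream conversion: {control_orders / control_next:.1%} -> "
      f"{variation_orders / variation_next:.1%}")                                      # 25.0% -> 30.6%
```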
Implementing a change that causes this type of pattern means that visitors who remain in the funnel have expressed a clearer desire to purchase. If visitors are still abandoning at a later stage, it is likely that this is caused by a specific weakness on one of those pages. Having removed a lot of the noise from our results in the form of the unqualified visitors, we are left with a much more reliable measure of the effectiveness of the later steps in the funnel. This means that it will be far easier to identify weaknesses in the funnel itself.
As with “The Big Loser,” there are circumstances in which a result like this may not be preferable. If you already have very low traffic in your funnel, reducing that traffic further could make it even more difficult to get statistically significant results when testing on later pages of the funnel. You may want to look at tests to drive more traffic to the start of your funnel before implementing a change like this.
The Noisy Signal
This final pattern is often the most difficult to extract insight from as it describes results that show very little pattern whatsoever. Here we often see both increases and decreases in conversion rate to the various steps in the funnel.
First and foremost, a lack of a discernible pattern in the results of your A/B test can be a tell-tale sign of insufficient data. At the early stages of experiments, when data levels are low, it is not uncommon to see results fluctuate. Reading too much into results at this stage is a common pitfall. Even apparently strong patterns that emerge at these early stages can quickly disappear with a larger sample.
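Before reading anything into a noisy result, it is worth checking whether the difference at your final step is even statistically distinguishable from zero. Here is a minimal sketch of a two-proportion z-test using only the Python standard library (the visitor and order counts are hypothetical):

```python
from math import erf, sqrt

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Hypothetical early-stage data: an apparent lift on a small sample.
z, p = two_proportion_z_test(55, 500, 45, 500)
print(f"z = {z:.2f}, p-value = {p:.3f}")  # p ≈ 0.29: far too early to call
```

If the p-value is large, the sensible action is to keep the test running rather than to interpret the pattern.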
If your test has a large volume of data and you are still seeing this type of result, then it is likely that the new version of the page is delivering a combination of the effects from “The Steep Drop-off” and “The Slow Rise” patterns. Your changes may be qualifying some traffic while simultaneously pushing other, unqualified traffic through the funnel. If your test involved making multiple changes to a page, try testing the changes separately to pinpoint which changes have positive effects and which have negative effects.
Avoid jumping to conclusions based on increases or decreases in conversions to the single page that immediately follows your experiment, and always track your results through to the end of your funnel when running tests.