- Optimizely Web Experimentation
- Optimizely Personalization
- Optimizely Performance Edge
- Optimizely Feature Experimentation
- Optimizely Full Stack (Legacy)
Optimizely’s Stats Engine focuses on false discovery rate instead of false positive rate, which helps you make business decisions based on reliable results.
The false positive rate can be calculated as the ratio between:
- The number of negative events incorrectly categorized as positive.
- The overall number of actual negative events.
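Written as a formula, using standard confusion-matrix shorthand (FP for false positives and TN for true negatives; this notation is ours, not Optimizely's):

```latex
\text{False positive rate} = \frac{FP}{FP + TN}
```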
In every experiment, there is a risk of getting a false positive result. This happens when an experiment reports a conclusive winner (or loser, depending on the direction of the observed difference), but there is, in fact, no real difference in visitor behavior between your variations.
With traditional statistics, the risk of generating at least one false positive result increases as you add more metrics and variations to your experiment. This is true even though the false positive rate stays the same for each individual metric or variation.
When an experiment runs with more than one variation or metric, these are collectively referred to as multiple hypotheses. It is important to correct the statistical significance calculations when you test these multiple hypotheses at the same time. If you perform several hypothesis tests simultaneously, you run into the multiple comparisons problem: the probability of basing a critical business decision on a false positive result grows rapidly with the number of hypothesis tests you run at once.
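As a rough illustration of how quickly that risk compounds under traditional fixed-horizon testing (a generic statistical calculation, not a description of Stats Engine): with a 5% false positive rate per test, the chance of at least one false positive across m independent tests is 1 - (1 - 0.05)^m.

```python
# Chance of at least one false positive across m independent tests,
# each run at a 5% false positive rate (classical fixed-horizon testing).
alpha = 0.05
for m in (1, 5, 10, 20):
    family_wise_error = 1 - (1 - alpha) ** m
    print(f"{m:>2} tests -> {family_wise_error:.0%} chance of at least one false positive")
```

With 20 variation/metric combinations, the chance of at least one spurious conclusive result is already above 60%.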
False discovery rate
Optimizely helps you avoid this by controlling errors in a different way. Instead of focusing on the false positive rate, Optimizely uses procedures that manage the false discovery rate: the expected proportion of conclusive results that are actually incorrect.
In statistical language, these are incorrect rejections of the null hypothesis (the hypothesis that a particular change to your website caused no change in visitor behavior).
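In the notation commonly used in the multiple-testing literature (our shorthand, not Optimizely's), if V is the number of false discoveries and R is the total number of discoveries (conclusive results), the false discovery rate is:

```latex
\text{FDR} = \mathbb{E}\!\left[\frac{V}{\max(R, 1)}\right]
```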
False discovery rate control is a statistical procedure for simultaneously correcting multiplicity caused by running multiple hypothesis tests. Optimizely's Stats Engine uses a tiered version of the Benjamini-Hochberg procedure for false discovery rate control to correct statistical significance across multiple metrics and variations. The procedure offers a way to increase the power to detect statistically significant differences between variations, all while maintaining a principled bound on the error.
See Peeking at A/B Tests: Why it Matters, and what to do about it, pages 1523-1525, for more information.
Citations for the specific false discovery rate control algorithms incorporated into Optimizely's Stats Engine are:
- Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal statistical society: series B (Methodological), 57(1), 289-300. [LINK]
- Benjamini, Y., & Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of statistics, 1165-1188. [LINK]
Example
Here is an example of how false discovery rate control delivers better results in an experiment using multiple variations and metrics. Imagine a hypothetical experiment with five variations and two distinct metrics:
In this experiment, there are ten different opportunities for a conclusive result (five variations times two metrics). The experiment reports two winners; however, one of them (the one labeled "false winner") is actually inconclusive.
If you used the false positive rate as the measure of risk, you might think the likelihood of choosing the wrong winner is only ten percent, because just one of the ten possible results is incorrect. That would probably seem like an acceptable level of risk.
However, looking at the false discovery rate, your chances of selecting a false winner are actually 50% because the false discovery rate only looks at actual conclusive results instead of all opportunities for results.
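The arithmetic behind those two numbers for this hypothetical experiment:

```python
# Hypothetical experiment: 5 variations x 2 metrics = 10 opportunities for a result.
opportunities = 5 * 2
conclusive_results = 2       # the two reported "winners"
false_results = 1            # one of those winners is actually a false discovery

naive_rate = false_results / opportunities                  # 1 / 10 = 10%
false_discovery_rate = false_results / conclusive_results   # 1 / 2  = 50%
print(f"incorrect results / all opportunities:  {naive_rate:.0%}")
print(f"incorrect results / conclusive results: {false_discovery_rate:.0%}")
```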
If you were running this experiment, the first thing you probably would do is discard all the inconclusive variation/metric combinations. You would then have to decide which of the two winning variations to implement. In doing so, you would have no better than a 50-50 chance of selecting the variation that actually helps drive the visitor behavior you wanted to encourage.
A false discovery rate of 50% would definitely be alarming. Because Optimizely uses techniques that keep the false discovery rate low—approximately ten percent—your chances of selecting a true winning variation to implement are much higher than if you were using a tool that relied on more traditional statistical methods.
To learn how to capture more value from your experiments, either by reducing the time to statistical significance or by increasing the number of conversions collected, see Stats Accelerator.
Rank your metrics to minimize risk
Consider an experiment with seven events:
- One headline metric that determines the success of your experiment.
- Four secondary metrics tracking supplemental information.
- Two diagnostic metrics used for debugging.
These metrics are not all equally important. Also, statistical significance is not as meaningful for some (the diagnostic metrics) as for others (the headline metric).
Optimizely solves this problem by allowing you to rank your metrics:
- The first ranked metric is your primary metric.
- Metrics ranked 2 through 5 are considered secondary.
- Secondary metrics take longer to reach significance as you add more, but they do not impact the primary metric's speed to significance.
- Finally, any metrics ranked beyond the first five are monitoring metrics.
- Monitoring metrics take longer to reach significance if there are more of them but have no impact on secondary metrics and no impact on the primary metric.
Your chance of making a mistake on your primary metric is controlled. The false discovery rate across all other metrics is also controlled, while Optimizely prioritizes reaching statistical significance quickly on the metrics that matter most.
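As a purely illustrative sketch of how ranks map to these tiers (the helper function and metric names below are ours, not part of any Optimizely SDK):

```python
def metric_tier(rank):
    """Classify a metric by its rank, following the tiers described above.
    (Illustrative helper only; not part of any Optimizely SDK.)"""
    if rank == 1:
        return "primary"      # determines the success of the experiment
    elif 2 <= rank <= 5:
        return "secondary"    # adding more slows only the secondary tier
    else:
        return "monitoring"   # no impact on primary or secondary metrics

# The seven-event example from above: 1 headline, 4 secondary, 2 diagnostic.
ranked_metrics = ["revenue per visitor", "add to cart", "sign-ups",
                  "page views", "bounce rate", "script errors", "load time"]
for rank, name in enumerate(ranked_metrics, start=1):
    print(f"{rank}. {name}: {metric_tier(rank)}")
```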
Like any other statistical test, Stats Engine does not completely shield the user from the effects of randomness. Even in cases where there is no underlying difference between the control and the treatment, differences between characteristics of users bucketed to different variations will generate slight differences in performance between the variants.
These fluke differences may occasionally be significant enough to cause Stats Engine to declare statistical significance on a variation, but this should not happen frequently. The rate at which this happens is controlled according to the confidence threshold set by the user. With a confidence level (statistical significance threshold) of 90%, the user can expect an A/A comparison to result in statistical significance at most 10% of the time.
But notably, this still allows for the occasional false discovery. If the user rarely observes a false discovery, then there is no reason to suspect an issue. If it happens consistently, there may be a need for a deeper dive.
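To see why occasionally significant A/A results are expected, here is a rough simulation sketch using a classical fixed-horizon two-proportion z-test at a 90% confidence threshold. This is generic statistics for illustration only, not Optimizely's sequential Stats Engine, and the sample sizes are arbitrary.

```python
import math
import random

def two_proportion_z_pvalue(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a classical two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_a / n_a - conv_b / n_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

random.seed(0)
runs, visitors, true_rate = 1000, 2000, 0.10
false_positives = 0
for _ in range(runs):
    # A/A comparison: both "variations" have the same underlying conversion rate.
    conv_a = sum(random.random() < true_rate for _ in range(visitors))
    conv_b = sum(random.random() < true_rate for _ in range(visitors))
    if two_proportion_z_pvalue(conv_a, visitors, conv_b, visitors) < 0.10:
        false_positives += 1  # declared "significant" at 90% confidence

print(f"A/A comparisons declared significant: {false_positives / runs:.1%}")  # roughly 10%
```

Roughly one in ten identical comparisons is declared significant at a 90% threshold, which is exactly the behavior the confidence level promises.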