- Optimizely Web Experimentation
- Optimizely Performance Edge
- Optimizely Feature Experimentation
- Optimizely Full Stack
When you run experiments, Optimizely Experimentation's Stats Engine continuously analyzes your results to determine whether they are statistically significant: that is, how unlikely your experiment results would be if there were no difference between your variation and the baseline. Use the statistical significance setting to set the project-wide threshold at which Optimizely Experimentation declares significant results (winners and losers) on the Results page.
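To make the idea concrete, the sketch below computes a classic fixed-horizon two-proportion z-test and compares it against a 90% significance threshold. This is only an illustration of what "statistically significant" means; Optimizely's Stats Engine uses sequential testing, so its numbers will not match this calculation.

```python
from math import erf, sqrt

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates.

    Illustrative fixed-horizon z-test only; Stats Engine's sequential
    approach produces different (always-valid) results.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical counts: 200/2000 conversions on baseline, 250/2000 on variation
p = two_proportion_p_value(200, 2000, 250, 2000)
significant = (1 - p) >= 0.90   # compare against a 90% significance setting
```

Here `1 - p` plays the role of the "statistical significance" number shown on the Results page: the closer it is to 1, the less likely the observed difference is due to chance.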
Trade-offs of changing the statistical significance setting
There are trade-offs to consider before changing the statistical significance setting. In general, a higher significance setting requires a larger sample size, so Optimizely Experimentation needs more time to declare significant results. A lower significance setting decreases the time needed to declare significant results but increases the chance of false positives (declaring a winner or loser when there is no real difference).
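The sample-size side of this trade-off can be sketched with the standard fixed-horizon formula for comparing two conversion rates. The baseline rate, lift, and power values below are hypothetical, and Optimizely's sequential Stats Engine sizes samples differently; the point is only the direction of the effect.

```python
from statistics import NormalDist

def required_sample_size(p_base, lift, significance, power=0.8):
    """Approximate visitors needed per variation to detect a relative lift.

    Textbook two-proportion formula, shown only to illustrate the
    trade-off between the significance setting and sample size.
    """
    alpha = 1 - significance                 # 0.90 significance -> alpha = 0.10
    p_var = p_base * (1 + lift)              # expected variation rate
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    return int(variance * (z_alpha + z_beta) ** 2 / (p_base - p_var) ** 2) + 1

# 10% baseline conversion rate, detecting a 10% relative lift:
n_90 = required_sample_size(0.10, 0.10, significance=0.90)
n_95 = required_sample_size(0.10, 0.10, significance=0.95)
# Raising the threshold from 90% to 95% inflates the required sample size.
```

The higher the significance setting, the more visitors each variation needs before a result can be declared, which is why experiments take longer at higher thresholds.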
Change the statistical significance setting
To change the statistical significance setting, go to Settings > Advanced.
After you adjust the setting, scroll down and click Save. Allow a few minutes for the setting to take effect.
(Screenshots show the setting for an Optimizely Feature Experimentation project and for an Optimizely Web Experimentation project.)
Summary
On the Results page, the primary metric's statistical significance is calculated independently from all other metrics and monitoring goals.
The primary metric is not affected by secondary metrics.
- If you add many metrics and variations with little to no improvement, it takes longer to reach significance on your secondary metrics.
- If the number of metrics stays the same, larger (potential) improvements make it easier and faster to find winners and losers overall.
- Stats Engine adjusts statistical significance for every metric and variation, so testing more does not raise the risk of false positives.
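The multiple-comparisons idea behind the last point can be sketched with a Benjamini-Hochberg false-discovery-rate correction. This is a standard textbook procedure, not Optimizely's exact implementation; Stats Engine's sequential approach differs, but the goal of keeping false positives controlled as metrics are added is the same.

```python
def benjamini_hochberg(p_values, fdr=0.10):
    """Flag which hypotheses to call significant while controlling the
    false discovery rate across many simultaneous comparisons.

    Illustrative only; Stats Engine's actual correction is different.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k whose p-value clears the stepped-up
    # threshold k/m * fdr; all ranks up to k are declared significant.
    cutoff = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * fdr:
            cutoff = rank
    significant = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= cutoff:
            significant[i] = True
    return significant

# Four hypothetical metrics: two real effects, two pure noise
flags = benjamini_hochberg([0.001, 0.02, 0.40, 0.90], fdr=0.10)
```

Because each metric's threshold tightens as more comparisons are made, adding metrics does not inflate the overall false-positive risk, which is the behavior the summary above describes.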