- Optimizely Web Experimentation
- Optimizely Performance Edge
- Optimizely Feature Experimentation
- Optimizely Full Stack
When you run experiments, Optimizely Experimentation's Stats Engine continuously analyzes your results to determine whether they are statistically significant: that is, how unusual your experiment results would be if there were actually no difference between your variation and baseline, and the observed lift were due to random chance alone. With the statistical significance setting, you can set a project-wide significance level at which Optimizely Experimentation declares significant results (winners and losers) on the Results page.
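To make the idea concrete, here is a minimal sketch of the classical fixed-horizon version of this question: given conversion counts for a baseline and a variation, how unusual would the observed difference be if both truly performed the same? Note that Stats Engine itself uses a sequential testing method, not this textbook z-test; the function name and numbers below are illustrative assumptions, not Optimizely's API.

```python
import math

def two_proportion_significance(conv_a, n_a, conv_b, n_b):
    """Classical fixed-horizon two-proportion z-test (illustrative only).

    Returns the two-sided p-value: how unusual the observed lift would
    be if the variation and baseline truly converted at the same rate.
    """
    p_a = conv_a / n_a          # baseline conversion rate
    p_b = conv_b / n_b          # variation conversion rate
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# A 90% significance setting corresponds to declaring a winner or
# loser when the p-value drops below 1 - 0.90 = 0.10.
significance_setting = 0.90
p = two_proportion_significance(100, 1000, 140, 1000)
print(p < (1 - significance_setting))
```

The hypothetical example above (10% vs. 14% conversion on 1,000 visitors each) clears a 90% significance bar; with smaller samples or a smaller lift, the same code would return `False`.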
Trade-offs of changing the statistical significance setting
You should be aware of certain trade-offs associated with changing the statistical significance setting. In general, a higher significance setting is more accurate but increases the time required for Optimizely Experimentation to declare significant results, because it requires a larger sample size. A lower significance setting decreases the time needed to declare significant results, but it also increases the chance that some of those results are false positives.
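The sample-size side of this trade-off can be sketched with the standard fixed-horizon formula for a two-proportion test. This is a textbook approximation, not Stats Engine's sequential math, and the baseline rate and minimum detectable effect below are assumed values for illustration:

```python
import math
from statistics import NormalDist

def required_sample_size(baseline_rate, mde, significance, power=0.80):
    """Approximate visitors needed per variation (fixed-horizon test).

    significance : the project-wide setting, e.g. 0.90 for 90%
    mde          : minimum detectable effect as a relative lift, e.g. 0.10
    """
    norm = NormalDist()
    alpha = 1 - significance
    z_alpha = norm.inv_cdf(1 - alpha / 2)   # two-sided test
    z_beta = norm.inv_cdf(power)
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * var / (p2 - p1) ** 2)

# Higher significance settings demand more visitors per variation:
for sig in (0.80, 0.90, 0.95, 0.99):
    print(sig, required_sample_size(0.05, 0.10, sig))
```

Running this shows the required sample size growing monotonically as the significance setting rises, which is why a higher setting takes longer to declare winners and losers.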
Change the statistical significance setting
To change the statistical significance setting, go to Settings > Advanced. After you adjust the setting, scroll down and click Save. Allow a few minutes for the new setting to take effect.
For example, for an Optimizely Feature Experimentation project:
For an Optimizely Web Experimentation project:
The primary metric is not impacted by secondary metrics, and secondary metrics are not impacted by the primary metric.
- If you add many metrics and variations that have little to no improvement, it will take longer to reach significance on all your secondary metrics.
- If you add metrics and variations that have large improvements, it makes it easier (faster) to find winners and losers overall.
- If all your metrics and variations have roughly the same improvement, it will take roughly the same time to find winners and losers no matter how large the group.
- This removes the hidden risks associated with testing more goals and variations: no matter how many you test, you will see a correctly adjusted statistical significance.
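The "correctly adjusted" behavior described above is a form of false discovery rate control: the more metrics and variations you test, the higher the bar each one must clear. Stats Engine applies its own sequential variant; the classical Benjamini-Hochberg procedure below is only a sketch of the underlying idea, with made-up p-values:

```python
def benjamini_hochberg(p_values, fdr=0.10):
    """Benjamini-Hochberg step-up procedure (illustrative sketch).

    Returns one boolean per test: True means 'declare significant' while
    keeping the expected share of false positives at or below `fdr`.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    significant = [False] * m
    # Find the largest rank k with p_(k) <= (k / m) * fdr ...
    threshold_rank = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * fdr:
            threshold_rank = rank
    # ... then declare every test up to that rank significant.
    for rank, i in enumerate(order, start=1):
        if rank <= threshold_rank:
            significant[i] = True
    return significant

print(benjamini_hochberg([0.001, 0.008, 0.04, 0.20]))
# -> [True, True, True, False]
```

Because the per-test threshold `(k / m) * fdr` shrinks as the number of tests `m` grows, adding more low-impact metrics and variations makes each individual result harder to declare significant, which matches the behavior of the bullets above.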
Swapping out the primary metric does not, by itself, trigger a recalculation of the results for the new primary metric.