Analyze Results
- Why Stats Engine results sometimes differ from classical statistics results
- Why Stats Engine controls for false discovery instead of false positives
- Why is my experiment failing to reach statistical significance?
- The Results page for Optimizely Web Personalization
- The Experiment Results page for Optimizely
- Take action based on the results of an experiment
- Stats Engine: How Optimizely calculates results
- Stats Engine: How and why statistical significance changes over time
- Statistical significance in Optimizely
- Share your results with stakeholders
- Send all traffic to a winning variation
- Segment your results in Optimizely Web
- Segment your results in Optimizely Full Stack
- Run and interpret an A/A test
- Reference for columns in CSV export files for campaigns and experiments
- Iterate on campaigns and share results in Personalization
- IP Filtering: Exclude IP addresses or ranges from your results
- Interpret your results
- How Optimizely counts conversions
- How long to run an experiment
- False discovery rate control
- Discrepancies in third-party data
- Data retention policy update and FAQs
- Data freshness
- Confidence intervals and improvement intervals
- Changing an experiment while it is running
- Access Optimizely data export