Analyze results

  • Data Retention Policy update and FAQs
  • Why Stats Engine results sometimes differ from classical statistics results
  • Why Stats Engine controls for false discovery instead of false positives
  • Why is my experiment failing to reach statistical significance?
  • The Results page for Optimizely Web Personalization
  • The Experiment Results page
  • Take action based on the results of an experiment
  • How Stats Engine calculates Results
  • Stats Engine: How and why statistical significance changes over time
  • Statistical significance in Optimizely Experimentation
  • Share your results with stakeholders
  • Send all traffic to a winning variation
  • Segment your results on Optimizely Web Experimentation
  • Segment your results in Optimizely Feature Experimentation
  • Run and interpret an A/A test
  • Reference for columns in CSV export files for campaigns and experiments
  • Iterate on campaigns and share results in Optimizely Web Personalization
  • IP Filtering: Exclude IP addresses or ranges from your results
  • Interpret your Optimizely Experimentation Results
  • How Optimizely Experimentation counts conversions
  • How long to run an experiment
  • False discovery rate control
  • Discrepancies in third-party data
  • Data freshness
  • Changing an experiment while it is running
  • Access Optimizely Experimentation Enriched Event Export data
  • Confidence intervals and improvement intervals