Optimizely Experiment Results page

  • Optimizely Web Experimentation
  • Optimizely Performance Edge
  • Optimizely Feature Experimentation
  • Optimizely Full Stack (Legacy)

The Optimizely Experiment Results page helps you measure success in an experiment. You can examine each metric and variation to see how visitors respond to changes you made to your site. Segment your results to learn more about your visitors' behaviors.

The Experiment Results page is powered by the Stats Engine, showing you a data-rich picture of how your visitors interact with your site. It includes confidence intervals and incorporates a Stats Engine improvement that reduces the false discovery rate.

Some important things to know:

  • Visitors are bucketed into variations by chance according to the traffic distribution percentage you set, so you may not see the same number of visitors in each variation (see the conceptual sketch after this list).
  • Results in Optimizely Web Experimentation and Optimizely Feature Experimentation are in local time, according to the time zone set on your machine. Results are typically available within five to ten minutes of Optimizely receiving the data. See data freshness to learn more.
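
Optimizely's production bucketing uses its own hashing scheme, so the following is only a conceptual Python sketch of the idea: each visitor is deterministically mapped into a bucket, and bucket ranges are sized by the traffic distribution. Per-variation counts still vary by chance because which visitors arrive is random. All names and the hash choice are illustrative, not Optimizely's implementation.

```python
# Conceptual sketch only -- not Optimizely's actual bucketing algorithm.
import hashlib

def bucket_variation(visitor_id: str, experiment_id: str,
                     distribution: dict[str, float]) -> str:
    """Deterministically assign a visitor to a variation.

    distribution maps variation names to traffic percentages summing to 100.
    """
    digest = hashlib.md5(f"{experiment_id}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000  # a bucket in [0, 10000)
    threshold = 0.0
    for variation, pct in distribution.items():
        threshold += pct * 100  # convert percentage to bucket units
        if bucket < threshold:
            return variation
    return "excluded"  # traffic held out of the experiment, if any

# A 50/50 split sizes the bucket ranges equally, but the visitors who happen
# to arrive are random, so per-variation counts are rarely exactly equal.
print(bucket_variation("visitor-42", "exp-1", {"original": 50, "variation_1": 50}))
```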

For Optimizely Feature Experimentation, most of this article is accurate, but the UI might display differently. See Analyze results for information on Optimizely Feature Experimentation results.

About the page

You can find the Experiment Results page in the following ways:

  • Optimizely Web Experimentation – Go to Experiments > Results or Manage Experiment > View Results. If you are using Optimizely Personalization, there is a slightly different results page.
  • Optimizely Feature Experimentation – Go to Reports and select your flag.

At a high level, the Experiment Results page displays the following information:

  1. Options to pause, preview, or archive the experiment.
  2. Experiment health indicator based on automatic sample ratio mismatch (SRM) detection.
  3. The date when changes were last published.
  4. The number of days running. Optimizely looks at the exact times an experiment was started or stopped and reports the number of whole days it is or was running. Fractional values are truncated rather than rounded up; for example, 17.8 becomes 17 (see the sketch after this list).
  5. Audiences targeted in the experiment.
  6. Pages included in the experiment.
  7. The number of visitors.
  8. Description of the experiment, if provided.
  9. Summary module – Provides a high-level overview of the experiment and lets you compare how each variation performs against the original on the primary metric.
  10. Metrics modules – Displays results for each metric in your experiment. The primary metric is first and expanded by default. Secondary metric results are automatically collapsed. Click the arrow next to the secondary metric name to expand its results or click Expand all to view secondary metric results. 
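
The truncation rule described in item 4 can be sketched in a few lines of Python (timestamps are illustrative):

```python
# Sketch of the "days running" rule: whole days between start and stop,
# with the fractional part truncated (17.8 -> 17), never rounded up.
from datetime import datetime

def days_running(started: datetime, stopped: datetime) -> int:
    elapsed_days = (stopped - started).total_seconds() / 86_400
    return int(elapsed_days)  # int() truncates toward zero

start = datetime(2024, 1, 1, 9, 0)
stop = datetime(2024, 1, 18, 4, 12)  # about 16.8 days later
print(days_running(start, stop))      # -> 16
```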

The Summary and Primary Metric modules provide an in-depth view of your visitors' behavior on your site. Use them to take action based on the results of an experiment. You can also learn how to interpret the results you see in Optimizely Web Experimentation and Optimizely Feature Experimentation.

Metrics

  • Unique Conversions or Total Conversions – When you add a metric to an experiment, you choose unique or total conversions. Unique conversions show deduplicated conversions, so a single visitor who triggers the same event multiple times is counted once. Total conversions show a simple total of conversions for the event.
  • Conversion Rate or Conversions per Visitor – Unique conversions show the conversion rate: the percentage of unique visitors in the variation who triggered the event. Total conversions show conversions per visitor: the average number of conversions per visitor in the variation.
  • Improvement – Optimizely Experimentation displays the relative improvement in conversion rate for the variation over the baseline as a percentage for most experiments. For example, if the baseline conversion rate is 5% and the variation conversion rate is 10%, the improvement for that variation is 100% (see the sketch after this list). You also see the absolute improvement for any experiment where Stats Accelerator is enabled and for multi-armed bandit (MAB) optimizations.
  • Confidence interval – The confidence interval measures uncertainty around improvement. Stats Engine provides a range of values where the conversion rate for a particular experience lies. It starts wide, and as Stats Engine collects more data, the interval narrows to show that certainty is increasing. When a variation reaches statistical significance, the confidence interval lies entirely above or below 0.
    A variation's confidence interval can be statistically significant and positive, statistically significant and negative, or not yet conclusive.
  • Statistical significance – Optimizely Experimentation shows the statistical likelihood that the improvement is from changes you made on the page, not chance. Until the Optimizely Experimentation Stats Engine has enough data to declare statistical significance, the results page states that more visitors are needed and shows you an estimated wait time based on the current conversion rate.
    Multi-armed bandit optimizations do not calculate statistical significance.

    To learn how to capture more value from your experiments by reducing the time to statistical significance or by increasing the number of conversions collected, see Stats Accelerator.
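
The definitions above are easiest to see with a small worked example. This Python sketch uses made-up numbers to show how unique conversions, total conversions, the two rate metrics, and relative improvement are computed:

```python
# Illustrative numbers only -- not data from a real experiment.
events = [  # (visitor_id, event_key) pairs recorded for one variation
    ("v1", "purchase"), ("v1", "purchase"), ("v2", "purchase"), ("v3", "signup"),
]
visitors_in_variation = 10

purchase_visitors = [v for v, e in events if e == "purchase"]
total_conversions = len(purchase_visitors)        # every firing counts: 3
unique_conversions = len(set(purchase_visitors))  # deduplicated per visitor: 2

conversion_rate = unique_conversions / visitors_in_variation         # 0.2
conversions_per_visitor = total_conversions / visitors_in_variation  # 0.3

# Relative improvement of a variation over the baseline, as shown on the
# page: a 5% baseline against a 10% variation is a 100% improvement.
baseline_rate, variation_rate = 0.05, 0.10
improvement = (variation_rate - baseline_rate) / baseline_rate
print(f"{improvement:+.0%}")  # -> +100%
```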

Automatic experiment health indicator

Optimizely Experimentation's automatic SRM detection discovers experiment deterioration early. An SRM occurs when the traffic distribution between variations in a Stats Engine A/B experiment becomes severely and unexpectedly unbalanced. To rapidly detect deterioration caused by mismanaged traffic distribution, Optimizely continuously checks traffic counts throughout an experiment. If Optimizely detects a traffic imbalance, the health indicator updates immediately. If your experiment health shows anything other than Good, investigate why the traffic imbalance is occurring. See Causes of imbalances for information.
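
Optimizely performs this check for you, and its internal method is not documented here. If you want to sanity-check traffic counts yourself, one common approach to detecting SRM is a chi-square goodness-of-fit test, sketched below with made-up visitor counts (requires scipy):

```python
# A common do-it-yourself SRM check; Optimizely's internal detection may differ.
from scipy.stats import chisquare

observed = [50_421, 49_123]   # visitors actually bucketed into each variation
expected_split = [0.5, 0.5]   # the traffic distribution you configured
total = sum(observed)
expected = [total * p for p in expected_split]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
if p_value < 0.001:  # a conservative alert threshold often used for SRM
    print(f"Possible SRM (p={p_value:.1e}): investigate before trusting results")
else:
    print(f"No SRM detected (p={p_value:.3f})")
```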

Optimizely Experimentation does not check for visitor imbalances in certain experiment types.

View experiment results

Use graphs, date ranges, attributes, and the baseline to determine results. 

Graphs

You can toggle between different graphs for each metric. To show or hide a chart, click View graph or Hide graph. Use the drop-down list to view different sets of data.

  • Improvement over Time (the default) – Improvement in this metric for each variation compared to the baseline. Improvement is a relative improvement, not cumulative. See Confidence intervals and improvement intervals.
  • Visitors over Time – Total visitors for the variations and the baseline.
  • Conversions over Time – Conversions per day in this metric for each variation, including the original.
  • Conversion Rate over Time – The cumulative conversion rate for each variation, including the original (see the sketch after this list).
  • Statistical Significance over Time – Cumulative statistical significance for the variation.
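
As a reading aid for the cumulative graphs, this Python sketch shows how a Conversion Rate over Time series is built from daily counts (numbers are illustrative). Each point uses all data up to that day, so the line steadies as data accumulates.

```python
# Building a cumulative conversion-rate series from illustrative daily counts.
daily_visitors =    [100, 120,  90, 150]
daily_conversions = [  8,  15,   7,  18]

cum_visitors = 0
cum_conversions = 0
for day, (v, c) in enumerate(zip(daily_visitors, daily_conversions), start=1):
    cum_visitors += v
    cum_conversions += c
    print(f"day {day}: {cum_conversions / cum_visitors:.1%}")
# day 1: 8.0%, day 2: 10.5%, day 3: 9.7%, day 4: 10.4%
```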

Filter by date range

Use the Date Range drop-down to select start and end dates for your results page view. Click Apply. The results generated are in your computer's time zone.

Segment experiment results

By default, Optimizely Experimentation displays results for visitors who enter your experiment. However, not all visitors behave like your average visitors. By segmenting your results, you can gain deeper insights into your customers to design data-driven experiments.

Segments and filters should be used only for data exploration, not for making decisions.

Use the Segment drop-down list to learn about a segment of your visitors. You can segment by a single attribute value or a combination of attribute values. Learn more about using combinations.

All attribute values are available for segmenting in every experiment, even if that experiment's visitors did not send those values. Because attributes are shared across projects, the Segment drop-down list displays any attribute created in any project. However, it does not show attributes that have never been associated with a user in an experiment. A maximum of 500 values can be shown per attribute.

For example:

Experiment 1 – A mobile-only experiment with the audience set to mobile only. Visitors in this experiment send Optimizely the attribute mobile_os, stating whether they are on Android or Apple devices. In this experiment's results, the Segment drop-down list also displays all browser attribute values that Experiment 2 sent to Optimizely.

Experiment 2 – A desktop-only experiment with the audience set to desktop/laptop only. Visitors in this experiment send Optimizely the attribute browser with their browser name. In this experiment's results, the Segment drop-down list also displays all mobile_os attribute values, with the options of Android and Apple.

If you select a segment whose attribute values were not sent by visitors in the experiment, the page shows the attribute value options but a visitor count of 0.

See Segment your results on Optimizely Web Experimentation for default segments. You can also segment by up to 100 custom attributes in Optimizely Web Experimentation or Optimizely Feature Experimentation, with up to 500 values shown per attribute.
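
In Optimizely Feature Experimentation, the attribute values that appear in the Segment drop-down list come from the attributes your code passes when it creates a user context. The following is a minimal sketch with the Python SDK; the SDK key, flag key, event key, and attribute are hypothetical placeholders, so check the SDK documentation for the exact setup in your version:

```python
# Minimal sketch with the Optimizely Feature Experimentation Python SDK.
# All keys and attribute names below are hypothetical placeholders.
from optimizely import optimizely

client = optimizely.Optimizely(sdk_key="YOUR_SDK_KEY")

# Attributes passed here are what later show up as segment values.
user = client.create_user_context("visitor-123", {"mobile_os": "Android"})
decision = user.decide("mobile_checkout_flag")  # buckets the user

if decision.enabled:
    pass  # render the variation

user.track_event("purchase")  # counts toward this user's variation
```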

Change the baseline

Sometimes you may want to see how your variations compare to one variation in particular—which may not be the original. Use the Baseline drop-down list to select a different variation as the baseline.

Change the baseline only for data exploration, not for making decisions.

Share experiment results

Use the Share feature to send your Results page to key stakeholders. Click Share and copy the URL provided.

The Share link gives access only to the results of that experiment. Recipients can segment data, view charts, filter by date range, and more.

If you want to reset the link you shared, click Reset Link. Users with the previous link can no longer access the results page.

Export the experiment results data

Use the Export CSV feature to download the results of your experiment as a comma-separated values (CSV) file. You can view the data in your favorite spreadsheet program, share the raw results with others, store the data on your machine, or perform additional analysis.

Click the Export icon to download the CSV file of the results shown on the page (limited to the Date Range and Segment selected).

See Reference for columns in CSV export files for campaigns and experiments. You can also access your Optimizely Web Experimentation and Optimizely Feature Experimentation export data through an Amazon S3 bucket.
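
If you analyze the export programmatically, a sketch like the following loads it with pandas. The column names in the commented lines are placeholders; see the CSV column reference linked above for the actual schema of your export:

```python
# Loading an exported results CSV for further analysis (schema varies;
# see Optimizely's CSV column reference for the real column names).
import pandas as pd

df = pd.read_csv("experiment_results.csv")
print(df.head())

# Example exploration with hypothetical column names:
# summary = df.groupby("variation_name")[["visitors", "conversions"]].sum()
# summary["rate"] = summary["conversions"] / summary["visitors"]
# print(summary)
```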

Manage metrics

Click the More icon > Manage Metrics to add or remove metrics or set a new primary metric. 

  • The first ranked metric is your primary metric.
  • Metrics ranked 2 through 5 are considered secondary.
    • Secondary metrics take longer to reach significance as you add more, but they do not impact the primary metric's speed to significance.
  • Finally, any metrics ranked beyond the first five are monitoring metrics.
    • Monitoring metrics take longer to reach significance if there are more of them but have no impact on secondary metrics and no impact on the primary metric.

Stats Engine ensures that the primary metric (which signals whether a variation "wins" or "loses") reaches significance as quickly as possible.

Edit experiment

Click Edit Experiment to make changes to your experiment. Use this option to pause a variation or adjust your traffic distribution.

Reset results

Reset Results lets you reset your results page. This affects the data displayed on the results page and what Stats Engine calculates for this experiment.

The raw data for your experiment is still available after the reset. If you need to access it, use the Data Export feature. This lets you access your Optimizely Experimentation Events Export data through an Amazon S3 bucket.
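
If you pull the export programmatically, a sketch with boto3 might look like the following. The bucket name and prefix are placeholders; use the values and credentials from your account's data export settings:

```python
# Listing exported event files in S3; bucket and prefix are placeholders.
import boto3

s3 = boto3.client("s3")  # assumes AWS credentials are configured
response = s3.list_objects_v2(
    Bucket="optimizely-events-data",  # placeholder bucket name
    Prefix="v1/account_id=123/",      # placeholder prefix
)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```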

After you click Reset Results, a dialog box displays an agreement to verify that you understand this is a permanent action for the results page. It also protects against accidental resets.

You must select all three boxes to click Reset Results.

Clicking Reset Results dismisses the dialog box. You may need to refresh the page to see your change reflected. The results page should now display as though the experiment has just started and there is no data yet. Visitor counts, experiment duration, and metrics return to 0. A notation displays showing the date of the last reset, and the Date Range should only show the current date. 

Resetting the results does not reset variation bucketing assignments. Resetting the results is a front-end "wipe." As stated above, the underlying raw data events for the past results are still available through data exports. When the results are reset, visitors continue to see the variation they were previously bucketed into and count toward the correct variation.

Troubleshoot the Results page

Optimizely caches your results data for display on the results page. Optimizely fetches the latest data and refreshes the results page automatically.

When Optimizely cannot fetch the cached data or the results are not yet calculated, Optimizely calculates the data in the backend and automatically reloads the page.

If a fatal error occurs, the results page displays an error message.

After a few minutes, try refreshing the page. If the problem persists, see the Optimizely Experimentation troubleshooting documentation or contact Support.