- Optimizely Web Experimentation
- Optimizely Performance Edge
- Optimizely Feature Experimentation
- Optimizely Full Stack (Legacy)
Use the Optimizely Experiment Results page to measure your A/B test success. Review each metric and variation to see how visitors respond to your site or application changes. Use segments to investigate visitor behavior in more detail.
Optimizely Experimentation's Stats Engine powers the Experiment Results page. It provides a data-rich view of visitor interactions, includes confidence intervals, and applies false discovery rate control.
The following details are important to know:
- Visitor bucketing – Visitors are randomly bucketed into variations based on your traffic distribution settings, so variation audiences may differ in size. A conceptual bucketing sketch follows this list.
- Local time and data freshness – Results display in your computer’s time zone and typically become available within five to ten minutes of data ingestion. See Data freshness.
- UI differences for Feature Experimentation – Although most of this article applies, the results page UI varies slightly in Optimizely Feature Experimentation. See Analyze results.
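To make the bucketing behavior above concrete, here is a minimal sketch of deterministic hash-based assignment under an even split. It is illustrative only and is not Optimizely's actual bucketing algorithm; the function, IDs, and weights are hypothetical.

```python
# Conceptual sketch of deterministic bucketing (not Optimizely's real algorithm).
import hashlib

def bucket(visitor_id, experiment_id, weights):
    """Map a visitor to a variation using a hash of the visitor and experiment IDs."""
    digest = hashlib.sha256(f"{experiment_id}:{visitor_id}".encode()).hexdigest()
    point = int(digest, 16) / 16 ** 64  # deterministic, roughly uniform value in [0, 1)
    cumulative = 0.0
    for variation, weight in weights.items():
        cumulative += weight
        if point < cumulative:
            return variation
    return variation  # guard against floating-point rounding at the upper edge

weights = {"original": 0.5, "variation_1": 0.5}  # traffic distribution settings
counts = {name: 0 for name in weights}
for i in range(10_000):
    counts[bucket(f"visitor_{i}", "exp_123", weights)] += 1

print(counts)  # roughly 5,000 per variation, but rarely exactly equal
```

Because each assignment is effectively an independent coin flip, small differences between variation audience sizes are expected; only severe, unexpected imbalances indicate a problem (see the experiment health indicator below).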
About the results page
To access the Optimizely Experiment Results page:
- Optimizely Web Experimentation – Go to Experiments > Results or Manage Experiment > View Results.
- Optimizely Feature Experimentation – Go to Reports and select your flag.
At a high level, the Experiment Results page displays the following:
- Status – Shows the A/B test's status. See the difference between publish, start, and pause.
- Experiment health – Indicates experiment health by automatically detecting sample ratio mismatches (SRM). See automatic experiment health indicator.
- Last published – Shows the date when changes were last published to the A/B test.
- Days running – Shows the total full days the A/B test has run. Optimizely Experimentation truncates partial days; for example, 17.8 displays as 17.
- Audiences – Lists the audiences targeted in the A/B test.
- Page – (Web Experimentation and Performance Edge only) Lists pages included in the A/B test.
- Visitors – Shows the total number of visitors to the A/B test. See Target visitors with audience conditions.
- Description – Shows the A/B test description, if provided.
- Summarize results – Opens Optimizely Opal and automatically creates a shareable summary of your results. See summarize results.
- Summary – Provides a high-level overview comparing each variation's performance on the primary metric against the original.
- Metrics – Shows results for every metric in the A/B test.
- Segment – Use the segment drop-down lists to filter the results display. See segment results.
- Results last updated and last event – Shows when results were last updated and the timestamp of the most recent event based on your computer's time zone.
- View warehouse-native results – Opens Optimizely Analytics to help you analyze your results with your data warehouse for deeper insights.
Automatic experiment health indicator
Optimizely Experimentation automatically detects sample ratio mismatches (SRMs) to catch early deterioration of an experiment. An SRM occurs when the traffic distribution between variations in a Stats Engine A/B test becomes severely and unexpectedly unbalanced. To detect deterioration caused by a misconfigured traffic distribution quickly, Optimizely continuously checks traffic counts throughout an experiment. If Optimizely detects a traffic imbalance, the health indicator updates immediately. If your Experiment Health displays anything other than Good, investigate why the traffic imbalance occurred. See Causes of imbalances for information. A conceptual sketch of an SRM check follows the list below.
Experiment health is not monitored for the following:
- Optimizely Web Experimentation and Performance Edge – Paused or archived experiments.
- Optimizely Feature Experimentation – Paused flag rules, flags turned off, or archived flags.
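For intuition about what such a check involves, here is a minimal sketch using a chi-squared goodness-of-fit test on hypothetical visitor counts. It is not Optimizely's implementation, and the threshold shown is arbitrary; the built-in health indicator performs this monitoring for you.

```python
# Illustrative SRM check: compare observed visitor counts per variation
# against the counts expected from the configured traffic split.
from scipy.stats import chisquare

observed = [5300, 4700]          # visitors actually bucketed into each variation (hypothetical)
split = [0.5, 0.5]               # configured traffic distribution
total = sum(observed)
expected = [total * p for p in split]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
if p_value < 0.001:              # a very small p-value means the split is unlikely to be chance
    print(f"Possible sample ratio mismatch (p = {p_value:.2e}); investigate before trusting results.")
else:
    print(f"Traffic split looks consistent with the configuration (p = {p_value:.3f}).")
```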
Metrics
The primary metric displays first and is expanded by default. Secondary metrics are collapsed automatically. Click a secondary metric name to expand that single metric, or click Expand all to display every metric.
- Unique Conversions or Total Conversions – When you add a metric to an experiment, you choose unique or total conversions.
  - Unique conversions display deduplicated conversions, so a single visitor who triggers the same event multiple times is counted once.
  - Total conversions display a simple total of conversion events, regardless of how many times a visitor triggers them.
- Conversion Rate or Conversions per Visitor
  - Unique conversions display the conversion rate (the percentage of unique visitors in the variation who triggered the event).
  - Total conversions display conversions per visitor (the average number of conversions per visitor in the variation).
- Improvement – For most experiments, the relative improvement in conversion rate for the variation over the baseline, expressed as a percentage. For example, if the baseline conversion rate is 5% and the variation conversion rate is 10%, the improvement for that variation is 100% (see the worked example after this list). Improvement is also displayed for experiments with Stats Accelerator enabled and for multi-armed bandit (MAB) optimizations.
- Confidence Interval – The confidence interval measures uncertainty around improvement. Stats Engine provides a range of values where the improvement for a particular variation likely lies. The interval starts wide, and as Stats Engine collects more data, it narrows to show that certainty is increasing. When a variation reaches statistical significance, the confidence interval lies entirely above or below zero:
  - Statistically significant and positive – Entirely above zero.
  - Statistically significant and negative – Entirely below zero.
  - Not yet conclusive – Straddles zero.
- Statistical Significance – The statistical likelihood that the improvement comes from changes you made, not chance. Until Stats Engine has enough data to declare statistical significance, the Optimizely Experiment Results page displays that more visitors are needed and includes an estimated wait time based on the current conversion rate.
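The following is a small worked example of how the figures above relate to raw counts. The numbers are hypothetical, and the confidence-interval bounds are made up for illustration; Stats Engine's actual sequential calculations are more involved than this sketch.

```python
# Illustrative only: how the displayed figures relate to raw counts.
baseline = {"visitors": 1000, "unique_conversions": 50, "total_conversions": 64}
variation = {"visitors": 1000, "unique_conversions": 100, "total_conversions": 120}

def conversion_rate(v):
    """Unique conversions / visitors (shown for unique-conversion metrics)."""
    return v["unique_conversions"] / v["visitors"]

def conversions_per_visitor(v):
    """Total conversions / visitors (shown for total-conversion metrics)."""
    return v["total_conversions"] / v["visitors"]

# Relative improvement of the variation over the baseline, as a percentage.
improvement = (conversion_rate(variation) - conversion_rate(baseline)) / conversion_rate(baseline) * 100
print(f"Baseline rate: {conversion_rate(baseline):.1%}")    # 5.0%
print(f"Variation rate: {conversion_rate(variation):.1%}")  # 10.0%
print(f"Improvement: {improvement:.0f}%")                   # 100%

# Reading a confidence interval on improvement (bounds are made up here):
ci_low, ci_high = 20.0, 180.0
if ci_low > 0:
    verdict = "statistically significant and positive"
elif ci_high < 0:
    verdict = "statistically significant and negative"
else:
    verdict = "not yet conclusive"
print(f"CI [{ci_low}%, {ci_high}%] -> {verdict}")
```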
Graphs
You can toggle between different graphs for each metric. To see or hide charts, click Hide graph or View graph. Use the drop-down list to select different sets of data.
The x-axis of the graphs represents the end time of each time bucket used in the Stats Engine calculation. Although the axis displays date values, the graphs are not plotted by calendar date. Instead, they visualize 100 equally divided time buckets that span the entire duration of the experiment at the time of calculation.
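As a rough sketch of the x-axis construction described above (the exact bucket boundaries are an assumption here), the following divides a hypothetical experiment's elapsed time into 100 equal buckets and labels each point with its bucket's end time:

```python
# Illustrative: divide an experiment's duration into 100 equal time buckets
# and label each data point with the bucket's end time (the graph's x-axis).
from datetime import datetime

start = datetime(2024, 3, 1, 9, 0)    # hypothetical experiment start
now = datetime(2024, 3, 18, 17, 30)   # time of the Stats Engine calculation
bucket_width = (now - start) / 100

bucket_ends = [start + bucket_width * (i + 1) for i in range(100)]
print(bucket_ends[0], bucket_ends[49], bucket_ends[-1])
# Consecutive x-axis labels can repeat a calendar date on short experiments,
# or skip dates on long ones, because buckets are slices of elapsed time.
```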
- Improvement over time (default) – Improvement in this metric for each variation compared to the baseline. See Confidence intervals and improvement intervals.
- Visitors over time – Total visitors for the variations and the baseline.
- Conversions over time – Conversions per day in this metric for each variation, including the original.
- Conversion rate over time – The cumulative conversion rate for each variation, including the original.
- Statistical significance over time – Cumulative statistical significance for the variation.
Segment results
Segmenting lets you filter your Optimizely Experiment Results page to see if certain groups of visitors behave differently from your visitors overall. See Segment your results for default Web Experimentation segments.
Filter by date range
Use the Date Range drop-down list to select start and end dates for your results page. Click Apply. The results generated are in your computer's time zone.
Segment visitors
By default, Optimizely Experimentation displays results for visitors who enter your experiment. However, not all visitors behave like your average visitors. By segmenting your results, you can gain deeper insights into your customers to design data-driven experiments.
Use the Segment drop-down list to learn about a segment of your visitors. You can segment by a single attribute value or a combination of attributes.
All attribute value options are displayed for segmenting, regardless of whether the visitors in a specific experiment have sent those attribute values to Optimizely Experimentation. Because attributes are shared across all Experimentation projects, the Segment drop-down list displays any attribute created in any project. However, it does not show attributes that have never been associated with a user in any experiment. A maximum of 500 values can be shown per attribute.
For example:
Experiment 1 – A mobile-only A/B test with the audience set to mobile only. Visitors in this experiment send Optimizely Experimentation the mobile_os attribute, indicating whether they use Android or Apple devices. On this experiment's results page, the Segment drop-down list also displays all browser attribute values that Experiment 2 sent to Optimizely.
Experiment 2 – A desktop-only A/B test with the audience set to desktop/laptop only. Visitors in this experiment send Optimizely Experimentation the browser attribute with their browser name. On this experiment's results page, the Segment drop-down list also displays the mobile_os attribute values Android and Apple.
If you select a segment whose attribute values were never sent for the experiment you are viewing, the attribute value options still appear, but the results display 0 visitors.
See Segment your results on Optimizely Web Experimentation for default segments. You can also segment by up to 100 custom attributes, and 500 values can be shown per attribute in Optimizely Web Experimentation or Optimizely Feature Experimentation.
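Conceptually, segmenting recomputes the metrics over only the visitors whose attributes match the selected values, which is also why a segment that was never sent for an experiment shows 0 visitors. A minimal sketch with hypothetical records:

```python
# Conceptual sketch of segmenting: recompute a metric over only the visitors
# whose attributes match the selected segment. All records here are hypothetical.
visitors = [
    {"id": "v1", "variation": "original",    "attributes": {"browser": "chrome"},    "converted": True},
    {"id": "v2", "variation": "variation_1", "attributes": {"browser": "safari"},    "converted": False},
    {"id": "v3", "variation": "variation_1", "attributes": {"browser": "chrome"},    "converted": True},
    {"id": "v4", "variation": "original",    "attributes": {"mobile_os": "android"}, "converted": False},
]

def segment_conversion_rate(variation, attribute, value):
    """Conversion rate for one variation, restricted to a single attribute value."""
    matches = [v for v in visitors
               if v["variation"] == variation and v["attributes"].get(attribute) == value]
    if not matches:
        return None  # mirrors the results page showing 0 visitors for that segment
    return sum(v["converted"] for v in matches) / len(matches)

print(segment_conversion_rate("variation_1", "browser", "chrome"))     # 1.0
print(segment_conversion_rate("variation_1", "mobile_os", "android"))  # None
```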
Change the baseline
Sometimes you may want to see how your variations compare to one variation in particular, which may not be the original. Use the Baseline drop-down list to select a different variation as the baseline.
Share results
Share experiment results
Click Share and copy the provided URL to send your Optimizely Experiment Results page to key stakeholders.
The Share link only gives access to the A/B test's results. Users of the link can segment data, view charts, filter by date range, and more.
If you want to reset the link you shared, click Reset Link. Users with the previous link can no longer access the Optimizely Experiment Results page.
Export the experiment results data
Use the Export CSV feature to download the results of your A/B test in a comma-separated value (CSV) file. You can also share the raw results with others, store the data on your machine, or perform additional analysis.
Click Export to download the CSV file of the results displayed on the page (limited to the Date Range and Segment selected).
See Reference for columns in CSV export files for campaigns and experiments. You can also access your Optimizely Experimentation export data in other ways.
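If you continue the analysis outside Optimizely, a minimal sketch of loading the export with pandas follows. The file name is a placeholder, and the column names vary; see the CSV export reference linked above for the actual columns.

```python
# Load an exported results CSV for further analysis outside Optimizely.
# "experiment_results_export.csv" is a placeholder file name.
import pandas as pd

df = pd.read_csv("experiment_results_export.csv")
print(df.columns.tolist())  # inspect which columns your export contains
print(df.head())            # preview the first few rows
```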
Additional functionality
Manage metrics
Click More > Manage Metrics to add or remove metrics or set a different primary metric.
- The first-ranked metric is your primary metric.
- Metrics ranked 2 through 5 are considered secondary.
- Secondary metrics take longer to reach significance as you add more, but they do not impact the primary metric's speed to significance.
- Finally, any metrics ranked beyond the first five are monitoring metrics.
- Monitoring metrics take longer to reach significance if there are more of them, but have no impact on secondary metrics and no impact on the primary metric.
Stats Engine ensures that the primary metric (which signals whether a variation "wins" or "loses") reaches significance as quickly as possible.
Edit experiment
Click Edit Experiment to make changes to your experiment. Use this option to pause a variation or adjust your traffic distribution.
Reset results
Resetting your Optimizely Experiment Results page affects the data displayed on the page and what Stats Engine calculates for this experiment.
After you click Reset Results, a confirmation page displays an agreement to verify that you understand this is a permanent action for the Optimizely Experiment Results page.
You must select all three checkboxes before you can click Reset Results.
You may need to refresh the page to see your change reflected. The Optimizely Experiment Results page now displays as though the experiment has just started, and there is no data yet. Visitor counts, experiment duration, and metrics return to 0. A notation displays the date of the last reset, and the Date Range only shows the current date.
Troubleshoot the Optimizely Experiment Results page
Optimizely Experimentation caches your result data for displaying on the Optimizely Experiment Results page. Optimizely Experimentation also fetches the latest data and refreshes the results page automatically.
When Optimizely Experimentation cannot fetch the cached data or the results are not yet calculated, Optimizely Experimentation calculates the data in the backend and automatically reloads the page.
If a fatal error occurs, an error message displays on the page.
After a few minutes, try refreshing the page. If the problem persists, see the troubleshooting Optimizely Experimentation documentation or contact Support.