Create an Experiment Scorecard in Warehouse-Native Analytics

Closed Beta: Warehouse-native analytics is in closed beta. Please contact your Customer Success Manager if you are interested.

  • Optimizely Feature Experimentation
  • Optimizely Web Experimentation
  • Optimizely Performance Edge

The Experiment Scorecard combines your Optimizely Experimentation data with additional information in your data warehouse to enrich your experimentation results. Because the data stays securely in place, your data warehouse remains your single source of truth. The warehouse-native architecture lets you integrate your existing data structures seamlessly, enabling efficient and effective data modeling for your analytics needs.

Prerequisites

To create an Experiment Scorecard, you must:

Configure the Experiment Scorecard

The Experiment Scorecard provides a structured way to compare experiment variations against any business metrics stored in your data warehouse. It simplifies the process of tying experiments to business outcomes and provides deeper insights into user behavior and experiment performance.
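
Conceptually, the scorecard joins your experiment exposure data to business tables already in your warehouse and aggregates the results per variation. The following Python sketch is purely illustrative: the tables and columns (`exposures`, `orders`, `user_id`, `variation`, `revenue`) are hypothetical placeholders, not part of any Optimizely API.

```python
import pandas as pd

# Hypothetical exposure data: which variation each visitor saw.
exposures = pd.DataFrame({
    "user_id":   [1, 2, 3, 4, 5, 6],
    "variation": ["control", "control", "control",
                  "treatment", "treatment", "treatment"],
})

# Hypothetical business data already sitting in the warehouse.
orders = pd.DataFrame({
    "user_id": [1, 3, 4, 5, 5],
    "revenue": [20.0, 35.0, 50.0, 15.0, 10.0],
})

# Join exposures to orders and aggregate revenue per variation --
# the kind of comparison the scorecard automates against your
# warehouse tables without moving the data.
joined = exposures.merge(orders, on="user_id", how="left")
summary = joined.groupby("variation")["revenue"].agg(["count", "sum", "mean"])
print(summary)
```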

  1. Click Experiment Scorecard.
  2. On the definition page, choose the preferred Experiment using the selector, select the dataset, and set the threshold.
  3. Select the Primary Metric. You can use a previously created metric or create a new one. There are two ways to create a new metric (a conceptual sketch of both templates appears after these steps):
    1. Numeric aggregation metric block - Use this template to create simple aggregations over existing columns in your data.
    2. Conversion metric block - Use this template to create event segmentation-style aggregations (for example, a count of events or a count of users).
  4. Choose a format for the metric using the drop-down. The format is set to Percentage by default. (Learn more about creating Metrics.)
  5. (Optional) Add Secondary Metrics. As with the primary metric, you can choose Conversion, Numeric aggregation, or an existing Metric.
  6. Select a Baseline. The baseline is the variant against which the other variants are compared.
  7. Set the time range in the visualization window.
  8. Finally, click Run to see your analysis. The scorecard displays a statistical-significance table showing which variation performed better based on the selected metrics. The resulting table has the following columns (an illustrative calculation of these values appears after these steps):
    1. Value - The percentage of unique visitors in the variation who triggered the event. 
    2. Impact - Optimizely Experimentation displays the relative improvement in conversion rate for the variation over the baseline as a percentage for most experiments.
    3. Statsig - Optimizely Experimentation shows the statistical likelihood that the improvement is from changes you made on the page, not chance. Until the Optimizely Experimentation Stats Engine has enough data to declare statistical significance, the results page states that more visitors are needed and shows you an estimated wait time based on the current conversion rate.
    4. Confidence Intervals - The confidence intervals measure uncertainty around improvement. Stats Engine provides a range of values where the conversion rate for a particular experience lies. It starts wide, and as Stats Engine collects more data, the interval narrows to show that certainty is increasing. When a variation reaches statistical significance, the confidence interval lies entirely above or below 0.
  9. Give the analysis a descriptive name and save it.
  10. (Optional) Segment your results by selecting additional cohorts. Select Performed by, Grouped by, or Filtered by, and choose the options you want to segment by.
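
To make the two metric templates from step 3 concrete, here is a minimal Python sketch of the aggregations each one represents, assuming a hypothetical `events` table with `user_id`, `event`, and `revenue` columns (these names are placeholders, not actual Optimizely objects):

```python
import pandas as pd

# Hypothetical event stream in the warehouse.
events = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 3],
    "event":   ["purchase", "page_view", "page_view",
                "purchase", "purchase", "page_view"],
    "revenue": [25.0, 0.0, 0.0, 40.0, 15.0, 0.0],
})

# Numeric aggregation metric block: a simple aggregation over an
# existing column, for example total revenue per user.
revenue_per_user = events.groupby("user_id")["revenue"].sum()

# Conversion metric block: event segmentation-style aggregations,
# for example the count of "purchase" events and the count of
# unique users who triggered one.
purchase_events = (events["event"] == "purchase").sum()
purchasers = events.loc[events["event"] == "purchase", "user_id"].nunique()

print(revenue_per_user)
print("purchase events:", purchase_events, "| unique purchasers:", purchasers)
```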
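
The Value, Impact, and Confidence Intervals columns from step 8 can be illustrated with a classical two-proportion calculation. This is only a rough sketch with made-up numbers: Optimizely's Stats Engine uses sequential statistics, not the fixed-horizon formula shown here.

```python
import math

# Hypothetical counts: unique visitors and converters per variation.
baseline_visitors, baseline_conversions = 10_000, 500
variant_visitors, variant_conversions = 10_000, 560

# "Value": the percentage of unique visitors who triggered the event.
p_base = baseline_conversions / baseline_visitors
p_var = variant_conversions / variant_visitors

# "Impact": the relative improvement of the variation over the baseline.
impact = (p_var - p_base) / p_base

# A fixed-horizon 95% confidence interval for the absolute difference.
# Stats Engine computes sequential intervals instead, but the reading is
# the same: an interval entirely above or below 0 means the result is
# statistically significant.
se = math.sqrt(p_base * (1 - p_base) / baseline_visitors
               + p_var * (1 - p_var) / variant_visitors)
diff = p_var - p_base
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"baseline value: {p_base:.2%} | variation value: {p_var:.2%}")
print(f"impact: {impact:+.2%}")
print(f"95% CI for the difference: [{ci_low:+.4f}, {ci_high:+.4f}]")
```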