- Optimizely Feature Experimentation
- Optimizely Web Experimentation
- Optimizely Performance Edge
Prerequisites
Complete the following to create an experiment scorecard in Warehouse-Native Experimentation Analytics:
- Add your Optimizely account ID in the Warehouse-Native Analytics app settings. To add your account ID, send an email to support@netspring.io.
- Have an experiment in Optimizely Feature Experimentation or Optimizely Web Experimentation.
- Connect to your data warehouse.
- Create a Decision dataset.
Configure the Experiment Scorecard in Optimizely Analytics
Warehouse-Native Experimentation Analytics provides a structured way to compare experiment variations against any business metrics stored in your data warehouse. It simplifies tying experiments to business outcomes and provides deeper insights into user behavior and experiment performance.
- Click + > Experiment Scorecard.
- Select the preferred Experiment and Dataset on the definition page and set the Threshold.
- Define Decision-making metrics. You can use a previously created metric or create a new one. Learn more about creating metrics.
- (Optional) Add Guardrail metrics.
- Select a Baseline – the default variation against which the other variations are compared.
- Set the time range in the visualization window.
- Click Run to see your analysis. The scorecard displays a statistical significance table, showing which variation performed better based on the selected metrics. The resulting table has the following columns:
  - Value – The percentage of unique visitors who triggered the event in the variation.
  - Impact – The relative improvement in conversion rate for the variation over the baseline, expressed as a percentage for most experiments.
  - Statsig – The statistical likelihood that the improvement is from changes you made on the page, not chance. Until Stats Engine has enough data to declare statistical significance, the results page states that more visitors are needed and shows an estimated wait time based on the current conversion rate.
  - Confidence Intervals – The uncertainty around the measured improvement. Stats Engine provides a range of values where the conversion rate for a particular experience lies. It starts wide, and as Stats Engine collects more data, the interval narrows to show that certainty is increasing. When a variation reaches statistical significance, the confidence interval lies entirely above or below 0.
- Give the analysis a descriptive name and save it.
- (Optional) Segment your results by selecting additional cohorts. Select Performed by, Grouped by, or Filtered by, and select the options you want to segment by.
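To make the scorecard columns concrete, the following is a simplified sketch of how Value, Impact, a confidence interval, and significance can be computed for one variation against the baseline. Stats Engine itself uses a sequential testing method, so this fixed-horizon z-test is an illustration only; the function name and the conversion numbers are invented for the example.

```python
from math import sqrt
from statistics import NormalDist

def scorecard_row(base_conv, base_n, var_conv, var_n, alpha=0.05):
    """Illustrative (not Stats Engine) calculation of scorecard columns."""
    p_base = base_conv / base_n            # baseline conversion rate
    p_var = var_conv / var_n               # "Value": variation conversion rate
    impact = (p_var - p_base) / p_base     # "Impact": relative improvement
    # Standard error of the difference in proportions
    se = sqrt(p_base * (1 - p_base) / base_n + p_var * (1 - p_var) / var_n)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    diff = p_var - p_base
    # "Confidence Intervals": range around the absolute difference;
    # a variation is significant when this interval excludes 0
    ci = (diff - z * se, diff + z * se)
    # "Statsig": two-sided p-value expressed as a significance level
    p_value = 2 * (1 - NormalDist().cdf(abs(diff) / se))
    return {"value": p_var, "impact": impact, "ci": ci,
            "significance": 1 - p_value}

# Hypothetical numbers: 5.0% baseline vs. 5.8% variation, 10,000 visitors each
row = scorecard_row(base_conv=500, base_n=10000, var_conv=580, var_n=10000)
```

With these numbers the interval lies entirely above 0, which is how the scorecard signals that the variation's improvement is statistically significant.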