This functionality is now available to all users. Contact your Customer Success Manager or Account Executive for more information.
- Optimizely Feature Experimentation
- Optimizely Web Experimentation
- Optimizely Performance Edge
Warehouse-Native Experimentation Analytics combines your Optimizely Experimentation data with additional information in your data warehouse, enriching your experimentation results. Because the data stays securely in place, your data warehouse remains your single source of truth. The warehouse-native architecture lets you work with your existing data structures and model your data efficiently for analytics.
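As an illustration of that idea, the sketch below joins experiment decisions to a business table that also lives in the warehouse. The table names, columns, and use of pandas are hypothetical stand-ins; in practice, the equivalent query runs inside your warehouse rather than in Python.

```python
import pandas as pd

# Hypothetical decision data: one row per visitor per variation assignment.
decisions = pd.DataFrame({
    "visitor_id": ["v1", "v2", "v3", "v4"],
    "variation": ["control", "treatment", "control", "treatment"],
})

# Hypothetical business table (for example, orders) that never leaves the warehouse.
orders = pd.DataFrame({
    "visitor_id": ["v1", "v2", "v2", "v4"],
    "revenue": [25.0, 40.0, 15.0, 60.0],
})

# Joining the two in place is what keeps the warehouse the single source of truth:
# experiment results are enriched with business outcomes without exporting data.
enriched = decisions.merge(orders, on="visitor_id", how="left")
print(enriched.groupby("variation")["revenue"].sum())
```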
Prerequisites
Complete the following before you configure Warehouse-Native Experimentation Analytics:
- Add your Optimizely account ID in the Warehouse-Native Analytics app settings. To add your account ID, send an email to support@netspring.io.
- Have an experiment in Optimizely Feature Experimentation or Optimizely Web Experimentation.
- Connect to your data warehouse.
- Create a Decision dataset.
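A Decision dataset is conceptually one row per variation assignment. The columns below are an assumed, illustrative shape only; map them to however your decision data is actually loaded into your warehouse.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DecisionRow:
    """Illustrative (assumed) shape of one row in a Decision dataset."""
    account_id: str      # Optimizely account the decision belongs to
    experiment_id: str   # experiment or rule that produced the decision
    variation_id: str    # variation the visitor was assigned to
    visitor_id: str      # identifier used to join to other warehouse tables
    timestamp: datetime  # when the decision was recorded

example = DecisionRow(
    account_id="1234567",
    experiment_id="homepage_cta_test",
    variation_id="treatment",
    visitor_id="v2",
    timestamp=datetime(2024, 1, 15, 12, 30),
)
```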
Configure Warehouse-Native Experimentation Analytics
Warehouse-Native Experimentation Analytics provides a structured way to compare experiment variations against any business metrics stored in your data warehouse. It simplifies tying experiments to business outcomes and provides deeper insights into user behavior and experiment performance.
- Click Experiment Scorecard. On the definition page, select the preferred Experiment and Dataset and set the Threshold.
- Define Decision-making metrics. You can use a previously created metric or create a new one. You have three options when creating a metric:
- Numeric aggregation – Create simple aggregations over existing columns in your data.
- Conversion – Create event-segmentation-style aggregations (for example, a count of events or users).
- Ratio – Create custom metrics by calculating the ratio of two different events. Learn more about creating Metrics.
- (Optional) Add Guardrail metrics. You can choose from three options (Conversion, Numeric aggregation, and Ratio) or select an existing Metric.
- Select a Baseline. The baseline is the default variation that the other variations are compared against.
- Set the time range in the visualization window.
- Click Run to see your analysis. The scorecard displays a statistical significance table, showing which variation performed better based on the selected metrics. The resulting table has the following columns (a simplified worked example appears after this procedure):
- Value – The percentage of unique visitors in the variation who triggered the event.
- Impact – The relative improvement in conversion rate for the variation over the baseline, expressed as a percentage (for most experiments).
- Statsig – The statistical likelihood that the improvement is from changes you made on the page, not chance. Until Stats Engine has enough data to declare statistical significance, the results page states that more visitors are needed and shows an estimated wait time based on the current conversion rate.
- Confidence Intervals – The uncertainty around the improvement. Stats Engine provides a range of values within which the improvement over the baseline is expected to lie. The interval starts wide, and as Stats Engine collects more data it narrows, showing that certainty is increasing. When a variation reaches statistical significance, the entire confidence interval lies above or below 0.
- Give a descriptive name to the analysis and save it.
- (Optional) Segment your results by selecting additional cohorts. Select Performed by, Grouped by, or Filtered by, and select the options you want to segment by.
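To make the scorecard columns concrete, the worked sketch below computes Value, Impact, and a 95% confidence interval for a simple conversion metric across two variations. The visitor counts are made up, and the two-proportion z-test and Wald-style interval are simplifying assumptions for illustration only; Optimizely's Stats Engine uses sequential testing, so its numbers will differ.

```python
import math

# Hypothetical per-variation counts pulled from the joined warehouse data.
visitors = {"control": 10000, "treatment": 10000}   # unique visitors exposed
converters = {"control": 800, "treatment": 880}     # unique visitors who converted

# Value: share of unique visitors in each variation who triggered the event.
value = {v: converters[v] / visitors[v] for v in visitors}

# Impact: relative improvement of the variation over the baseline.
baseline = "control"
impact = (value["treatment"] - value[baseline]) / value[baseline]

# A simple two-proportion z-test stands in for statistical significance here
# (not the actual Stats Engine calculation, which is sequential).
p_pool = (converters["control"] + converters["treatment"]) / (
    visitors["control"] + visitors["treatment"]
)
se_pooled = math.sqrt(
    p_pool * (1 - p_pool) * (1 / visitors["control"] + 1 / visitors["treatment"])
)
z = (value["treatment"] - value[baseline]) / se_pooled

# 95% confidence interval on the absolute difference in conversion rates;
# at significance, the whole interval lies above or below 0.
se_diff = math.sqrt(
    value["control"] * (1 - value["control"]) / visitors["control"]
    + value["treatment"] * (1 - value["treatment"]) / visitors["treatment"]
)
diff = value["treatment"] - value[baseline]
ci_low, ci_high = diff - 1.96 * se_diff, diff + 1.96 * se_diff

print(f"Value (control):   {value['control']:.2%}")
print(f"Value (treatment): {value['treatment']:.2%}")
print(f"Impact:            {impact:+.1%}")
print(f"z statistic:       {z:.2f}")
print(f"95% CI on lift:    [{ci_low:+.4f}, {ci_high:+.4f}]")
```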