Configure a Bayesian A/B test


Follow these steps to configure a Bayesian A/B test in Optimizely Feature Experimentation. This configuration lets you run an A/B test without a predetermined sample size or a strict analysis plan.

Prerequisites

Before you begin, you must have a Feature Experimentation project with a flag created.

When to use a Bayesian analysis

Consider using Bayesian analysis for your A/B tests when you prefer a more intuitive interpretation of results, such as direct probabilities of one variant being better than another.
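To make that interpretation concrete, the "direct probability" reading can be sketched with a short Monte Carlo computation: model each arm's conversion rate with a Beta posterior and count how often the variant's sampled rate exceeds the control's. The counts below are hypothetical, and this sketch illustrates the Bayesian reading in general, not Optimizely's internal implementation.

```python
import random

def chance_to_beat(conv_a, n_a, conv_b, n_b, draws=100_000, seed=42):
    """Monte Carlo estimate of P(rate_B > rate_A) under uniform Beta(1, 1) priors."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        # Posterior for each arm: Beta(conversions + 1, non-conversions + 1)
        rate_a = rng.betavariate(conv_a + 1, n_a - conv_a + 1)
        rate_b = rng.betavariate(conv_b + 1, n_b - conv_b + 1)
        if rate_b > rate_a:
            wins += 1
    return wins / draws

# Hypothetical data: control converts 100/1000, variant converts 120/1000
print(f"Chance to beat control: {chance_to_beat(100, 1000, 120, 1000):.0%}")
```

Unlike a frequentist p-value, the result reads directly as "the probability that the variant's true conversion rate is higher than the control's."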

Create a Bayesian A/B test rule

Create an A/B test rule

  1. Select a flag from your Feature Experimentation project.
  2. Create an A/B test rule in your desired environment. See Run A/B tests in Feature Experimentation.

Configure your A/B test rule

  1. Configure your rule as normal.
  2. Select Bayesian for the Stats Configuration setting.

    This replaces the default Sequential (Optimizely Stats Engine) statistical methods with the Bayesian approach. See Statistical analysis methods overview for information on the differences between Sequential (Stats Engine), Frequentist (Fixed Horizon) testing, and Bayesian testing.

  3. (Optional) Update the Chance to beat probability threshold. The threshold must be at least 70%.

Complete your A/B test rule configuration

  1. Configure the remainder of your A/B test rule.
  2. Click Save.
  3. Start your A/B test rule and run your flag to launch your experiment. See Manage rules.

Understand Bayesian results

When your Bayesian A/B test is running and collecting data, the results display the following Bayesian-specific metrics:

  • Credible interval – A range that contains the true value of the effect with a specified probability.
    • Example – If the estimated lift in conversion rate is 3%, and the 95% credible interval is [1%, 5%], it means there is a 95% chance that the true lift is somewhere between 1% and 5%.
  • Chance to beat – The probability that a variant performs better than the control.
    • Example – If a variant has a 92% chance to beat control, it means there is a 92% probability that its conversion rate is higher than the control's.
  • Probability of being the best arm – For experiments with multiple variants, this indicates the likelihood that a specific variant is the top performer.
    • Example – In a test with three variants, if Variant C has a 75% probability of being the best arm, it means the probability of Variant C having the highest conversion rate among all tested groups is 75%.
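The metrics above can all be derived from posterior samples of each arm's conversion rate. The sketch below computes a 95% credible interval and the probability of being the best arm for a hypothetical three-arm test, using uniform Beta priors. It is an illustration of the definitions, with made-up arm names and counts, not a reproduction of Optimizely's calculations.

```python
import random

def posterior_draws(conversions, visitors, draws, rng):
    """Sample conversion rates from a Beta(conversions + 1, failures + 1) posterior."""
    return [rng.betavariate(conversions + 1, visitors - conversions + 1)
            for _ in range(draws)]

def credible_interval(samples, level=0.95):
    """Central credible interval taken from sorted posterior samples."""
    s = sorted(samples)
    lo = s[int(len(s) * (1 - level) / 2)]
    hi = s[int(len(s) * (1 + level) / 2) - 1]
    return lo, hi

def prob_best_arm(arm_samples):
    """For each arm, the share of joint draws in which its rate is the highest."""
    draws = len(next(iter(arm_samples.values())))
    wins = {name: 0 for name in arm_samples}
    for i in range(draws):
        best = max(arm_samples, key=lambda name: arm_samples[name][i])
        wins[best] += 1
    return {name: wins[name] / draws for name in arm_samples}

rng = random.Random(7)
# Hypothetical (conversions, visitors) per arm
data = {"control": (100, 1000), "variant_b": (110, 1000), "variant_c": (130, 1000)}
samples = {name: posterior_draws(c, n, 20_000, rng) for name, (c, n) in data.items()}

lo, hi = credible_interval(samples["variant_c"])
print(f"variant_c 95% credible interval: [{lo:.3f}, {hi:.3f}]")
print({name: round(p, 3) for name, p in prob_best_arm(samples).items()})
```

Because the best-arm probabilities are computed over the same joint draws, they always sum to 1 across arms, matching the interpretation in the example above.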

By understanding these metrics, you can make informed decisions based on the updated probabilities derived from your experiment data.

Next steps

For more information, see the following documentation: