- Optimizely Web Experimentation
- Optimizely Performance Edge
- Optimizely Feature Experimentation
- Optimizely Full Stack (Legacy)
Once you decide on a hypothesis, you can design an experiment. Experiment design is a key part of the cost of experimentation: the design and scope of your experiment determine how long it takes to reach statistical significance. See Experiment types for more information.
Use this information to consider:
- Are the results of this experiment valuable enough to justify the traffic or time required? Are there other, more impactful ideas that you could test instead?
- Should you reduce the number of variations to speed up the experiment? If so, how would you redesign it?
- Should you increase the degree of difference between the variation and the original to reach statistical significance sooner?
- How can you design variations that maximize lift for your primary goal?
A statistical calculation called the minimum detectable effect (MDE) can help you connect cost to your experiment design. Use it to make informed decisions about your experiment parameters.
You can also use MDE to prioritize experiments and as part of your experimentation roadmap.
Using MDE
Minimum detectable effect (MDE) is a calculation that estimates the smallest improvement you want an experiment to be able to detect. It determines how "sensitive" an experiment is.
Use MDE to estimate how long an experiment takes given the following:
- Baseline conversion rate
- Statistical significance
- Traffic allocation
You can use Optimizely Experimentation’s Sample Size Calculator to make this calculation.
For example, imagine these parameters:
- Your baseline conversion rate is 15%.
- You want to measure statistical significance to 95%.
- You want to detect a 10% lift at minimum (this is your MDE).
According to the Sample Size Calculator, you would need ~8,000 visitors per variation to reach statistical significance.
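To make the relationship between these inputs concrete, here is a minimal Python sketch of the classical fixed-horizon sample-size formula for comparing two proportions. It assumes 80% statistical power and a two-sided test, which are assumptions not stated above; Optimizely's Stats Engine uses sequential testing, so this approximation will not reproduce the calculator's ~8,000 figure exactly.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variation(baseline, relative_mde,
                              significance=0.95, power=0.80):
    """Classical fixed-horizon sample size for a two-proportion test.

    A standard approximation, not Optimizely's Stats Engine (which uses
    sequential testing), so results differ from the Sample Size Calculator.
    """
    p1 = baseline
    p2 = baseline * (1 + relative_mde)  # expected variation conversion rate
    z_alpha = NormalDist().inv_cdf(1 - (1 - significance) / 2)  # two-sided
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# The example above: 15% baseline, 10% relative MDE, 95% significance
print(sample_size_per_variation(0.15, 0.10))
```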
In reality, you do not know the actual lift in advance. By estimating the minimum lift you want to detect with a given level of certainty, you set boundaries on how much traffic or time you invest in the experiment, which lets you plan and scope it more accurately.
For example, you design the experiment above with four variations. Your site averages 10,000 unique visitors per week. If you show this experiment to 100% of visitors, it may take 3.2 weeks to reach significance.
- 8,000 visitors per variation x 4 variations = 32,000 visitors
- 32,000 visitors / 10,000 visitors per week = 3.2 weeks
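The same arithmetic as a small sketch you can reuse for planning (the function name and the traffic_allocation parameter are illustrative, not part of Optimizely's tooling):

```python
def weeks_to_significance(visitors_per_variation, variations,
                          weekly_visitors, traffic_allocation=1.0):
    """Estimate how long an experiment runs before reaching significance."""
    total_needed = visitors_per_variation * variations
    return total_needed / (weekly_visitors * traffic_allocation)

# The example above: ~8,000 visitors x 4 variations on 10,000 visitors/week
print(weeks_to_significance(8_000, 4, 10_000))  # 3.2
```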
Consider whether the traffic and time are worth it, and how you might design a faster experiment.
Best practices
Here are a few best practices for designing an experiment with MDE:
- Use potential business impact to decide on the sensitivity of your experiment – Many programs accept a less sensitive experiment (a higher MDE) in exchange for speed. You may want a lower MDE when a conversion event is directly connected to revenue: a low-MDE experiment requires much more traffic, but even small lifts in revenue-generating goals can make a big impact. The sketch after this list illustrates the trade-off.
- Use MDE as a guide rather than an exact prediction – The premise of experimentation is that you do not know in advance what effect a given change will produce. Instead of treating the MDE as a prediction, use the calculation to set boundaries on the time you are willing to invest and the value you expect to generate.
- Design impactful variations – If traffic is a concern, consider limiting your variation scope to changes that directly influence the primary conversion event.
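To illustrate the first practice, this short loop reuses the hypothetical sample_size_per_variation function from the earlier sketch. Because required traffic grows with the inverse square of the effect size, halving the MDE roughly quadruples the visitors you need per variation.

```python
# Assumes sample_size_per_variation() from the earlier sketch is defined.
# A lower MDE (a more sensitive experiment) demands far more traffic:
for mde in (0.05, 0.10, 0.20):
    n = sample_size_per_variation(baseline=0.15, relative_mde=mde)
    print(f"{mde:.0%} relative MDE -> ~{n:,} visitors per variation")
```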