Relevant Products:
- Optimizely Web Experimentation
- Optimizely Performance Edge
This topic describes how to:
- Evaluate whether to change an experiment that is already running
- Understand the impact of changing a running experiment
- Pause experiments and start new ones instead of modifying running experiments
Though the Optimizely Experimentation Editor does not forbid it, we strongly recommend that you do not change an active experiment. What types of changes do we recommend against?
- Editing active variations after an experiment has started
- Adding new variations to a running experiment or removing existing ones
- Adding, removing, or changing Audiences, URL Targeting, or Traffic Allocation for a running experiment
Why not change a running experiment?
While an experiment is running, we collect conversion data for the entire run and compare it to the control group's conversion data over the same period. If a change is made midway through the experiment, its effects can only be measured from that point forward, yet they are blended into results collected over the whole run.
Why is this a problem? Suppose a change you made improves conversions by 5%. Encouraged by that result, you decide to add another change to the same variation that you think will have a similar effect.
Suddenly, your conversion rate drops back to the control group's rate. Now you cannot tell whether the drop happened because you made the second change or because the original change performed worse than the numbers initially indicated.
Or suppose the conversion rate did not fall all the way back to the control group's rate but instead dipped to a 2% improvement. Again, you cannot be sure what caused that dip, and you might still read it as an overall improvement. Yet the second change may actually be hurting conversions, with the positive effect of the first change masking that decrease.
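To make the problem concrete, here is a minimal, hypothetical sketch in plain Python (not Optimizely code, and all numbers are made up) showing how results pooled over the whole run can hide which change caused the outcome:

```python
# Illustrative sketch only: why pooled conversion data cannot separate the
# effects of a mid-experiment change. All numbers are hypothetical.

control_rate = 0.10  # control converts at 10%

# First half of the run: the original variation lifts conversions by 5% (relative).
visitors_before, rate_before = 10_000, 0.105

# Second half: a second change is added mid-experiment and hurts conversions.
visitors_after, rate_after = 10_000, 0.095

conversions = visitors_before * rate_before + visitors_after * rate_after
pooled_rate = conversions / (visitors_before + visitors_after)

print(f"Pooled variation rate: {pooled_rate:.3f}")  # 0.100 -> looks identical to control
# The pooled report shows no lift, but you cannot tell whether the first
# change stopped working or the second change cancelled it out.
```

In the pooled report the variation looks no better than the control, even though the first change was genuinely lifting conversions before the second change was introduced.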
Best practices
If you want to change a running experiment, pause the existing experiment and start a new one instead. This way, your changes do not contaminate the data you have already collected. You can easily duplicate an experiment to create a new one similar to the existing one; duplicating an experiment does not duplicate the original experiment's results.
If you intend to pause a variation, treat that decision as final. Do not later compare the paused variation's results to the still-running variations, because any events that affect all variations after the pause are not reflected in the paused variation's data.
If you intend to archive an experiment, note that you cannot unarchive it later. You can, however, duplicate an archived experiment to create a new one and collect new results.
These best practices apply to Optimizely Web Experimentation, Optimizely Performance Edge, Optimizely Feature Experimentation, and all A/B testing tools.