Controlled traffic allocation ramping

  • Optimizely Web Experimentation
  • Optimizely Performance Edge
  • Optimizely Feature Experimentation
  • Optimizely Full Stack (Legacy)
  • Optimizely Personalization

Ramping is the practice of gradually exposing traffic to new test variations. Ramping carelessly can introduce inefficiency and risk. Controlled ramping of your traffic allocation lets you mitigate risk without compromising data accuracy or time to statistical significance.

The controlled ramping procedure is optional guidance for the highly risk-averse, statistically savvy experimenter. It is not required for launching a successful program of experiments on the Optimizely Experimentation platform. This article distinguishes two related terms:

  • Traffic Distribution – The proportion of experiment traffic sent to a particular variation
  • Traffic Allocation – The proportion of total traffic included in the experiment
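
To see how the two interact, here is a small arithmetic sketch in TypeScript. The traffic numbers are hypothetical: allocation decides how many visitors enter the experiment, and distribution divides those visitors among the arms.

    // Hypothetical numbers: 10,000 daily visitors, 20% traffic allocation,
    // 50/50 traffic distribution between the control and one variation.
    const dailyVisitors = 10_000;
    const trafficAllocation = 0.2; // share of all traffic in the experiment
    const distribution = { original: 0.5, variation: 0.5 }; // split inside it

    const inExperiment = dailyVisitors * trafficAllocation; // 2,000 visitors/day
    for (const [arm, share] of Object.entries(distribution)) {
      console.log(`${arm}: ${inExperiment * share} visitors/day`); // 1,000 each
    }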

Controlled ramping 

Controlled ramping has two stages: a short ramp stage and a maximum power stage.

When you use controlled ramping, change traffic levels to mitigate risk. Do not use statistical significance to guide your traffic ramping decisions.

Short ramp stage

The short ramp stage checks for experiment bugs and then moves quickly to the maximum power stage. Examples of experiment bugs include:

  • Corrupted variant assignment
  • Poor variant deployment
  • Changes in your underlying product that impact tracking or traffic assignment

You can increment the traffic allocation in steps of any size (for example, from 1% to 5%), as long as you complete this stage quickly (within 24 to 48 hours). During the preliminary short ramp stage, do not wait for statistical significance before proceeding to the maximum power ramp stage, and avoid continuously monitoring for significance during the short ramp period.
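
One concrete way to catch corrupted variant assignment during the short ramp is a sample ratio mismatch (SRM) check on your per-arm visitor counts. The sketch below is a generic chi-square health check, not an Optimizely API, and the counts are hypothetical. Because it tests assignment health rather than metric significance, it does not conflict with the advice against monitoring for significance.

    // Generic sample-ratio-mismatch (SRM) check; not an Optimizely API.
    // Compares observed per-arm visitor counts against the configured split.
    function srmChiSquare(observed: number[], expectedShares: number[]): number {
      const total = observed.reduce((a, b) => a + b, 0);
      return observed.reduce(
        (chi2, obs, i) =>
          chi2 + (obs - total * expectedShares[i]) ** 2 / (total * expectedShares[i]),
        0,
      );
    }

    // Hypothetical counts from a 50/50 split during the short ramp.
    const chi2 = srmChiSquare([1_037, 963], [0.5, 0.5]);
    if (chi2 > 3.841) { // chi-square critical value, df = 1, alpha = 0.05
      console.warn('Possible sample ratio mismatch; check variant assignment.');
    }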

See Recommended controlled traffic ramping process for Optimizely's recommendations.  

Maximum power stage

The maximum power stage looks for a winner among the variations while balancing speed and precision. After the preliminary short ramp stage, start the maximum power stage as follows:

  1. Reset results. This prevents Simpson's paradox and other time-varying biases.
  2. Set the experiment to the maximum feasible traffic allocation, preferably a balanced, uniform split among the baseline and variations. See change traffic allocation for Web Experimentation and Performance Edge or update flag rules in Feature Experimentation for more information.
If you have a control and two test variations, a balanced split is 33.33%/33.33%/33.34% because there are three arms to test.
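
The reason a balanced split is preferred: for a fixed number of visitors, the standard error of a pairwise comparison scales with sqrt(1/nA + 1/nB), which is smallest when the arms are equal. A quick TypeScript sketch with illustrative numbers:

    // Standard error of a difference scales with sqrt(1/nA + 1/nB).
    const se = (nA: number, nB: number) => Math.sqrt(1 / nA + 1 / nB);

    const total = 30_000; // illustrative visitor count
    console.log(se(total * 0.5, total * 0.5).toFixed(5)); // 50/50 -> 0.01155
    console.log(se(total * 0.9, total * 0.1).toFixed(5)); // 90/10 -> 0.01925 (worse)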

Recommended controlled traffic ramping process

Optimizely Experimentation recommends starting the experiment in the short ramp stage with a 10% traffic allocation and a 50/50 variation split.

When you move to the maximum power stage, reset your results, then relaunch the experiment with the traffic allocation set to 100%, keeping the same 50/50 variation split.

Once your experiment concludes, implement the winning variation in your code. Do not simply change the traffic distribution to the winning variation within the experiment itself.

Although it is not recommended, you can implement the variation as a hotfix in the Optimizely app by doing the following:

  1. Pause the current experiment. 
  2. Duplicate the current experiment.
  3. Push all traffic to the winning variant in the new, duplicated experiment.
  4. Label the experiment as a hotfix until the change can be added to your code.
Do not change the traffic distribution of your variations in your existing experiment after reaching a statistically significant result. Implement the winning variation in your code.
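
In practice, "implement in your code" means the winning behavior ships unconditionally and the experiment branch disappears. A minimal, hypothetical TypeScript sketch, assuming a new checkout variation won:

    // Hypothetical example: the "new checkout" variation won the experiment.
    const showNewCheckout = () => console.log('render new checkout');

    // Before: behavior gated by the experiment bucket, e.g.
    //   if (assignedVariation === 'new_checkout') showNewCheckout();
    //   else showOriginalCheckout();

    // After: the winner ships unconditionally; the experiment can be archived.
    showNewCheckout();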

Web Experimentation example of controlled ramping

Here is an example of what Optimizely recommends for a controlled ramping process using an A/B test in an Optimizely Web Experimentation project.

First, during the short ramp stage, set the traffic allocation to 20% with a 50%/50% split between the control (labeled Original) and the variation:

[Image: 20-percent-allocation.png]

Let the A/B test run with a 20% allocation for a maximum of one to two days. During this time, verify that the variation is running correctly. If you notice something wrong with the variation, stop the A/B test immediately so customers are not exposed to a negative experience.
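
One quick way to verify the variation assignment is Optimizely Web's client-side JavaScript API, run in the browser console on a page where the Optimizely snippet is loaded. The cast below is only to satisfy TypeScript:

    // Assumes the Optimizely Web snippet has loaded on the page.
    const optimizelyWeb = (window as any).optimizely;

    // Map of active experiment IDs to the variation this visitor was bucketed into.
    console.log(optimizelyWeb.get('state').getVariationMap());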

If the variation is behaving correctly, transition to the maximum power stage to test for a winner between the control and the variation. To do so, pause the experiment, reset the results, and relaunch it with the traffic allocation set to 100%:

[Image: change traffic allocation.png]

At the conclusion of your experiment, implement the winning variation in your code. Do not simply change the traffic distribution to the winning variation in this experiment.

Although it is not recommended, you can implement the variation as a hotfix in Optimizely Web Experimentation by doing the following:

  1. Pause the current A/B test. 
  2. Duplicate the current experiment.
  3. Push all traffic to the winning variant in the new, duplicated experiment.
  4. Label the experiment as a hotfix until the change can be added to your code.

Feature Experimentation example of controlled ramping

Here is an example of what Optimizely recommends for a controlled ramping process using an Optimizely Feature Experimentation project.

First, during the short ramp stage, set the traffic allocation to 20% with a 50%/50% split between the control (labeled the Off variation) and the variation:

[Image: configure a-b test rule.png]

Let the A/B test rule run with a 20% allocation for a maximum of one to two days. During this time, verify that the variation is running correctly. If you notice something wrong with the variation, stop the A/B test rule immediately so customers are not exposed to a negative experience.

If the variation is behaving correctly, transition to the maximum power stage to test for a winner between the control and the variation. To do so, reset your results, then relaunch your experiment by setting the traffic allocation to 100%:

[Image: 100-percent-allocation-fx.png]

At the conclusion of your experiment, implement the winning variation's changes in your code. Although it is not recommended, you can implement the variation as a hotfix in the Optimizely app by doing the following:

  1. Pause the current A/B test rule. 
  2. Create a Targeted Delivery rule. 
  3. (Optional) Label the rule as a hotfix until the change can be added to your code.
  4. Toggle your targeted delivery rule On.
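
While the hotfix targeted delivery rule is live, your application code keeps reading the flag as usual; the rule simply serves the winning variation to everyone until you hardcode the change. A minimal sketch using the Feature Experimentation JavaScript SDK, where the SDK key, flag key, and render functions are hypothetical:

    import { createInstance } from '@optimizely/optimizely-sdk';

    const optimizelyClient = createInstance({ sdkKey: '<YOUR_SDK_KEY>' });

    const showNewCheckout = () => console.log('render winning variation');
    const showOriginalCheckout = () => console.log('render original');

    async function renderCheckout(userId: string): Promise<void> {
      await optimizelyClient!.onReady();
      const user = optimizelyClient!.createUserContext(userId)!;

      // The targeted delivery rule now serves the winning variation to all users.
      const decision = user.decide('checkout_redesign');
      if (decision.enabled) {
        showNewCheckout();
      } else {
        showOriginalCheckout();
      }
    }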

Impact of low traffic allocation

Although it does not hurt your experiment's validity to initially set a low traffic allocation (for example, below 10%) and adjust it later, it is costly to keep an experiment running at a low allocation for more than a few days. The experiment will not have sufficient power to detect a meaningful impact. Without sufficient power, you risk placing users in a bad experience for too long, which may cause permanent user churn.
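
To make that cost concrete, here is a back-of-the-envelope runtime estimate using the standard two-proportion sample size approximation. All traffic numbers and rates below are illustrative assumptions:

    // Approximate per-arm sample size for a two-proportion test:
    //   n ≈ 2 * (zAlpha + zBeta)^2 * p * (1 - p) / mde^2
    const zAlpha = 1.96;  // two-sided alpha = 0.05
    const zBeta = 0.8416; // 80% power
    const p = 0.1;        // assumed baseline conversion rate
    const mde = 0.01;     // absolute minimum detectable effect (10% -> 11%)

    const nPerArm = (2 * (zAlpha + zBeta) ** 2 * p * (1 - p)) / mde ** 2; // ≈ 14,128

    const dailyVisitors = 10_000; // hypothetical site traffic
    for (const allocation of [0.05, 1.0]) {
      const days = Math.ceil((2 * nPerArm) / (dailyVisitors * allocation));
      console.log(`${allocation * 100}% allocation: ~${days} days to adequate power`);
      // 5% allocation -> ~57 days; 100% allocation -> ~3 days
    }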

Traffic allocation poor practices

Optimizely recommends avoiding the following poor practices:

  • Running an experiment at an exceedingly low traffic allocation (1%, 5%, 10%) and holding out for statistical significance before changing the allocation.
  • Deploying a variant from an experiment run at a low overall traffic allocation when it focuses on volatile real-valued metrics, particularly revenue. Revenue metrics require a larger sample size; the noise and variance in that metric type are usually too high to draw conclusions from a low traffic allocation.
  • Keeping an experiment running for a long time (more than a few days) at a low traffic allocation (for example, below 10%) before adjusting. The experiment will not have sufficient power to detect a meaningful impact. Without sufficient power, you risk placing users in a bad experience for too long, which may cause permanent user churn.