- Optimizely Web Experimentation
- Optimizely Feature Experimentation
What are Stats Accelerator and Multi-Armed Bandit optimizations?
Stats Accelerator helps you algorithmically capture more value from your experiments by reducing the time to statistical significance. It minimizes time by monitoring ongoing experiments and using machine learning to adjust traffic distribution among variations.
By contrast, multi-armed bandit (MAB) optimizations maximize the performance of your primary metric across your variations by dynamically re-allocating traffic to whichever variation is currently performing best. MAB helps you extract value from the leading variation during the experiment lifecycle so you minimize regret by avoiding the opportunity cost of showing sub-optimal experiences.
The better a variation does, the more traffic a MAB sends its way. A/B tests do not do this. Instead, they keep traffic allocation constant for the experiment's entire lifetime, no matter how each variation performs:
The MAB algorithm indicates that Variation 3 is higher performing from the start. Even without any statistical significance information (remember, the multi-armed bandit does not show statistical significance), it still begins to push traffic to Variation 3 in order to exploit the perceived advantage and gain more conversions. See the article on Maximize lift with multi-armed bandit optimizations for a full explanation of the graph.
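To see how this kind of adaptive allocation behaves, here is a minimal, self-contained sketch using Thompson sampling, a standard bandit technique for binary (conversion) metrics. This is illustrative only; Optimizely's production algorithm is more sophisticated, and the conversion rates below are hypothetical:

```python
import random

def choose_variation(successes, failures):
    """Thompson sampling: draw one sample per variation from its Beta
    posterior and serve the variation with the highest draw."""
    draws = [random.betavariate(s + 1, f + 1)
             for s, f in zip(successes, failures)]
    return draws.index(max(draws))

# Hypothetical true conversion rates; Variation 3 (index 2) is genuinely best.
true_rates = [0.10, 0.11, 0.16]
successes = [0, 0, 0]
failures = [0, 0, 0]
traffic = [0, 0, 0]

random.seed(7)
for _ in range(5000):              # 5,000 simulated visitors
    v = choose_variation(successes, failures)
    traffic[v] += 1
    if random.random() < true_rates[v]:
        successes[v] += 1
    else:
        failures[v] += 1

print(traffic)  # most traffic ends up on the best-performing variation
```

As the bandit accumulates evidence, its posterior draws favor the leading variation more and more often, so its share of traffic grows over time, which is the behavior the graph describes.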
Why use Stats Accelerator over a typical MAB or vice versa, and what is the difference?
Stats Accelerator: minimizes time. Traffic is routed to the variation that differs most from the baseline, whether it performs better or worse, until that comparison reaches statistical significance. Traffic is then routed to the other variations.
MAB optimizations: minimize regret. Traffic is routed to the variations currently showing the best performance on the primary metric, such as the highest revenue or the most conversions.
Why choose one over the other? If you want to make a change to your website or experience that will persist and become part of your product going forward, Stats Accelerator is a great option: it helps you reach a statistically significant answer about which option is best as quickly as possible.
If you want multiple variations of a small piece of your experience to change dynamically to maximize performance, especially if you expect the best-performing variation to change over time, a MAB optimization is a good fit.
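The "minimize regret" idea can be made concrete with a small simulation. The numbers here are made up, and epsilon-greedy stands in for the real bandit algorithm: a fixed 50/50 A/B split keeps sending half its traffic to the weaker variation for the whole run, while an adaptive policy mostly exploits the current leader:

```python
import random

random.seed(42)
RATES = [0.10, 0.15]          # hypothetical true conversion rates
VISITORS = 10_000

def simulate(pick):
    """Run VISITORS visitors through an allocation policy `pick`."""
    conversions = 0
    stats = [[0, 0] for _ in RATES]   # [shows, conversions] per arm
    for i in range(VISITORS):
        arm = pick(i, stats)
        stats[arm][0] += 1
        if random.random() < RATES[arm]:
            stats[arm][1] += 1
            conversions += 1
    return conversions

# Fixed 50/50 split: alternate arms forever, regardless of performance.
ab_total = simulate(lambda i, stats: i % 2)

# Epsilon-greedy bandit: explore 10% of the time, otherwise exploit the
# arm with the best observed conversion rate so far.
def greedy(i, stats):
    if i < 20 or random.random() < 0.10:
        return random.randrange(len(RATES))
    return max(range(len(RATES)),
               key=lambda a: stats[a][1] / max(stats[a][0], 1))

bandit_total = simulate(greedy)
print(ab_total, bandit_total)
```

The difference in total conversions is the opportunity cost, or regret, that a fixed-allocation A/B test accepts in exchange for a clean statistical read on every variation.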
When to use Stats Accelerator
One of the most significant advantages Stats Accelerator brings to your experimentation program is its ability to cycle through many options quickly. For this reason, you should consider using Stats Accelerator whenever you want to run an A/B test with at least two variations against the baseline (at least three total, including the baseline).
Stats Accelerator helps you understand when a variation is better than the baseline (original). Stats Accelerator's main objective is to answer: "Which variation is the most unlike the baseline and ALSO has the most optimistic performance projection?"
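As a rough illustration of that objective, you can think of each interim re-allocation as favoring the variation whose observed difference from the baseline is largest in either direction, since that comparison can reach significance soonest. This is a conceptual sketch with made-up interim numbers, not Optimizely's actual weighting scheme:

```python
import math

def z_stat(conv_b, n_b, conv_v, n_v):
    """Two-proportion z-statistic for a variation vs. the baseline."""
    p_b, p_v = conv_b / n_b, conv_v / n_v
    p = (conv_b + conv_v) / (n_b + n_v)
    se = math.sqrt(p * (1 - p) * (1 / n_b + 1 / n_v))
    return (p_v - p_b) / se if se else 0.0

# Hypothetical interim results: (conversions, visitors).
baseline = (100, 1000)
variations = [(105, 1000), (80, 1000), (130, 1000)]

# Weight the next batch of traffic by how far each variation sits from
# the baseline, regardless of direction (better OR worse).
scores = [abs(z_stat(*baseline, c, n)) for c, n in variations]
total = sum(scores)
next_batch = [s / total for s in scores]
print([round(w, 2) for w in next_batch])
```

Note that the variation performing worse than the baseline still receives substantial traffic, because resolving its comparison quickly is also part of minimizing time to significance.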
Some examples of situations where Stats Accelerator would be the best choice include:
Calls to action (CTAs). You can quickly optimize a given page for lead completion by testing many different copy options for a call to action.
Landing pages. Test several combinations of landing page copy, concept, and design to quickly optimize for registrations, sign-ups, donations, and more.
Add-to-cart rate optimization. Drive an increase in your add-to-cart rate by showing visitors different default images on product pages or search results.
Search results optimization. Nudge users toward specific options (for example, a travel site might want to encourage visitors to select a particular flight) by showing different results.
Drive traffic to specific pages. By changing the location of the “Recommended Content” section, a media site can increase its clickthrough rates on recommended articles.
When to use a multi-armed bandit optimization
Multi-armed bandit (MAB) is better suited for maximizing conversions for short, temporary experiences. MAB's primary goal is to answer: "Which variation shows the largest reward?"
The following cases may be a better fit for a multi-armed bandit optimization than a traditional A/B experiment:
Promotions and offers. Companies that sell consumer goods on their site often focus on driving higher conversion rates, and one effective way to do this is to run special, limited-time promotions. Because those changes are not intended to be permanent, a MAB optimization can send more traffic to the over-performing variations and less to the underperforming variations for the duration of the promotion.
Headline testing. Headlines are short-lived content that loses relevance after a fixed amount of time. If a headline experiment takes just as long to reach statistical significance as the lifespan of a headline, then any learnings gained from the experiment will be irrelevant in the future. Therefore, a MAB optimization is a natural choice to let you maximize your impact without worrying about balancing experiment runtime and the natural lifespan of a headline.
Webinar. You can boost registration for webinars or other events by experimenting with several different versions of your landing page.
MAB does not generate statistical significance but pushes traffic to the better-performing variations; the reason for the better performance is not important.
Set up a Stats Accelerator experiment
See Get to statistical significance faster with Stats Accelerator for a more in-depth look at how Stats Accelerator works and how to interpret the results of a Stats Accelerator experiment.
New to Optimizely Experimentation? Then see creating an experiment in Optimizely Web Experimentation.
Create a Stats Accelerator experiment in Optimizely Web Experimentation
From the Experiments window, click Create New... in the upper right-hand corner.
Select A/B Test from the drop-down menu.
Give your experiment a name, description, and URL to target, just as you would with any Optimizely experiment. Then click Create Experiment.
Create your variations in the Visual Editor. For experiments using Stats Accelerator, create at least two variations in addition to the baseline, for a minimum of three total.
Click Metrics from the left-side navigation pane to choose your primary metric. Your experiment will use the primary metric to determine how traffic is distributed across variations.
Click Traffic Allocation from the left-side navigation pane. Under Variation Traffic Distribution, click the Distribution Mode dropdown and select Stats Accelerator.
QA your experiment.
Click Start Experiment to launch your experiment.
Create a Stats Accelerator experiment in Optimizely Feature Experimentation
- Follow the instructions for creating an experiment in the developer documentation.
Set up a multi-armed bandit optimization in Optimizely Experimentation
If you have not worked with MABs before, you should first read Maximize lift with multi-armed bandit optimization.
New to Optimizely? Then see creating an experiment in Optimizely Web Experimentation.
Create a MAB optimization in Optimizely Web Experimentation
From the Experiments window, click Create New... in the upper right-hand corner.
Select Multi-Armed Bandit from the drop-down menu.
Give your MAB a name, description, and a URL to target, just as you would with any Optimizely experiment. Then click Create Bandit.
Create at least two variations in the Visual Editor.
Click Metrics from the left-side navigation to choose your primary metric. Your MAB will use the primary metric to determine how traffic is distributed across variations.
After you start your MAB, you cannot change the primary metric, so choose carefully!
QA your MAB.
Click Start Multi-Armed Bandit to launch your optimization.
Create a MAB optimization in Optimizely Feature Experimentation
- Follow the instructions on how to run a multi-armed bandit optimization in the developer documentation.
Before using Stats Accelerator and MABs in your experimentation program, read the following articles to learn more about when to use each, how they work, and how to interpret results: