- Optimizely Web Experimentation
- Optimizely Performance Edge
- Learn the difference between A/B ("split"), multivariate, and multi-page ("funnel") experiments
- Evaluate the advantages and disadvantages of using each type
Optimizely Web Experimentation and Optimizely Performance Edge provide three different experiment types: A/B testing, multivariate testing, and multi-page (funnel) testing. This article gives a brief explanation of these three types, along with some of the advantages and limitations of each, to help you decide which kind of experiment you want to create.
If you are using Optimizely Performance Edge, specific features described in this article will not be available. Optimizely Performance Edge is a lightweight experimentation product that delivers significantly faster performance than previous versions of Optimizely. It relies on a streamlined "microsnippet" that limits the available feature range.
You will see this notation whenever the text describes a feature available in Performance Edge.
Also, available experiment types vary based on your Optimizely plan type. If what you see does not match what is in this article, or you want to learn more about what’s available, refer to our Optimizely Web packages.
See our in-depth articles on:
How to create an A/B test
How to create a multivariate experiment in Optimizely Web Experimentation
How to create a multi-page (funnel) experiment in Optimizely Web Experimentation
Differences among campaigns, experiments, experiences, and variations
Campaigns, experiments, experiences, and variations are terms related to Optimizely Experimentation testing. In general, campaigns have experiences and experiments have variations.
- Campaigns are the framework for organizing your strategy in Optimizely Web Personalization. A campaign takes some content on your site, like the promotions on your homepage, and swaps in different content for different audiences. Read about Personalization campaigns.
- Experiment is Optimizely Experimentation's term for an A/B, multivariate, or multi-page test. You might also see experiments called "tests" or "campaigns." Read the rest of this article for more about experiments.
- Experiences are the different personalized versions of your site that you create for visitors based on the page they're visiting and the audiences they qualify for. Learn more about Personalization experiences. When you are not sure what type of experience will work best for a particular audience, you can test experiences with Experimentation in Personalization campaigns.
- Variations are the alternate versions of your site that you test against your original (or baseline) in Optimizely Web Experimentation. Learn how to create and change variations in your experiments.
A/B testing, also known as split testing, is a method of website optimization in which the conversion rates of two versions of a page—version A and version B—are compared to one another using live traffic. Site visitors are bucketed into one version or the other.
By tracking the way visitors interact with the page they are shown—the videos they watch, the buttons they click, or whether they sign up for a newsletter—you can determine which version of the page is most effective.
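The bucketing step can be sketched with a deterministic hash of the visitor ID, so that a returning visitor always sees the same version. The hash function and the 50/50 split below are illustrative assumptions, not Optimizely's actual implementation:

```javascript
// Minimal sketch of deterministic A/B bucketing (illustrative only).
function hashVisitorId(visitorId) {
  // Simple string hash (djb2); real platforms use stronger hashes.
  let hash = 5381;
  for (const ch of visitorId) {
    hash = (hash * 33 + ch.charCodeAt(0)) >>> 0;
  }
  return hash;
}

function bucketVisitor(visitorId) {
  // Map the hash onto [0, 100) and split traffic 50/50.
  const point = hashVisitorId(visitorId) % 100;
  return point < 50 ? "A" : "B";
}

// The same visitor lands in the same bucket on every visit.
console.log(bucketVisitor("visitor-123") === bucketVisitor("visitor-123")); // true
```

Because the assignment is a pure function of the visitor ID, no server-side state is needed to keep a visitor's experience stable across visits.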
A/B testing is the least complex method of evaluating a page design and is helpful in various situations.
One of the most common uses of A/B testing is to test two very different design directions against one another. You can do this in Optimizely Web Experimentation using a redirect experiment.
The current version of a company's home page might have in-text calls to action, while the new version might eliminate most text but include a new top bar advertising the latest product. After enough visitors have been funneled to both pages, the number of clicks on each page's version of the call to action can be compared.
It is important to note that even though many design elements are changed in this kind of A/B test, only the impact of the design on each page's business goal is tracked, not individual components.
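Comparing each version's performance on the business goal often comes down to a simple conversion-rate calculation. The visitor and click numbers below are hypothetical, just to show the arithmetic:

```javascript
// Hypothetical results for a call-to-action click comparison.
function conversionRate(clicks, visitors) {
  return clicks / visitors;
}

const results = {
  A: { visitors: 5000, clicks: 450 }, // current home page
  B: { visitors: 5000, clicks: 540 }, // redesigned home page
};

const rateA = conversionRate(results.A.clicks, results.A.visitors); // 0.09
const rateB = conversionRate(results.B.clicks, results.B.visitors); // 0.108
const lift = (rateB - rateA) / rateA; // relative lift of B over A
console.log(`Lift: ${(lift * 100).toFixed(1)}%`); // "Lift: 20.0%"
```

In practice a platform also runs a statistical significance check on these rates before declaring a winner; the raw rates alone are not enough.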
A/B testing is also helpful as an optimization option for pages where only one element is up for debate.
A pet store running an A/B test on their site might find that 85% more users are willing to sign up for a newsletter held up by a cartoon mouse than one emerging from a boa constrictor's coils. When A/B testing is used, a third or even fourth version of the page is often included in the test, which is sometimes called an A/B/C/D (or A/B...n) test. This means that traffic to the site must be split into thirds or fourths, with a smaller percentage of visitors seeing each version.
Simple in concept and design, A/B testing is a powerful and widely used method.
Keeping the number of tracked variables small means these tests can deliver reliable data quickly, as they do not require a large amount of traffic. This is especially helpful if your site has a small number of daily visitors. Splitting traffic into more than three or four segments would make it hard to finish a test. In fact, A/B testing is so speedy and easy to interpret that some large sites use it as their primary testing method, running cycles of tests one after another rather than more complex multivariate tests.
A/B testing is also an excellent way to introduce the concept of optimization through testing to a skeptical team, as it can quickly demonstrate the quantifiable impact of a simple design change.
A/B testing is a versatile tool. When paired with smart experiment design and a commitment to iterative cycles of testing and redesign, it can help you improve your site. However, remember that the limitations of this kind of test are summed up in the name. A/B testing is best used to measure the impact of two to four variables on interactions with the page. Tests with more variables take longer to run, and A/B testing on its own will not reveal anything about how the variables on a single page interact with one another.
If you need information about how many different elements interact with one another, multivariate testing is the optimal approach!
Multivariate testing (MVT)
Multivariate testing uses the same core mechanism as A/B testing but compares a higher number of variables and reveals more information about how these variables interact with one another. Think of it as multiple A/B tests layered on top of each other.
As in an A/B test, traffic to a page is split between different design versions. A multivariate test then measures the effect each combination of design elements has on the ultimate goal.
Once a site has received enough traffic to run the test, the data from each variation is compared to find the most successful design and potentially reveal which elements have the most significant positive or negative impact on a visitor's interaction.
The most commonly cited example of multivariate testing is a page on which several elements are up for debate. For example, a page includes a sign-up form, some catchy header text, and a footer.
To run a multivariate test on this page, rather than creating a radically different design as in A/B testing, you might create two different lengths of the sign-up form, three different headlines, and two footers. Next, you would funnel visitors to all possible combinations of these elements.
Testing all possible combinations of a multivariate test is also known as full factorial testing. It is one of the reasons why multivariate testing is often recommended only for sites with a substantial amount of daily traffic—the more variations that need to be tested, the longer it takes to obtain meaningful data from the test. It is, however, the most accurate way to run a multivariate test. This is the method Optimizely Experimentation uses for multivariate testing.
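The combination count of a full factorial test can be sketched as a cartesian product of each element's variations. Using the hypothetical example above (two form lengths, three headlines, two footers), the element names below are illustrative:

```javascript
// Build every full factorial combination: one variation per element.
function fullFactorial(sections) {
  return sections.reduce(
    (combos, variations) =>
      combos.flatMap((combo) => variations.map((v) => [...combo, v])),
    [[]]
  );
}

const combos = fullFactorial([
  ["short form", "long form"],
  ["headline 1", "headline 2", "headline 3"],
  ["footer 1", "footer 2"],
]);

console.log(combos.length); // 12 combinations to split traffic across
```

This multiplicative growth (2 × 3 × 2 = 12) is why each added element sharply increases the traffic a full factorial test needs.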
Some testing platforms use the Taguchi method (fractional factorials). In layman's terms, this does not test all possible variations. But by looking at the differences in results from the variations it does test, it infers the best-predicted experience even if that was not a variation that was actually tested. This method requires less traffic than full-factorial testing, but you may need follow-up tests to confirm that the "predicted best" variation is the best variation.
No matter which method is used, after the test has run, each variable's performance is compared across the different test versions in which it appeared. A clear picture emerges of which page performs best and which elements are most responsible for that performance. For example, varying the page footer may have minimal effect on the page's performance, while varying the length of the sign-up form has a huge impact.
Multivariate testing is a powerful way to help you target redesign efforts to the elements of your page where they will have the most impact. This is especially useful when designing landing page campaigns. For example, the data on the effect of a particular element's design can be applied to future campaigns, even if the element's context has changed.
The most significant limitation of multivariate testing is the amount of traffic needed to complete the test. Because Optimizely Experimentation's multivariate tests are full factorial, changing too many elements at once can quickly add up to many possible combinations that must be tested. Even a site with reasonably high traffic might have trouble completing a test with more than 25 combinations in a feasible amount of time.
When using multivariate tests, it is also essential to consider how they will fit into your testing cycle and redesign. Even when you are armed with information about the impact of a particular element, you may want to do additional A/B testing cycles to explore other radically different ideas. Also, sometimes it may not be worth the extra time necessary to run a full multivariate test when several well-designed A/B tests will do the job well.
Multi-page funnel testing
Multi-page (also known as "funnel") testing is similar to A/B testing except that rather than making variations to a single page, the changes you make are implemented consistently across several pages. Like A/B testing, site visitors in a multi-page test are bucketed into one version. By tracking how these visitors interact with the different pages they are shown, you can determine which design style is most effective. The key to getting usable data from a multi-page test is ensuring that users see a consistent variation throughout the set of pages rather than a mix-and-match of variations. This allows one variation to be properly tested against another.
Testing different design directions against one another can quickly be done using multi-page testing. For example, imagine an ecommerce website that allows users to search through numerous products, add desired items to a virtual shopping cart, and then purchase the items.
In this case, users see more than a single page. Instead, they are funneled through several pages before finally making a purchase or leaving the website. Using a multi-page test, you can create two (or more) unique designs for a set of pages. When doing this, you must ensure that each user has a consistent experience, seeing only one design style throughout all the pages rather than a mix of different design variations.
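One way to guarantee that consistency is to assign a design style once per visitor and reuse the stored assignment on every page in the funnel. This is a minimal sketch under assumed names, not Optimizely's implementation; the plain object stands in for a cookie or local storage:

```javascript
// Sticky per-visitor assignment for a multi-page (funnel) test.
const assignments = {}; // visitorId -> design style (stand-in for a cookie)

function getDesignStyle(visitorId, styles) {
  if (!(visitorId in assignments)) {
    // First page view: pick a style once (random here for simplicity;
    // a deterministic hash of the visitor ID would work equally well).
    assignments[visitorId] = styles[Math.floor(Math.random() * styles.length)];
  }
  // Every later page in the funnel (search, cart, checkout) reuses it.
  return assignments[visitorId];
}

const styles = ["original", "redesign"];
const onSearchPage = getDesignStyle("visitor-42", styles);
const onCartPage = getDesignStyle("visitor-42", styles);
console.log(onSearchPage === onCartPage); // true: no mix-and-match
```

The important property is that the lookup happens on every page but the random choice happens only once, so a visitor never crosses between design styles mid-funnel.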
After enough visitors have been funneled through the different designs, the effects of the different design styles can be compared quickly and effectively.
Like A/B testing, multi-page testing is simple and can provide meaningful and reliable data with speed and ease. The advantage of multi-page testing lies in creating a consistent experience for the user. It allows all users to see a consistent set of pages, whether the original or a redesigned variation.
Multi-page testing lets you make the same kinds of changes you would make to a single page in a typical A/B test, but applies them consistently across several pages so that visitors are not bounced between different variations and designs as they move through your website.
Multi-page testing is a versatile and effective tool but has many of the same limitations as A/B testing. Like A/B testing, multi-page testing is best used to measure the impact of only a few variables at a time. Tests with too many variables take longer to run; it will also be more challenging to determine the effect of each change you make to each page.
In addition to these limitations, there are a few limitations specific to multi-page testing. When setting up a multi-page test, you must have the same number of variations for every page that is part of the experiment. An uneven number of variations would create inconsistency between pages, degrade the user experience, and make any data collected difficult to interpret. Additionally, only targeting conditions that apply to all pages in the experiment can be used for multi-page experiments.