Test tips for low-traffic sites

  • Optimizely Web Experimentation
  • Optimizely Performance Edge
  • Optimizely Feature Experimentation
  • Optimizely Full Stack (Legacy)

When traffic is a precious commodity, it makes sense to implement your A/B tests with as much forethought as possible. This article discusses getting the most from Optimizely Experimentation, even if your website or application receives relatively low visitor traffic or conversions.

In addition to this article, you should read How long to run an experiment, which discusses how sample size affects experiment length.

Why low traffic can pose a challenge

The goals you set for your experiment determine the metrics by which a variation is considered a winner or loser. Popular goals measure significant actions such as completing a sign-up form or placing an order. However, if your website only gets a few such conversions a week, determining a winner will take a long time, especially when the difference in conversion rate between the original and the variation is small.

To illustrate this, imagine you are running two A/B tests, both of which have received 500 visitors to each variation and a handful of conversions each on the primary metric. Test 1 changes the text of a sign-up button from Buy now to Try it! Test 2 adds a pop-up advertising free shipping. For Test 1, the improvement in conversion rate for the variation is 5%. For Test 2, the improvement in conversion rate is 50%.

Use the A/B test sample size calculator to check how many visitors would be required in each test branch to prove that the results have reached statistical significance and that the uplift is not just pure chance.

As illustrated in the screenshots below, the difference between proving an uplift of 5% and an uplift of 50% is enormous. What this means in practice is that testing macro conversions, such as order completions, runs the risk of requiring so much time to reach the required number of visitors or conversions as to make it unrealistic for a website with lower traffic.

Visitors required per variation to prove 5% uplift:


Visitors required per variation to prove 50% uplift:


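To make the gap concrete, here is a rough sketch of the classical two-proportion sample-size calculation at 95% significance and 80% power. Note that Optimizely's Stats Engine uses sequential testing and its calculator will not produce exactly these numbers; the 2% baseline conversion rate below is an assumption chosen to match the 500-visitors-and-a-handful-of-conversions example above, so treat the output as directional only.

```javascript
// Approximate per-variation sample size for a classical two-proportion
// z-test. The constants 1.96 and 0.84 are the standard normal quantiles
// for 95% significance (two-sided) and 80% power.
function sampleSizePerVariation(baselineRate, relativeLift) {
  const p1 = baselineRate;
  const p2 = baselineRate * relativeLift;  // expected variation rate
  const pBar = (p1 + p2) / 2;              // pooled rate
  const zAlpha = 1.96;                     // 95% significance, two-sided
  const zBeta = 0.84;                      // 80% power
  const numerator =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil(numerator ** 2 / (p2 - p1) ** 2);
}

// Assumed 2% baseline conversion rate
console.log(sampleSizePerVariation(0.02, 1.05)); // 5% relative uplift
console.log(sampleSizePerVariation(0.02, 1.50)); // 50% relative uplift
```

Under these assumptions, proving a 5% uplift requires roughly 315,000 visitors per variation, while a 50% uplift needs only a few thousand.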
This is not to say that low-traffic websites should not conduct A/B tests, but rather that strategy should play a greater role in deciding which tests to run than if you had unlimited traffic to play with. The following recommendations will help websites with lower traffic get the best value from Optimizely Experimentation and A/B testing in general.

What sites with low traffic can do to maximize the value of A/B testing

Test high-impact changes

Sometimes small changes have a big impact on conversion rates, but testing something big is more likely to produce a noticeable effect. Focusing your tests on areas of your site or application that visitors consider important is more likely to have a big impact on conversion rate than testing very small modifications on niche pages.

While it might seem intimidating to test out something radical, when you test high-impact changes, the likelihood of achieving a drastic difference in conversion rates increases and with it the chance of being able to achieve statistically significant results within a reasonable timeframe. Even a losing variation can result in valuable insights about your customers' values and behavior, which can be used to inform future tests.

The only drawback to testing very radical changes is that if you change many things at once, it may be difficult to attribute the change in conversion rate to one specific element on the page. This risk can be mitigated by testing different themes rather than randomly moving elements around on the page. If you can figure out what is important to your customers, you can apply these learnings elsewhere on the site or in your application.

Focus on micro-conversions

Your main goal may be to increase conversions or sign-ups, but does it really make sense to position them as your primary metric if gathering enough conversions to verify your results would take an extremely long time? Probably not.

Testing conversions on the micro level, at which conversions are more plentiful, can help you determine the immediate effect that an A/B test has on a page and lets you call your results more quickly. Examples of micro conversions include engagement with the page, clicking an add-to-cart button, viewing a certain number of pages, or clicking through to a product detail page.

Other goals to consider include a conversion goal that fires when a visitor has scrolled a certain percentage of the way down your long-copy page, or a custom event goal that fires only for visitors who stay 30 seconds or longer on your site.
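A scroll-depth micro-conversion like this can be wired up with a few lines of custom JavaScript. This is a minimal sketch: the event name scrolled_50_percent is a hypothetical placeholder that must match a custom event created in your Optimizely project, and the snippet assumes the standard window["optimizely"].push event API of Optimizely Web Experimentation.

```javascript
// Helper: has the visitor scrolled at least `fraction` of the page?
function hasScrolledPast(scrollY, viewportHeight, pageHeight, fraction) {
  return scrollY + viewportHeight >= pageHeight * fraction;
}

// Browser wiring (guarded so the snippet is loadable outside a browser).
if (typeof window !== "undefined") {
  let fired = false; // fire the event at most once per page view
  window.addEventListener("scroll", function () {
    if (fired) return;
    if (hasScrolledPast(window.scrollY, window.innerHeight,
                        document.documentElement.scrollHeight, 0.5)) {
      fired = true;
      window["optimizely"] = window["optimizely"] || [];
      window["optimizely"].push({
        type: "event",
        eventName: "scrolled_50_percent" // hypothetical custom event name
      });
    }
  });
}
```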

Test the page directly

Instead of measuring final conversions that take place several pages away, measure changes directly on the page where the experiment is running. For example, if you want to A/B test your product pages but do not have enough conversions to make an order confirmation goal worthwhile, measure engagement on the product pages themselves, such as clicks through to the next step of the process. Testing an earlier step in the funnel gives you more freedom to experiment and helps you learn more about your users at each step of the way.

Consider a lower statistical significance setting

Optimizely Experimentation lets you change your statistical significance level for each Project. Statistical significance is a measure of how unusual your experiment results would be if there were actually no difference between your variation and baseline and the difference in lift were due to random chance alone. Our articles on statistical significance and the statistical significance setting cover most of the information you will need in this scenario. Here is the summary:

  • A higher statistical significance setting declares fewer false positives, but Optimizely Experimentation will generally take longer to declare results.

  • Lowering your statistical significance increases the likelihood of false positives, but you can also run experiments at a higher velocity because Optimizely Experimentation requires a smaller sample size.
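The trade-off shows up directly in classical sample-size arithmetic. The sketch below is a rough approximation only (Optimizely's Stats Engine computes significance sequentially, so its numbers will differ), and the 2% baseline with a 20% relative uplift are illustrative assumptions:

```javascript
// Classical two-proportion sample-size approximation at 80% power, with
// the significance level expressed as a z-quantile parameter.
function perVariation(p1, p2, zAlpha) {
  const zBeta = 0.84; // 80% power
  const pBar = (p1 + p2) / 2;
  const num = zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
              zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil(num ** 2 / (p2 - p1) ** 2);
}

const p1 = 0.02, p2 = 0.024;             // 2% baseline, 20% relative uplift
console.log(perVariation(p1, p2, 1.96)); // 95% significance (two-sided)
console.log(perVariation(p1, p2, 1.64)); // 90% significance (two-sided)
```

In this example, dropping from 95% to 90% significance trims the required sample by roughly a fifth, at the cost of a higher false-positive rate.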

Do not be tempted by multivariate testing

The more variations you test, the more overall traffic is required and the longer it takes for your results to reach statistical significance. Stick to A/B tests until you have sufficient traffic to direct into multiple variations.

Avoid niche testing

Avoid testing areas of your site or application that get very few hits, and instead make your targeting conditions as wide as possible to include as many visitors as feasible. Site-wide banner tests, landing page tests, and the like take advantage of the traffic you have and are more likely to reach statistical significance in a shorter period than if you were to test only one specific product page, for instance.

Test to improve SEO and user experience

If you are conducting activities to improve your site's SEO, you can set up experiments to determine which actions have the biggest impact.

You may want to determine which search terms are most effective at getting users to click through to the site and take action. You could test this by setting up individual tests that target individual AdWords parameters in your site's URL or that look for a specific search term in document.referrer. Alternatively, set up various CTAs in an email campaign that lead to different URLs and track conversions for each URL. Which search terms lead to the most conversions or engagement? Which AdWords listings are clicked most often and have the best conversion rates? Which variant of your email campaign had the most significant impact?
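One way to set this up in Optimizely Web is a custom JavaScript audience condition that inspects the landing URL's campaign parameters. This is a sketch under assumptions: the parameter name utm_term and the value "free shipping" are hypothetical placeholders for your own campaign tagging.

```javascript
// Returns true when the given URL carries the expected campaign parameter
// value, compared case-insensitively. Parameter name and value are
// placeholders; adjust them to your own campaign tagging scheme.
function matchesCampaignTerm(url, param, expected) {
  const value = new URL(url).searchParams.get(param);
  return value !== null && value.toLowerCase() === expected.toLowerCase();
}

// In a custom audience condition, this might be invoked as:
// matchesCampaignTerm(window.location.href, "utm_term", "free shipping");
```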

Once you determine the search terms you should focus on, you can target your SEO activities toward them, saving valuable time and ad spend.

Another interesting option is to test whether users who have clicked on a certain call to action in an email or AdWords listing are more likely to convert when greeted with the same message they clicked on than visitors greeted with a standard message. A standard A/B test, in which the original shows the standard message and the variation contains the personalized message, would be a great way of discovering this and offering a more personalized experience to your website's users.