Test tips for low-traffic sites

  • Optimizely Web Experimentation
  • Optimizely Performance Edge
  • Optimizely Feature Experimentation
  • Optimizely Full Stack (Legacy)

When traffic is a precious commodity, it makes sense to implement your A/B tests with as much forethought as possible. This article discusses getting the most from Optimizely Experimentation, even if your website or application receives relatively low visitor traffic or conversions.

See How long to run an experiment for how sample size affects experiment length.

The challenges of low traffic

Your experiment's goals determine the metrics for whether a variation is a winner or loser. The most common goals include completing a sign-up form or placing an order. However, if your website has only a few conversions a week, determining the winner takes longer, especially when the difference in conversion rate between the original and the variation is small.

Imagine you run two A/B tests, and each has received 500 visitors and a few conversions on the primary metric. Test 1 changes a sign-up button from Buy now to Try it! Test 2 adds a modal advertising free shipping. Test 1's conversion rate improvement is 5%. Test 2's conversion rate improvement is 50%.

Use the A/B test sample size calculator to check how many visitors are needed in each test branch to prove that the results have reached statistical significance.

The difference between proving a 5% uplift and a 50% uplift is major. Testing macro conversions, such as order completions, can take far longer to reach the required number of visitors or conversions, which may be unrealistic for a website with lower traffic.

Visitors required per variation to prove a 5% uplift:

[Image: a-b-test-sample-size-calculator.png]

Visitors required per variation to prove a 50% uplift:

[Image: a-b-test-sample-size-50.png]
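
For a rough sense of where these numbers come from, the sketch below applies the classical fixed-horizon sample size formula for comparing two proportions. Optimizely Experimentation's Stats Engine calculates significance sequentially, so treat the in-product calculator as the authoritative source; the 3% baseline conversion rate here is a hypothetical figure.

```typescript
// Rough fixed-horizon estimate of visitors needed per variation to detect
// a relative lift in conversion rate. Illustrative only: Optimizely's
// Stats Engine uses sequential statistics, so the calculator's numbers
// will differ.
const Z_ALPHA = 1.96; // 95% significance, two-sided
const Z_BETA = 0.84;  // 80% power

function visitorsPerVariation(baselineRate: number, relativeLift: number): number {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeLift);
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((Z_ALPHA + Z_BETA) ** 2 * variance) / (p2 - p1) ** 2);
}

// Hypothetical 3% baseline conversion rate
console.log(visitorsPerVariation(0.03, 0.05)); // 5% lift  -> roughly 200,000 visitors per arm
console.log(visitorsPerVariation(0.03, 0.5));  // 50% lift -> roughly 2,500 visitors per arm
```

The two orders of magnitude between those figures are why macro-conversion tests on low-traffic sites can run for months before they can be called.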

Low-traffic websites can still conduct A/B tests, but you should be strategic about which tests you run. The following recommendations help you get the best value from Optimizely Experimentation and A/B testing.

Maximize the value of A/B testing

Test high-impact changes

Small changes can sometimes move conversion rates, but testing something big may have a more noticeable effect. Focusing your tests on areas of your site or application that visitors consider important can influence conversion rates more than testing small modifications on niche pages.

Testing high-impact changes might seem intimidating, but it increases the likelihood of achieving a drastic difference in conversion rates and statistically significant results within a reasonable timeframe. A losing variation can also reveal valuable insights about your customers' values and behavior to inform future tests.

However, if you change many things at once, you may be unable to attribute the change in conversion rate to any one element on the page. Instead of moving elements around at random, test distinct themes. Once you learn what is important to your customers, you can apply that knowledge elsewhere on the site or in your application.

Focus on micro-conversions

If your main goal is to increase conversions or sign-ups, positioning them as your primary metric may not be beneficial because verifying the results could take a long time. Testing conversions at the micro level helps you determine the immediate effect an A/B test has on a page and call your results more quickly. Micro-conversions include engagement with the page, clicking Add to Cart, viewing a certain number of pages, or clicking through to a product detail page.

You may also consider setting a conversion goal for when a visitor scrolls a certain percentage of your page or a custom event goal for users who stay 30 seconds or longer on your site.
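
As an illustration, the sketch below fires those two micro-conversion events through the Optimizely Web Experimentation snippet's event API. The event names are hypothetical, and each one would still need to be created as a custom event in your project before results are recorded.

```typescript
// Minimal sketch: push micro-conversion custom events onto the Optimizely
// Web snippet's queue. Event names ("scrolled_75_percent",
// "engaged_30_seconds") are hypothetical examples.
declare global {
  interface Window {
    optimizely: Array<Record<string, unknown>>;
  }
}

window.optimizely = window.optimizely || [];

// Fire once when the visitor has scrolled past 75% of the page.
let scrollTracked = false;
window.addEventListener("scroll", () => {
  const scrolled = (window.scrollY + window.innerHeight) / document.body.scrollHeight;
  if (!scrollTracked && scrolled >= 0.75) {
    scrollTracked = true;
    window.optimizely.push({ type: "event", eventName: "scrolled_75_percent" });
  }
});

// Fire once after the visitor has stayed on the page for 30 seconds.
window.setTimeout(() => {
  window.optimizely.push({ type: "event", eventName: "engaged_30_seconds" });
}, 30_000);

export {};
```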

Test the page directly

Measure changes directly on the page where the experiment runs rather than measuring final conversions several pages later in the funnel. For example, if you want to A/B test your product pages but do not have enough conversions to support an order confirmation goal, measure an earlier step, such as clicks on Add to Cart, to learn more about your users.

Consider a lower statistical significance setting

Optimizely Experimentation lets you change the statistical significance level for each project. Statistical significance measures how unusual your experiment results would be if there were no real difference between your variation and baseline and any difference in lift came from random chance.

  • A higher statistical significance setting declares fewer false positives, but Optimizely Experimentation generally takes longer to declare results.

  • A lower statistical significance setting increases the likelihood of false positives, but you can run experiments at a higher velocity because Optimizely Experimentation requires a smaller sample size, as the sketch below illustrates.

See articles on statistical significance and statistical significance settings to learn more.
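
To make the trade-off concrete, the sketch below reuses the classical fixed-horizon approximation from earlier to compare the sample a 95% and a 90% significance setting would need for the same hypothetical lift. Stats Engine's sequential calculation will give different exact numbers, but the direction of the effect is the same.

```typescript
// Illustrative fixed-horizon comparison of sample size at two significance
// settings. The baseline (3%) and lift (20% relative) are hypothetical.
function visitorsPerVariation(p1: number, p2: number, zAlpha: number, zBeta = 0.84): number {
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p2 - p1) ** 2);
}

const baseline = 0.03;
const variation = 0.036; // 20% relative lift

console.log(visitorsPerVariation(baseline, variation, 1.96));  // 95% significance -> ~13,900 per arm
console.log(visitorsPerVariation(baseline, variation, 1.645)); // 90% significance -> ~10,900 per arm (~20% fewer)
```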

Avoid multivariate testing

Multivariate tests split traffic across every combination of the changes being tested, so each combination receives only a fraction of your visitors and results take longer to reach statistical significance. Use A/B tests until you have sufficient traffic to direct to multiple variations.

Avoid niche testing

Avoid testing areas of your site or application that get few hits. Instead, widen your targeting conditions to include as many visitors as feasible. Site-wide banner tests and landing page tests take advantage of your current traffic. They are more likely to reach statistical significance in a shorter period than if you were to test only one specific product page, for instance.

Test to improve SEO and user experience

To improve your site's SEO, you can set up experiments to learn which actions have the biggest impact. 

To determine which search terms cause the most users to take action, set up tests that target individual AdWords parameters in your site's URL or look for a specific search term in document.referrer. Alternatively, set up various CTAs in an email campaign that lead to different URLs and track conversions for each URL. Once you determine the best-performing search terms, you can better target your SEO and save time and money.
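
As a sketch of what that targeting check might look like in a custom JavaScript audience condition, the function below returns true for visitors whose landing URL or referrer contains a given search term. The utm_term parameter and the example term are placeholders, not values prescribed by Optimizely.

```typescript
// Sketch of a search-term check for a custom JavaScript audience condition.
// The "utm_term" parameter and the example term are placeholders.
function arrivedFromSearchTerm(term: string): boolean {
  // Check the landing URL for an ad keyword parameter...
  const params = new URLSearchParams(window.location.search);
  const utmTerm = (params.get("utm_term") ?? "").toLowerCase();

  // ...and fall back to the referring page's URL, where search engines
  // usually encode spaces as "+" or "%20".
  const referrer = document.referrer.toLowerCase();
  const encoded = [term.replace(/\s+/g, "+"), encodeURIComponent(term)];

  return utmTerm.includes(term) || encoded.some((t) => referrer.includes(t));
}

// Target only visitors who searched for the example term (lowercase).
arrivedFromSearchTerm("free shipping");
```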

You can also test whether users who clicked a CTA in an email or AdWords listing are more likely to convert when greeted with that same message on your page than when shown your standard message. To discover this and personalize your users' experience, run a standard A/B test with the original message against a variation that carries the personalized message.