Launch and monitor your experiment

Relevant products:

  • Optimizely Web Experimentation
  • Optimizely Personalization
  • Optimizely Performance Edge
  • Optimizely Feature Experimentation
  • Optimizely Full Stack (Legacy)
This article is part of The Optimization Methodology series.

When you launch a test, you may want to monitor its progress to implement a winning variation as soon as possible or to stop an experiment that is losing.

Optimizely's Stats Engine lets you look at running tests whenever you want. Unlike traditional fixed-horizon statistics, where checking a test repeatedly as it runs increases the chance of calling a false positive, results in Stats Engine are always valid. This lets you monitor the health of your experiments at any point.
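
To see why continuous monitoring is a problem for traditional statistics, consider the sketch below. It does not show how Stats Engine works internally; it only illustrates the general point that repeatedly checking a fixed-horizon t-test and stopping at the first significant result inflates the false positive rate well above the nominal 5%.

```python
# Illustration only: repeated "peeking" at a fixed-horizon test inflates
# false positives. Both groups are drawn from the SAME distribution, so
# every "significant" result here is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 2000   # simulated A/A tests with no real difference
n_per_peek = 100       # visitors added between checks
n_peeks = 20           # how many times results are checked

false_positives = 0
for _ in range(n_experiments):
    a = rng.normal(size=n_per_peek * n_peeks)
    b = rng.normal(size=n_per_peek * n_peeks)
    for peek in range(1, n_peeks + 1):
        n = peek * n_per_peek
        _, p = stats.ttest_ind(a[:n], b[:n])
        if p < 0.05:   # stop at the first "significant" result
            false_positives += 1
            break

print(f"False positive rate with peeking: {false_positives / n_experiments:.1%}")
# Typically prints well above the nominal 5%.
```

Stats Engine uses sequential testing to avoid this inflation, which is why you can check a running experiment at any time.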

Testing should be treated as a standardized, scalable process, not a project you monitor constantly. Check each test at least once after launch to ensure you do not see any technical issues. When you determine that your test is running properly, let it run until the projected time-to-completion from your experiment plan.

Materials to prepare

People and resources

  • Program manager
  • Executive sponsorship

Actions you will perform

  • Share the experiment plan with QA, marketing, Support, and development stakeholders
  • Launch the experiment
  • Monitor results
    • Optimizely Experimentation results with a focus on the significance and lift parameters you set in your test plan
    • Performance for different audience segments
    • Third-party integrations

Deliverables

  • Launched experiment
  • Notification to the broader team of the test launch, including the test scope

What to watch out for

  • A lack of clarity in how Optimizely Experimentation tracks data, especially about conflicting results between analytics systems

Notice health metrics

When you monitor a running experiment, look for issues that may prompt you to stop the test. When you launch, let stakeholders, including your QA team and the team in charge of your production environment, know that the experiment is live. Share the experiment plan.

Review your results page a few days after launch, once at least 5% of your proposed sample size has seen your variations.
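
A quick way to check that threshold is sketched below; the visitor count and sample-size target are placeholders you would take from your results page and experiment plan.

```python
# Rough progress check: has at least 5% of the planned sample seen a variation?
# The numbers are placeholders; take them from your experiment plan and results page.
planned_sample_size = 40_000        # visitors needed per the experiment plan
visitors_bucketed_so_far = 2_600    # visitors who have seen any variation so far

progress = visitors_bucketed_so_far / planned_sample_size
print(f"Sample progress: {progress:.1%}")

if progress < 0.05:
    print("Too early to review results meaningfully; keep waiting.")
else:
    print("At least 5% of the planned sample has seen a variation; review health metrics.")
```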

Review the following metrics to ensure that your goals track properly and variations perform within the expected range based on past results.

Check if the following goals are within the expected range for your experiment:

  • Revenue
  • Form submissions
  • Primary goal
  • Monitoring goals
  • Data in third-party integrations
  • All other metrics important to your business
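
One way to make this check systematic is sketched below; the metric names and expected ranges are hypothetical examples, so substitute the baselines from your own past results.

```python
# Compare observed goal values against the ranges you expect from past results.
# Metric names, ranges, and observed values are hypothetical examples.
expected_ranges = {
    "revenue_per_visitor": (1.80, 2.60),     # dollars
    "form_submission_rate": (0.025, 0.045),  # proportion of visitors
    "primary_goal_rate": (0.10, 0.16),
}

observed = {
    "revenue_per_visitor": 2.10,
    "form_submission_rate": 0.012,   # suspiciously low: investigate tracking
    "primary_goal_rate": 0.13,
}

for metric, (low, high) in expected_ranges.items():
    value = observed[metric]
    status = "OK" if low <= value <= high else "OUT OF RANGE - check setup and QA"
    print(f"{metric}: {value} ({status})")
```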

If something looks wrong, ensure your experiment is set up correctly. The standard QA process is a good place to start.

Results may fluctuate, and conversions may drop before your test reaches statistical significance. This variance over time is due to chance. Stats Engine accounts for it so that results that reach your significance threshold reliably predict future outcomes.

Check your difference interval to know whether a drop in conversions is real. A drop in conversions combined with a narrowing difference interval suggests, with increasing precision, that this variation will convert at a lower rate in the future. If this continues for days, consider pausing the experiment to assess the impact of continued conversion losses and to evaluate why the variation is losing. The results of this test can still help you design better future experiments.

If the conversion rate drops but the difference interval is widening, Stats Engine is projecting a wider range of statistically likely outcomes—it is less sure what the conversion rate would be if the test ran again. If the difference interval straddles zero, Stats Engine has not yet found a statistically significant difference in your conversion rates.

If you have the resources, keep running this experiment to achieve more precise results.
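
The reasoning above can be condensed into a small decision sketch; the interval values are placeholders read off your results page, and the point is only how to read a conversion drop together with the difference interval.

```python
# Interpret a drop in the variation's conversion rate together with the
# difference interval shown on the results page. Values are placeholders.
def interpret(diff_low, diff_high, prev_width=None):
    if diff_low < 0 < diff_high:
        print("Interval straddles zero: no statistically significant difference yet.")
    elif diff_high < 0:
        print("Entire interval is below zero: the variation is losing.")
    else:
        print("Entire interval is above zero: the variation is winning.")

    if prev_width is not None:
        width = diff_high - diff_low
        if width < prev_width:
            print("Interval is narrowing: Stats Engine is getting more precise.")
        else:
            print("Interval is widening: Stats Engine is less certain of the outcome.")

# Example: conversions dropped, the interval is [-2.1%, +0.4%], and it was wider yesterday.
interpret(-0.021, 0.004, prev_width=0.035)
```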

Historical experience can also provide important insight into what to expect from your results.

Segment your visitors

If your experiment is healthy overall, look at your different visitor segments. Check that conversions have not crashed for your most valuable customers.

In Optimizely Web Experimentation, you can segment by the following attributes:

  • Browser type
  • Source type
  • Device type (mobile)
  • Campaign source types
  • Custom segments for audiences that are important for your business

Unexpected results are part of why you test. The conversion drop may be due to chance, as described above, or it may be an opportunity to learn from the test.

A steep drop in conversions for just one segment may indicate that you are delivering a bad or dissonant experience for certain visitors. If that segment of visitors is highly valuable to your business, you may not want that variation.

For example, suppose returning visitors drive conversions on your site because it takes several visits for customers to feel comfortable purchasing. You segment new versus returning visitors and find a steep drop in conversions among returning visitors. Investigate whether there is a technical issue or whether you are serving a bad experience to that segment.
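
A sketch of that comparison is below; the counts are placeholders, and in practice you would read segment-level numbers from the results page or an analytics export.

```python
# Compare conversion rates by segment and variation. Counts are placeholder
# examples; pull real numbers from the results page or an analytics export.
data = {
    # (segment, variation): (conversions, visitors)
    ("returning", "original"):  (540, 4000),
    ("returning", "variation"): (310, 4100),   # steep drop: investigate
    ("new",       "original"):  (180, 6000),
    ("new",       "variation"): (195, 5900),
}

for (segment, variation), (conversions, visitors) in data.items():
    rate = conversions / visitors
    print(f"{segment:>9} / {variation:<9}: {rate:.2%}")
```

A steep drop confined to one segment, like the returning-visitor numbers in this example, points to a technical issue or a dissonant experience rather than ordinary variance.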

Pause and re-launch a running experiment that is broken or provides an unacceptable experience to customers.

If your key metrics and visitor segments convert as expected, let the experiment run according to your test plan.

Pause and re-launch a running experiment

To re-launch a broken experiment or one delivering a bad experience, pause the test in Optimizely Web Experimentation.

  1. Select the experiment or campaign.

  2. Click the ... icon, and select Pause. When you pause an experiment, visitors cannot see any of the variations.

  3. Duplicate the experiment to create an experiment with the same variations and goals to revise the design. If you revise and restart the original experiment, visitors who saw the first experience remain bucketed and see the same experience when they return (unless they delete their cookies).

  4. Use the Visual Editor to remove the variation you want to exclude and add or modify any other variations.

  5. Run the new test. Visitors are re-bucketed.
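
If you prefer to pause programmatically, something like the sketch below may work. It assumes the Optimizely v2 REST API's experiments endpoint accepts an action=pause query parameter on a PATCH request, so confirm the endpoint, parameters, and authentication against the current REST API reference before relying on it.

```python
# Hedged sketch: pause an experiment through the Optimizely REST API.
# Assumes the v2 experiments endpoint accepts an `action=pause` query
# parameter on PATCH; verify against the current REST API documentation.
import os
import requests

API_TOKEN = os.environ["OPTIMIZELY_API_TOKEN"]   # personal access token
EXPERIMENT_ID = "12345678"                       # placeholder experiment ID

response = requests.patch(
    f"https://api.optimizely.com/v2/experiments/{EXPERIMENT_ID}",
    params={"action": "pause"},
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={},
    timeout=30,
)
response.raise_for_status()
print("Experiment status:", response.json().get("status"))
```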

You can now let your test run until the time to completion projected by your test plan. When you reach that time, you are ready to interpret and take action on your results.