Test Optimizely Experimentation

  • Optimizely Feature Experimentation
  • Optimizely Web Experimentation
  • Optimizely Personalization

Quality assurance (QA) testing is the last step in creating an experiment. A rigorous testing process ensures that your experiment looks and works the way you want before you show it to visitors. When you finish building your experiment, take time to verify your setup so you can trust your results.

Prepare materials and resources

You can use Create a basic experiment plan as a template to outline the QA test checklist. Major topics the checklist should cover are:

  • All new functionality to check

  • All goals and analytics to be captured

  • All user flows into the experiment

  • Verification that the new experiment does not break existing functionality

Next, identify who will be involved with the testing process. Different types of experiments require different types of people and resources. An experiment with custom code may require developer resources, whereas changes made with the Visual Editor may not.

Consider the following roles and skill sets:

  • Developer or technical skills

  • Experiment implementer

  • Power user

  • Optimizely Web Experimentation or Optimizely Feature Experimentation practitioner 

  • QA testing team 

When you set your experiment deadlines, give yourself enough time to go through the QA process fully before launch. Catching issues early helps avoid broken experiments, skewed data, and unintentional impact on visitors. 

  • Plan time for the testing process

  • Synchronize with the testing team to understand the time required

  • Allocate time and resources accordingly if issues with the experiment are discovered

Ultimately, the most effective testing process depends on the processes that are in place in your organization. Use the framework in this article to build rigor into your test steps and integrate your testing process with company practices.

Environments

Many mature optimization teams have access to separate staging and production environments. Separate environments reduce the risk of accidentally launching an unfinished or unverified experiment to the live site.

If you have separate environments, use a staging environment to set up the experiment and perform initial QA without exposing the experiment to visitors. Then replicate the experiment into your production environment project, and perform the same QA process there. When possible, ensure that your production and staging environments match so experiments function the same way in both.

If you are using environments, consider the following steps when building a QA checklist:

  • Verify that components in the production environment match the staging environment.
  • Verify that the experiment functions correctly in staging.
  • Verify that experiment changes do not impact existing site functionality.
  • Run QA steps in the staging environment.
  • Run QA steps in the production environment.


With the environments feature, you build experiments in one project and allocate them to either a development or production environment, so no duplication or rebuilding is required.

If you do not have a dedicated staging environment, you can still test your experiments. Follow the steps outlined below, but take special care when you use the test cookie to QA in a live environment.

QA tools and resources

  • The Optimizely Experimentation Chrome extension is a must-have if you do QA regularly. It gives you a quick read of active pages and shows which experiments and variations you are bucketed into. The information this extension provides can substantially shorten your time spent on QA.
  • The preview tool is your first line of defense in Optimizely Web Experimentation. Use it to check visual changes and functionality without publishing your experiment or campaign. The preview tool lets you view all the experiments and campaigns on any page of your site, whether they are unpublished or live to visitors. You can check how your variations and experiences appear to different audiences, and verify that events are firing correctly.
  • The share link feature helps you share specific variations with internal stakeholders.
  • The test cookie feature also helps you test a running experiment and share it with internal stakeholders, without exposing it to your visitors.
  • The force variation parameter (for example, ?optimizely_x=VARIATIONID) only shows data from live experiments, because data from draft and paused experiments is excluded from Optimizely Web Experimentation by default. To preview draft or paused experiments, add &optimizely_token=PUBLIC to the force variation parameter, or use the share link or the preview tool.
  • The JavaScript API lets you check which live experiments and campaigns are running on a page and which variation you are bucketed into (see the sketch after this list).
  • The network console helps you verify whether events in a live experiment or campaign are firing correctly. Use it to check that metrics are tracked correctly on your Results page.
  • The Optimizely Web Experimentation log helps you diagnose more difficult issues in a live experiment or campaign. It tells you whether an experiment or campaign activates on the page, whether you qualify for an audience condition, and whether changes on a page are applied.
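
For example, you can combine the force variation parameter and the JavaScript API directly in the browser. The following is a minimal sketch, assuming the standard Optimizely Web Experimentation snippet and its state API; run it in the browser console on a page where the snippet is loaded.

```javascript
// Minimal sketch: inspect which experiments are active and how you are bucketed.
// To force a variation on a live experiment, append ?optimizely_x=VARIATIONID to the URL;
// for a draft or paused experiment, also append &optimizely_token=PUBLIC.
var state = window.optimizely.get('state');

// IDs of experiments currently active on this page
console.log('Active experiments:', state.getActiveExperimentIds());

// Map of experiment ID to the variation you are bucketed into
console.log('Variation map:', state.getVariationMap());

// Pages that have activated on this URL
console.log('Active pages:', state.getPageStates({ isActive: true }));
```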

Basic QA steps before launch

You will test variation code, targeting and activation, and metrics. These steps require the Optimizely Web Experimentation preview tool.

Test variation code

Does your variation look the way it should? How your experiment is presented has an impact on the results, so make sure you have it the way you want it.

While you build your experiment, the Visual Editor responds to your changes as you make them. Use Optimizely Web Experimentation's preview tool to run the changes on the webpage outside the Visual Editor environment.

Consider the following points for your QA checklist.

  • Do the changes you made appear on the page?
  • Are the changes appearing where expected?
  • Does dynamic content still work as expected?
  • Does the variation work in desktop, mobile and tablet views?
  • Do the changes work across different browsers?
  • Do you observe a flash of the original content before your changes appear?

To troubleshoot, see Cannot see the variation.

Test targeting and activation

Targeting defines where the experiment runs. To ensure the experiment activates on the appropriate URLs, open the preview tool to the Currently Viewing tab. If your experiment is activated, you see the name of the campaign, the name of the experiment, and the name of the active page listed there.  

If you run a multi-page experiment, go to each URL and ensure that the page activates in the preview tool. The preview tool is persistent, so go to and from URLs included in your experiment's targeting to ensure that the experiment activates and deactivates as expected. 

Consider the following points for your QA checklist.

  • Does each URL included in targeting activate the experiment?
  • Do URLs that are not included in targeting correctly leave the experiment inactive?
  • If you are using conditional activation, does the page activate when expected?
  • If using support for dynamic websites, are pages deactivated when conditions are false?
  • Are experiment changes removed when the page is deactivated?

To troubleshoot, see Manage experiments and campaigns.

Test metrics

Before your experiment runs, you can use the preview tool to confirm your events are firing as expected.

The event feed references the saved instance of Optimizely Web Experimentation and not the snippet that is published on the CDN. 

Events that are not yet published or attached to the experiment can still be triggered in the feed. Keep in mind that events triggered during preview are not recorded on the Results page, even if the experiment is live.

Consider the following points for your QA checklist.

  • Does the click event attached to an element convert when clicked on?
  • If multiple elements are targeted by a single click event, does each one trigger a conversion?
  • Are events triggered on elements that appear after a visitor takes a certain action?
  • Are events triggered on elements that appear late on the page like a modal or pop-up?
  • If you are working with a redirect experiment, have you set up a hybrid page to track events on both the original and redirected URLs?
  • If using custom events, is the code calling the API working as expected? (See the sketch below this checklist.)
  • For events attached to form submissions, are events fired when a visitor clicks the button or presses Return?
  • If there are errors from an incomplete form, should the event fire?

To troubleshoot, see Metrics do not track correctly.
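
If you use custom events, the code that triggers them is typically a push to the Optimizely Web Experimentation event API. The following is a minimal, hedged sketch; the event name is hypothetical and must match the API name of the event configured in your project.

```javascript
// Minimal sketch of firing a custom event from your site's code.
// 'trial_signup' is a hypothetical API name; it must match the event's
// API name in your Optimizely Web Experimentation project.
window['optimizely'] = window['optimizely'] || [];
window['optimizely'].push({
  type: 'event',
  eventName: 'trial_signup'
});
```

During QA, confirm the push runs when you expect it to and that the corresponding event appears in the preview tool's event feed or in the Network tab.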

An easy way to share your variation designs with stakeholders is to send a share link. However, anyone who enters through a share link sees only the variation you selected on the specified page; they cannot navigate through multi-page experiments. If you made changes to multiple pages, share the link or QR code for each specific page.

Advanced QA steps

After your experiment passes the basic QA checks, you should have a final round of testing with the experiment running live. If you run the experiment in either a development environment (with no live traffic) or in a production environment with a test cookie approach, you can prevent visitors from inadvertently seeing your experiment before you are satisfied it is ready. 

Running an experiment live is an essential step for testing audience conditions, which the preview tool does not address. The test cookie approach lets you view the experiment as your visitors would, trigger events that show up on the results page, and expose issues (for example, timing) that might otherwise slip under the radar.

If your company does not allow adding cookies to staging or production environments, you can still create heavy audience restrictions that prevent you from showing the experiment to visitors while working through the QA process.

When you test a running experiment live, use a new incognito window for each evaluation. Optimizely Web Experimentation tracks visitors by storing data in the browser's local storage. A new incognito window gives you a clean slate, so no previous session data or prior bucketing influences your QA evaluation of the experiment.
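
One common way to implement the test cookie approach is to set a QA-only cookie in the browser and add a matching cookie audience condition to the experiment, so only testers qualify. The sketch below is an assumption-laden example: the cookie name and value are hypothetical, and your audience condition must mirror them exactly.

```javascript
// Minimal sketch: set a QA-only test cookie from the browser console.
// 'optimizely_qa' and 'true' are hypothetical; use whatever name and value
// your team agrees on, and match them in a cookie audience condition.
document.cookie = 'optimizely_qa=true; path=/; max-age=' + 60 * 60 * 24;

// After QA, expire the cookie so you stop qualifying for the audience:
// document.cookie = 'optimizely_qa=true; path=/; max-age=0';
```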

Advanced event check

There may be times when you want to confirm both that an event is triggered and that the Results page is receiving it. For revenue or other non-binary events, you should confirm that the correct value is passed in the API call. For these situations, when the experiment is running, you can check whether an event fires live. Use the Network tab in the developer console to check the values sent to Optimizely Web Experimentation as they happen.

  • Open a new incognito or private browsing window, go to the pages you want to test, and set a test cookie.
  • Go to the Network tab of the developer console.
  • Perform the action that you expect to fire the goal in Optimizely Web Experimentation. Look for the event corresponding to that goal in the Network tab to see if it appears when you expect it to.
  • Click all elements or areas that should trigger a click event.
  • Go to URLs tracking pageview events.
  • If the action involves moving from one page to another, select Preserve log in the network tab to track the network call across pages.
  • If triggering non-binary events, check that the event has the correct value passed in the network request (see the sketch after these steps).
  • For revenue-specific metrics, if multiple currencies are accepted, are they converted correctly?
  • Check your Results page and your analytics integration to ensure that data is captured correctly.
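
For revenue and other non-binary events, it helps to know what a correct call looks like before you inspect the Network tab. The following is a minimal sketch with a hypothetical event name; revenue is passed in cents by convention, so confirm the value you see in the network request matches the order total.

```javascript
// Minimal sketch of a revenue event push (hypothetical event name).
// Revenue is passed in cents, so a $49.99 order is sent as 4999.
window['optimizely'] = window['optimizely'] || [];
window['optimizely'].push({
  type: 'event',
  eventName: 'purchase_confirmation',
  tags: {
    revenue: 4999 // verify this value in the Network tab and on the Results page
  }
});
```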

Activation and bucketing issues

Sometimes the QA process discovers a component of the experiment that is not working as expected. In these cases, the JavaScript API and the Optimizely Web Experimentation Log can be helpful in diagnosing where and why an issue exists.  

Instances where the log and the JavaScript API can help resolve issues include:

  • If an experiment is part of an exclusion group, are visitors excluded as expected?
  • Identifying whether a visitor is in the holdback.
  • Explaining why an experiment is not activating when expected.
  • Checking whether an audience condition is passing or failing for a specific experiment.
  • Manually activating pages or manually sending events (see the sketch below).
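
For example, you can turn on the log with a query parameter and use the JavaScript API to manually activate a page or re-check experiment state while you diagnose. This is a minimal sketch under the assumption of the standard Web snippet APIs; the page API name is hypothetical.

```javascript
// Enable the log by appending ?optimizely_log=info to the URL, then open the
// browser console to read activation, targeting, and audience decisions.

// Minimal sketch: manually activate a page ('checkout_page' is a hypothetical API name).
window['optimizely'] = window['optimizely'] || [];
window['optimizely'].push({
  type: 'page',
  pageName: 'checkout_page'
});

// Check which experiments activated and how you were bucketed.
var state = window.optimizely.get('state');
console.log(state.getExperimentStates({ isActive: true }));
console.log(state.getVariationMap());
```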

Analytics and other third-party platforms

The best time to confirm that analytics platforms like Google and Adobe are receiving data with the correct experiment and variation information is when the experiment is running live. Integration logic does not run in preview mode; Optimizely Web Experimentation only evaluates and passes information to analytics platforms when the experiment is in an active state.

Include the following items in your checklist (a spot-check sketch follows the list):

  • Expected experiment and variation IDs are passed to analytics.
  • Analytics network event contains the experiment and variation information.
  • Analytics integration captures data within a custom report.
  • Analytics integration data aligns with what is expected.
  • Failure in audience or URL targeting is reflected in the integration setup.
  • If traffic allocation is less than 100%, is the holdback passed to analytics as expected?
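
A quick spot check, sketched below under the assumption that your integration passes experiment and variation IDs to analytics: read the IDs from the Optimizely client state, then confirm the same IDs appear in the analytics network request or custom report. How you inspect the analytics side depends on your platform.

```javascript
// Minimal sketch: list the experiment and variation IDs that the integration
// should pass to analytics, for comparison against the analytics request payload.
var variationMap = window.optimizely.get('state').getVariationMap();

Object.keys(variationMap).forEach(function (experimentId) {
  var decision = variationMap[experimentId];
  console.log('Experiment ' + experimentId + ' -> variation ' + decision.id + ' (' + decision.name + ')');
});
```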

Deliverables

When you finish your QA testing, you should have the following:

  • A fully built experiment in a production environment.
  • A completed QA and use case document, with all cases marked pass.

Example QA checklist

The QA checklist that you created as part of your experiment plan will help you perform a thorough check. Your checklist should include:

  • All goals that were added, as well as how each is triggered.
  • All functionality that’s been added (for example, a new button).
  • Every visitor use case, including all expected user flows to and from the page.
  • Audiences that are eligible and ineligible to see the experiment.
  • URLs where the experiment should and should not run.
  • Sample workflow to fire a goal (especially for custom events).
Audience visitor path | Eligible for the experiment? | Pass / Fail
A visitor from Bing | Eligible for the experiment |
A visitor from Yahoo! | Eligible for the experiment |
A visitor from Google | Eligible for the experiment |
A visitor who clicks a paid ad | Not eligible for the experiment |
A visitor who clicks an email link | Not eligible for the experiment |

 
Experiment metric | Location / behavior to trigger | Pass / Fail
Click on CTA | Click the 'Learn More' button in the hero image on example.com |
View Checkout Page | Navigate to URL example.com/checkout |
View 3 Card Promotion | Scroll 60% of the way down the page on example.com/promotions |
Track revenue | On all order confirmation pages, confirm the value sent is correct |

After your QA team verifies every item on your QA checklist, you are ready to launch the test to visitors. Remove the test cookie audience condition, and then click Publish.