Before launching your Web Experimentation experiment, you should test (QA) your variation code, targeting, activation, and metrics. For more information about testing at Optimizely, see the following documentation:
- Test Optimizely Experimentation
- Create a basic experiment plan
- Create an advanced experiment plan and QA checklist
Test variation code
Does your variation look the way it should? How your experiment is presented has an impact on the results, so make sure the variation appears exactly as you intend.
As you build your experiment, the Visual Editor responds to your changes in real time. You can use Optimizely Web Experimentation's Preview tool to view those changes on the webpage outside the Visual Editor environment.
Consider the following points for your QA checklist.
- Do the changes you made appear on the page?
- Are the changes appearing where expected?
- Does dynamic content still work as expected?
- Does the variation work in desktop, mobile, and tablet views?
- Do the changes work across different browsers?
- Do you observe a flash of the original page content before the variation displays?
To troubleshoot, see Cannot see the variation.
Test targeting and activation
Targeting defines where the experiment runs. To ensure the experiment activates on the appropriate URLs, open the preview tool to the Currently Viewing tab. If your experiment is activated, you see the name of the campaign, the name of the experiment, and the name of the active page listed there.
If you run a multi-page experiment, go to each URL and ensure that the page activates in the preview tool. The preview tool is persistent, so go to and from URLs included in your experiment's targeting to ensure that the experiment activates and deactivates as expected.
Consider the following points for your QA checklist.
- Does each URL included in targeting activate the experiment?
- Do URLs not included in targeting correctly remain inactive?
- If you are using conditional activation, does the Page activate when expected?
- If using support for dynamic websites, are pages deactivated when conditions are false?
- Are experiment changes removed when the page is deactivated?
To troubleshoot, see Manage experiments and campaigns.
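As a supplement to the preview tool, you can check Page activation directly from the browser console with Optimizely's JavaScript API. The sketch below assumes the Web snippet has already loaded on the page; the stub (with a made-up Page ID and name) only exists so the sketch runs standalone.

```javascript
// Sketch: checking Page activation from the browser console.
// In the browser, window.optimizely is provided by the Web snippet once it
// loads; the stub below (hypothetical Page id/apiName) lets this run standalone.
const optly = (typeof window !== 'undefined' && window.optimizely) || {
  get: () => ({
    getPageStates: () => ({
      12345678901: { apiName: 'product_detail_page', isActive: true }
    })
  })
};

// Each entry reports whether that Page's activation conditions currently hold.
const pageStates = optly.get('state').getPageStates();
for (const [id, page] of Object.entries(pageStates)) {
  console.log(`Page ${id} (${page.apiName}) active: ${page.isActive}`);
}
```

Navigating to and from targeted URLs and re-running this check is a quick way to confirm the experiment activates and deactivates as expected.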
Test metrics
Before your experiment runs, you can use the preview tool to confirm your events are firing as expected.
Events that are not yet published or attached to the experiment still appear in the preview tool's event feed when triggered. Keep in mind that events triggered during preview are not recorded on the results page, even if the experiment is live.
Consider the following points for your QA checklist.
- Does the click event attached to an element convert when the element is clicked?
- If multiple elements are targeted by a single click event, does each one trigger a conversion?
- Are events triggered on elements that appear after a visitor takes a certain action?
- Are events triggered on elements that appear late on the page like a modal or pop-up?
- If you are working with a redirect experiment, have you set up a hybrid page to track events on both the original and redirected URLs?
- If using custom events, is the code calling the API working as expected?
- For events attached to form submissions, are events fired when a visitor clicks the button or presses Return?
- If there are errors from an incomplete form, should the event fire?
To troubleshoot, see Metrics do not track correctly.
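For the custom-event item above, a minimal sketch of calling the API looks like the following. The event name `form_submitted` is hypothetical; match it to an event you have created in Optimizely. In the browser, `window.optimizely` is the snippet's command queue; the plain array fallback only lets the sketch run standalone.

```javascript
// Sketch: triggering a custom event through the Optimizely Web API queue.
// 'form_submitted' is a hypothetical event name.
const optimizelyQueue =
  (typeof window !== 'undefined' && (window.optimizely = window.optimizely || [])) || [];

optimizelyQueue.push({
  type: 'event',
  eventName: 'form_submitted',
  tags: {
    revenue: 1999 // for revenue metrics, the value is passed in cents ($19.99)
  }
});
```

Firing this while the preview tool's event feed is open is a quick way to confirm the call reaches Optimizely with the value you expect.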
Advanced QA steps
After your experiment passes the basic QA checks, run a final round of testing with the experiment live. If you run the experiment in either a development environment (with no live traffic) or in a production environment with a test cookie approach, you can prevent visitors from inadvertently seeing the experiment before you are satisfied it is ready.
Running an experiment live is an essential step for testing audience conditions, which the preview tool does not address. The test cookie approach lets you view the experiment as your visitors would, trigger events that will show up on the results page, and expose issues (for example, timing) that might otherwise slip under the radar.
If your company does not allow adding cookies to staging or production environments, you can still create heavy audience restrictions that prevent you from showing the experiment to visitors while working through the QA process.
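A test cookie can be set from the browser console and matched by a cookie-based audience condition. The cookie name `optly_qa` below is hypothetical; use whatever name your audience condition checks.

```javascript
// Sketch: building a QA cookie string for a cookie-based audience condition.
// 'optly_qa' is a hypothetical cookie name; match it to your audience setup.
function buildQaCookie(name, value, days) {
  const expires = new Date(Date.now() + days * 864e5).toUTCString();
  return `${name}=${value}; expires=${expires}; path=/`;
}

const qaCookie = buildQaCookie('optly_qa', 'true', 7);
if (typeof document !== 'undefined') {
  document.cookie = qaCookie; // set it when running in the browser console
}
console.log(qaCookie);
```

Only visitors whose browser carries this cookie then match the QA audience, which keeps the live experiment hidden from everyone else.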
Advanced event check
There may be times when you want to confirm both that an event is triggered and that the results page is receiving it. For revenue or other non-binary events, you should also confirm that the correct value is passed in the API call. In these situations, while the experiment is running, you can check live whether an event fires by watching the network requests to see the values sent to Optimizely Web Experimentation as they happen.
- Open a new incognito or private browsing window, go to the pages you want to test, and set a test cookie.
- Go to the Network tab of the developer console.
- Perform the action that you expect to fire the goal in Optimizely Web Experimentation. Look for the event corresponding to that goal in the Network tab to see if it appears when you expect it to.
- Click every element or area that should trigger a click event.
- Go to URLs tracking pageview events.
- If the action involves moving from one page to another, select Preserve log in the network tab to track the network call across pages.
- If triggering non-binary events, check that the event has the correct value passed in the network.
- For revenue-specific metrics, if multiple currencies are accepted, are they converted correctly?
- Check your Results page and your analytics integration to ensure that data is captured correctly.
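As an alternative to scanning the Network tab by hand, the steps above can be partly scripted from the console using the browser's Performance API. The event endpoint host used in the filter (`logx.optimizely.com`) is an assumption; confirm it against the requests you actually see in the Network tab.

```javascript
// Sketch: filtering captured network requests for Optimizely event calls.
// The endpoint host below is an assumption; verify it in your Network tab.
function findOptimizelyEvents(entries) {
  return entries.filter((entry) => entry.name.includes('logx.optimizely.com'));
}

// In the browser, pass performance.getEntriesByType('resource'); the sample
// entries below only let the sketch run standalone.
const sampleEntries = [
  { name: 'https://logx.optimizely.com/v1/events' },
  { name: 'https://example.com/static/app.js' }
];
const eventCalls = findOptimizelyEvents(
  typeof window !== 'undefined' ? performance.getEntriesByType('resource') : sampleEntries
);
console.log(eventCalls.map((entry) => entry.name));
```

Inspecting the matched request payloads then shows the exact event names and values being sent.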
Activation and bucketing issues
Sometimes the QA process discovers a component of the experiment that is not working as expected. In these cases, the JavaScript API and the Optimizely Web Experimentation Log can be helpful in diagnosing where and why an issue exists.
Instances where the log and JavaScript API can help resolve issues include:
- Confirming that visitors are excluded as expected when an experiment is part of an exclusion group.
- Identifying if a visitor is in the holdback or not.
- Explaining why an experiment is not activating when expected.
- Checking whether an audience condition is succeeding or failing for a specific experiment.
- Manually activating pages or manually sending events.
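To start diagnosing cases like these, you can turn on Optimizely Web's client-side log from the console. In the browser the snippet consumes pushed commands; the plain array fallback only lets the sketch run standalone.

```javascript
// Sketch: enabling Optimizely Web's client-side log at debug level.
const logQueue =
  (typeof window !== 'undefined' && (window.optimizely = window.optimizely || [])) || [];

logQueue.push({ type: 'log', level: 'debug' }); // other levels include info, warn, error, off
```

With the log enabled, the console reports details such as why a page did or did not activate and how audience conditions evaluated for the current visitor.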
Analytics and other third-party platforms
The best time to confirm that analytics platforms like Google Analytics and Adobe Analytics are receiving data with the correct experiment and variation information is while the experiment is running live. Optimizely Web Experimentation's integration logic does not run in preview mode; it evaluates and passes information to analytics platforms only when the experiment is in an active state.
Include the following items on your checklist:
- Expected experiment and variation IDs are passed to analytics.
- Analytics network event contains the experiment and variation information.
- Analytics integration captures data within a custom report.
- Analytics integration data aligns with what is expected.
- Failure in audience or URL targeting is reflected in the integration setup.
- If traffic allocation is less than 100%, is the holdback passed to analytics as expected?
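To verify the first two items, you can read the current experiment and variation assignments from the console and compare them against what your analytics integration records. The sketch assumes the Web snippet has loaded; the stub (with made-up IDs) only lets it run standalone.

```javascript
// Sketch: reading experiment/variation assignments to cross-check analytics.
// The stubbed ids below are hypothetical placeholders.
const client = (typeof window !== 'undefined' && window.optimizely) || {
  get: () => ({
    getVariationMap: () => ({
      2222222222: { id: '3333333333', name: 'Variation #1' }
    })
  })
};

// Maps each active experiment id to the variation the visitor was bucketed into.
const variationMap = client.get('state').getVariationMap();
for (const [experimentId, variation] of Object.entries(variationMap)) {
  console.log(`Experiment ${experimentId} -> variation ${variation.id} (${variation.name})`);
}
```

The IDs printed here should match the experiment and variation values appearing in the analytics network call and in your custom report.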