- Optimizely Feature Experimentation
- Optimizely Web Experimentation
- Optimizely Personalization
Quality assurance (QA) testing is the last step in creating an experiment. A rigorous testing process ensures that your experiment looks and works the way you want before you show it to visitors. When you finish building your experiment, take time to verify your configuration so you can trust your results.
Prepare materials and resources
Before starting your testing process, gather the following information so that testing runs smoothly.
Create QA test checklist
Use your basic experiment plan as a template to outline your QA checklist. Major QA topics your checklist should cover include the following:
- All new functionality to verify.
- All events, goals, and analytics to be captured.
- All user flows into the experiment.
- Existing functionality that the new experiment must not break.
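One lightweight way to track these topics is as a simple checklist structure with an explicit pass/fail status for each item. The following is an illustrative sketch only; the item names and helper functions are examples, not part of any Optimizely API:

```javascript
// Minimal QA checklist: each item records what to verify and its status.
// Item names and helpers are illustrative, not an Optimizely API.
function createChecklist(items) {
  return items.map((description) => ({ description, status: "pending" }));
}

function mark(checklist, description, passed) {
  const item = checklist.find((i) => i.description === description);
  if (item) item.status = passed ? "pass" : "fail";
}

function readyToLaunch(checklist) {
  // Launch only when every item has explicitly passed.
  return checklist.every((i) => i.status === "pass");
}

const checklist = createChecklist([
  "New functionality verified",
  "Events, goals, and analytics captured",
  "User flows into the experiment covered",
  "Existing functionality unbroken",
]);
mark(checklist, "New functionality verified", true);
console.log(readyToLaunch(checklist)); // false: three items are still pending
```

Requiring every item to be marked "pass" (rather than merely "not fail") mirrors the deliverable described later: a QA document with all cases explicitly marked as pass.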
Who is involved
Next, identify who is involved with the testing process. Different types of experiments require different types of people and resources. An experiment in Feature Experimentation or a Web experiment with custom code may require developer resources, whereas changes made with the Web Visual Editor may not.
The following are examples of roles you may need:
- Developer or someone with technical skills.
- Experiment implementer.
- Power user.
- Optimizely Web Experimentation or Optimizely Feature Experimentation practitioner.
- A testing team.
Deadlines
When you set your experiment deadlines, give yourself enough time to go through the testing process fully before launch. Catching issues early helps avoid broken experiments, skewed data, and unintentional impact on visitors.
- Plan time for the testing process.
- Sync with the testing team to understand the time required.
- Allocate time and resources accordingly if issues with the experiment are discovered.
Ultimately, the most effective testing process depends on the processes that are in place in your organization. Use this article to build rigor into your test steps and integrate your testing process with your company practices.
Environments
You should use separate development (staging) and production environments. Separate environments reduce the risk of accidentally launching an unfinished or unverified experiment to your live website or app. Use your staging environment to configure your experiment and perform initial QA without exposing the experiment to visitors. Then replicate the experiment into your production environment, and perform the same QA process there.
Every Feature Experimentation project starts with the following two default environments:
- Production – Primary environment.
- Development – Secondary environment.
For information on different environments in Web Experimentation, see Configure projects for development and production environments.
The following are example QA steps:
- Verify that the experiment functions correctly in staging.
- Run QA steps in the staging environment.
- Verify that components in the production environment match the staging environment.
- Run QA steps in the production environment.
- Verify that experiment changes do not break existing site functionality.
QA tools and resources
You can use the following tools to test your experiments before you release them.
Web Experimentation
See Test Optimizely Web Experimentation for additional information.
- Optimizely Experimentation Assistant Chrome extension – Gives you a quick overview of active pages and your bucketing for active experiments and variations.
- Preview – Lets you check visual changes and functionality without publishing your experiment. The preview tool lets you view all the experiments and campaigns on any page on your site, whether it is unpublished or live to visitors. You can check how your variations and experiences display to different audiences, and verify that events are firing correctly.
- Share Link – Lets you share unpublished variations and experiences with internal stakeholders using a link or QR code.
- Test cookie – Helps you test a running experiment and share it with internal stakeholders, without exposing it to your visitors.
- Force variation parameter (like ?optimizely_x=VARIATIONID) – Shows data from live experiments. Data from draft and paused experiments is excluded from Web Experimentation by default. To preview draft or paused experiments, add &optimizely_token=PUBLIC to the force variation parameter, or use the Share Link or the Preview tool.
- JavaScript API – Lets you check which live experiments and campaigns are running on a page and which variation you are bucketed into.
- Network console – Helps you verify whether events in a live experiment or campaign are firing correctly. Use it to check that metrics are tracked correctly on your Optimizely Experiment Results page.
- Optimizely Experimentation log – Helps you diagnose more difficult issues in a live experiment. It tells you whether an experiment activates on the page, whether you qualify for an audience condition, and whether changes on a page are applied.
- Summary – Provides a high-level summary of the experiment or campaign settings. You can download and share this summary with stakeholders for easier reporting on the experiments or campaigns you are running.
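The force variation parameter described above can be composed programmatically when sharing QA links. The following is a hedged sketch assuming the ?optimizely_x=VARIATIONID format shown in the list; the helper function itself is illustrative and not part of any Optimizely SDK:

```javascript
// Build a QA preview URL using the force variation query parameter.
// The parameter names (optimizely_x, optimizely_token) come from the
// documentation above; this helper itself is an illustrative sketch.
function buildForceVariationUrl(pageUrl, variationId, { includeDraft = false } = {}) {
  const url = new URL(pageUrl);
  url.searchParams.set("optimizely_x", variationId);
  if (includeDraft) {
    // Needed to preview draft or paused experiments.
    url.searchParams.set("optimizely_token", "PUBLIC");
  }
  return url.toString();
}

console.log(buildForceVariationUrl("https://example.com/promotions", "12345"));
// https://example.com/promotions?optimizely_x=12345
```

Using the standard URL API instead of string concatenation avoids malformed query strings when the page URL already contains parameters.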
Personalization
- Preview – Lets you check visual changes and functionality without publishing your campaign. The preview tool lets you view all the experiments and campaigns on any page on your site, whether it is unpublished or live to visitors. You can check how your variations and experiences display to different audiences, and verify that events are firing correctly.
- Share Link – Lets you share unpublished variations and experiences with internal stakeholders using a link or QR code.
- Test cookie – Helps you test a running experiment and share it with internal stakeholders, without exposing it to your visitors.
- Force variation parameter (like ?optimizely_x=VARIATIONID) – Shows data from live experiments. Data from draft and paused experiments is excluded by default. To preview draft or paused experiments, add &optimizely_token=PUBLIC to the force variation parameter, or use the Share Link or the Preview tool.
- Optimizely Experimentation log – Helps you diagnose more difficult issues in a live campaign. It tells you whether a campaign activates on the page, whether you qualify for an audience condition, and whether changes on a page are applied.
- Summary – Provides a high-level summary of the campaign settings. You can download and share this summary with stakeholders for easier reporting on the campaigns you are running.
Feature Experimentation
See Choose QA tests for Feature Experimentation for an overview and comparison of the following QA tools:
- Allowlist – Lets you run your experiment in any environment and shows a specific variation to up to 50 users that you have selected.
- QA audience – Uses a custom audience to test a running experiment and share it with internal stakeholders, without exposing it to your visitors.
- Forced bucketing – Lets you programmatically put a user into a flag variation from within your code.
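Conceptually, both the allowlist and forced bucketing override normal deterministic bucketing for specific users. The following simplified illustration shows that behavior; it is not the actual Feature Experimentation SDK, which uses a seeded hashing algorithm and dedicated override APIs (see the article linked above):

```javascript
// Simplified illustration of allowlist / forced bucketing behavior.
// Not the real Feature Experimentation SDK: a real SDK hashes user IDs with
// a seeded algorithm and exposes dedicated APIs for forcing variations.
function decideVariation(userId, variations, allowlist = {}) {
  // Allowlisted users always receive their assigned variation.
  if (allowlist[userId]) return allowlist[userId];
  // Otherwise, bucket deterministically from a hash of the user ID,
  // so the same user always sees the same variation.
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return variations[hash % variations.length];
}

const variations = ["control", "treatment"];
const allowlist = { qa_tester: "treatment" };
console.log(decideVariation("qa_tester", variations, allowlist)); // "treatment"
```

The key property for QA is visible here: overrides affect only the listed users, while everyone else continues to be bucketed deterministically and consistently.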
Deliverables
When you finish your QA testing, you should have the following:
- A fully built experiment in a production environment.
- A completed QA and use case document, with all cases marked as pass.
Example QA checklist
The QA checklist that you created as part of your experiment plan will help you perform a thorough check. Your checklist should include the following:
- All goals that were added, as well as how each is triggered.
- All functionality that has been added (for example, a new button).
- Every visitor use case, including all expected user flows to and from the page.
- Audiences that are eligible and ineligible to see the experiment.
- URLs where the experiment should and should not run.
- Sample workflow to fire a goal (especially for custom events).
| Audience visitor path | Eligible for the experiment? | Pass / Fail |
| --- | --- | --- |
| A visitor from Bing | Eligible for the experiment | |
| A visitor from Yahoo! | Eligible for the experiment | |
| A visitor from Google | Eligible for the experiment | |
| A visitor who clicks a paid ad | Not eligible for the experiment | |
| A visitor who clicks an email link | Not eligible for the experiment | |
| Experiment metric | Location / behavior to trigger | Pass / Fail |
| --- | --- | --- |
| Click on CTA | Click the 'Learn More' button in the hero image on example.com | |
| View checkout page | Navigate to URL example.com/checkout | |
| View 3-card promotion | Scroll 60% of the way down the page on URL example.com/promotions | |
| Track revenue | On all order confirmation pages, confirm that the value sent is correct | |
After your QA team verifies every item on your QA checklist, you are ready to launch the test to visitors.