Relevant products:
- Optimizely Web Experimentation
- Optimizely Performance Edge
- Optimizely Feature Experimentation
- Optimizely Full Stack (Legacy)
After prioritizing a list of optimization ideas, your next step is to implement them. A basic experiment plan helps you scope and launch individual experiments. Estimate the cost of a test in terms of traffic and time, and compare that cost to its potential value.
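For example, a quick back-of-the-envelope calculation can show how long a test needs to run for a given amount of traffic. The sketch below is illustrative only; the visitor counts and required sample size are hypothetical assumptions, and in practice the sample size should come from a sample size calculator.

```python
# Illustrative sketch: rough test duration from hypothetical traffic numbers.
# The required sample size per variation should come from a sample size
# calculator; the figures below are assumptions, not recommendations.

def estimate_duration_days(sample_per_variation: int,
                           num_variations: int,
                           eligible_visitors_per_day: int,
                           traffic_allocation: float = 1.0) -> float:
    """Approximate number of days needed to collect enough visitors."""
    total_needed = sample_per_variation * num_variations
    visitors_per_day = eligible_visitors_per_day * traffic_allocation
    return total_needed / visitors_per_day

# Example: 15,000 visitors per variation, 2 variations (original + variation),
# 4,000 eligible visitors per day, 50% of traffic allocated to the experiment.
print(estimate_duration_days(15_000, 2, 4_000, 0.5))  # 15.0 days
```

If the estimated duration is longer than you can commit to, the idea may need a larger minimum detectable effect, more traffic, or a lower priority.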
You should determine:
- Why you are running this experiment (hypothesis)
- When and where your variations run
- Who you want to see this experiment
- What changes your variation makes
- How you measure success
Use this article to design and implement individual ideas and communicate with stakeholders.
A basic experiment plan includes (see the sketch after this list for one way to capture these details):
- Experiment hypothesis
- Business goals
- Variation descriptions (wireframes or screenshots)
- Summary of all technical and design assets needed for the experiment
- Thresholds for statistical significance and lift that determine whether the change is implemented permanently
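One way to make these details concrete is to capture them in a single structured, shareable form. The sketch below is only an illustration; the field names, page, metrics, and thresholds are hypothetical placeholders, not an Optimizely format.

```python
# Hypothetical example of capturing the plan's key details in one place.
# Field names, pages, and metric names are placeholders, not a real schema.
experiment_plan = {
    "hypothesis": "Shortening the checkout form will increase completed orders.",
    "business_goal": "Lift completed orders per visitor",
    "where_and_when": {"pages": ["/checkout"], "planned_duration_days": 21},
    "audience": "Returning visitors on desktop",
    "variations": [
        "Variation 1: remove the optional phone number field",
        "Variation 2: collapse the billing address behind a checkbox",
    ],
    "metrics": {
        "primary": "order_completed",
        "secondary": ["checkout_form_submits"],
        "monitoring": ["revenue_per_visitor"],
    },
    "decision_criteria": "Implement permanently at >= 95% significance with positive lift",
}
```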
People and resources
- Program manager
- Developer
- QA team
- Designer
- Sample size calculator
Actions to perform
- Create a test plan document.
- Create a QA checklist.
- Review and update the plan with stakeholders.
- Confirm the scope of the test.
- Define primary, secondary, and monitoring goals.
- Confirm stakeholders who will create required resources.
- Document responsibilities and deadlines (in Kanban, Gantt chart, or other internal method).
- Finalize test plan.
Deliverables
- Test plan document containing:
- All details for building an experiment
- Technical requirements
- Scope of the experiment
- Creative assets or wireframes
- Screenshots of variations
Watch out for
- Ill-defined scope
- Lack of a clear hypothesis or goals
- Lack of executive buy-in
- Missing screenshots
- Poor understanding of resource needs
- Inaccurate effort estimates
- Inadequate documentation for QA
- Plan not being shared with the proper stakeholders
- Lack of adherence to the experiment plan when building the test
Create a basic experiment plan
Download this template to create your experiment plan. For more resources, see the Optimizely Experimentation Testing Toolkit.
Use this basic plan to manage the project and set expectations for external stakeholders. Establish the intentions of the proposed test for developers and designers who help you execute the plan. Provide all details needed to build and QA the experiment.
The plan also standardizes your testing practice and streamlines the approval process with stakeholders outside of your team. Create a list of QA visitor use cases that describe how a visitor arrives at or navigates through the experiment, along with the expected result. Your QA team uses this list to evaluate the experiment before launch.
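A QA list can be as simple as scenario-and-expected-result pairs. The example below is a hypothetical sketch for a desktop-only checkout experiment; your scenarios depend on your targeting and variations.

```python
# Hypothetical QA visitor use cases: how a visitor reaches the experiment
# and what they should see. Scenarios and expected results are examples only.
qa_use_cases = [
    {"scenario": "Visitor lands on /checkout from a paid search ad on desktop",
     "expected": "Bucketed into the experiment and sees the assigned variation"},
    {"scenario": "Visitor reaches /checkout on a mobile device",
     "expected": "Excluded by targeting and sees the original page"},
    {"scenario": "Returning visitor who previously saw Variation 1",
     "expected": "Sees Variation 1 again (consistent bucketing)"},
]

for case in qa_use_cases:
    print(f"- {case['scenario']} -> {case['expected']}")
```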
Create this plan as a shareable document that multiple stakeholders can reference: a presentation slide, an email template, or a wiki page covering basic test information. Strategic planners use this document to communicate the concept to the designers and developers responsible for implementing the experiment.
For maximum visibility, link your test plans to your prioritized list of ideas.
Define primary, secondary, and monitoring goals
When you create your experiment plan, decide how you will measure success. In Optimizely Experimentation, the primary goal measures how your changes affect visitor behavior. Consider setting secondary and monitoring goals to better understand customer behavior and to confirm that the lift you see sets your program up for success. For example, in a checkout experiment, the primary goal might be completed orders, a secondary goal might be checkout form submissions, and a monitoring goal might be revenue per visitor.
To learn more, see Primary metrics, secondary metrics, and monitoring goals.
Decide what type of experiment to run
The type of experiment depends on how you expect your changes to impact your primary conversion event.
The minimum detectable effect (MDE) can help you decide what type of test to run. (For a rough sense of how the MDE translates into traffic requirements, see the sample size sketch after the list below.)
- A/B test – Run an A/B test when improvement in your primary goal can be attributed to a single change in your user experience.
- Multivariate test – Create a multivariate test to evaluate each combination of variables against others. This helps measure how multiple changes interact with each other and influence conversions.
- Multi-page test – Create a multi-page test to measure how changes affect visitors through each stage of the funnel. This helps measure success in conversions across a series of pages.
- A/B/n test – Test multiple versions of your page (A, B, and any number of additional variations) without comparing every possible combination of changes against each other. This test type is more economical and lets you attribute lift to specific changes without running a full multivariate test.
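To get a feel for how the MDE and the number of variations translate into traffic requirements, the sketch below uses a classical fixed-horizon sample size approximation for comparing two proportions. The baseline conversion rate and MDE are hypothetical, and Optimizely's Stats Engine uses sequential statistics, so treat this only as a rough planning figure and rely on a sample size calculator for real planning.

```python
from statistics import NormalDist

def sample_size_per_variation(baseline_rate: float,
                              relative_mde: float,
                              alpha: float = 0.05,
                              power: float = 0.80) -> int:
    """Classical fixed-horizon estimate for a two-proportion comparison.
    A rough planning figure only, not how Stats Engine evaluates results."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_mde)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_power = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2) + 1

# Example: 10% baseline conversion rate, 10% relative MDE.
print(sample_size_per_variation(0.10, 0.10))  # roughly 14,700 per variation

# A full multivariate test multiplies the variations you must fill with traffic:
# 3 headlines x 2 hero images = 6 combinations, each needing its own sample.
print(3 * 2)  # 6
```

The smaller the MDE or the more combinations you test, the more traffic and time the experiment needs, which is why an A/B or A/B/n test is often the more economical choice.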