- Optimizely Web Experimentation
- Optimizely Performance Edge
- Optimizely Feature Experimentation
- Optimizely Full Stack (Legacy)
After prioritizing a list of optimization ideas, your next step is to implement them. A basic experiment plan can help you scope and launch individual experiments. You estimate the cost of a test in terms of traffic and time and compare that to its potential value.
You should determine the following:
- Why you are running this experiment (hypothesis).
- When and where your variations run.
- Who you want to see this experiment.
- What changes your variation makes.
- How you measure success.
Use this article to design and implement individual ideas and communicate your plans to stakeholders. A basic experiment plan includes the following:
- Experiment hypothesis.
- Business goals.
- Variation descriptions (wireframes or screenshots).
- Summary of all technical and design assets needed for the experiment.
- Parameters for statistical significance and lift that indicate the change should be implemented permanently.
People and resources
- Program manager
- Developer
- QA team
- Designer
- Sample size calculator
Actions to perform
- Create a test plan document.
- Create a QA checklist.
- Review and update the plan with stakeholders.
- Confirm the scope of the test.
- Define primary, secondary, and monitoring goals.
- Confirm stakeholders who create the required resources.
- Document responsibilities and deadlines (in Kanban, Gantt chart, or other internal method).
- Finalize test plan.
Deliverables
- Test plan document containing the following:
- All details for building an experiment.
- Technical requirements.
- Scope of the experiment.
- Creative assets or wireframes.
- Screenshots of variations.
Watch out for
- Ill-defined scope.
- Lack of true hypothesis or goals.
- Lack of executive buy-in.
- Missing screenshots.
- Poor understanding of resource needs.
- Inaccurate effort estimates.
- Inadequate documentation for QA.
- Experiment plan not shared with the proper stakeholders.
- Lack of adherence to the experiment plan when building the test.
Create a basic experiment plan
Use this basic plan to manage the project and set expectations for external stakeholders. Establish the intentions of the proposed test for developers and designers who help you execute the plan. Provide all details needed to build and QA the experiment.
The plan also standardizes your testing practice and streamlines the approval process with stakeholders outside of your team. Create a list of QA visitor use cases or how a visitor arrives or navigates through the experiment, along with the expected result. Your QA team uses this list to evaluate the experiment before launch.
Create this plan as a shareable document that multiple stakeholders can reference. For example, you can create this document as a presentation slide, an email template, or a wiki page covering basic test information. If you have access to Collaboration, you can store your plan there. Strategic planners use this document to communicate the concept to the designers and developers responsible for implementing the experiment.
For maximum visibility, link your test plans to your prioritized list of ideas.
Define the primary metric, secondary metrics, and monitoring goals
When you create your experiment plan, decide how to measure success. In Optimizely Experimentation, the primary metric measures how your changes affect visitor behavior. Consider setting secondary metrics and monitoring goals to better understand customer behavior and ensure the lift you see sets your program up for success.
To learn more, see Primary metrics, secondary metrics, and monitoring goals.
Decide what type of experiment to run
The type of experiment depends on how you expect your changes to impact your primary conversion event.
The minimum detectable effect (MDE) can help you decide what type of test to run.
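To see why the MDE matters for scoping, here is an illustrative sketch of how required sample size grows as the MDE shrinks, using the standard two-proportion z-test approximation. This is not Optimizely's internal formula, and the function name is hypothetical; Optimizely provides its own sample size calculator.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variation(baseline_rate, mde_relative,
                              alpha=0.05, power=0.8):
    """Approximate visitors needed per variation for a two-sided
    two-proportion z-test (illustrative only)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde_relative)  # rate implied by the MDE
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return ceil(n)

# A 3% baseline conversion rate with a 10% relative MDE needs
# tens of thousands of visitors per variation; halving the MDE
# roughly quadruples the requirement.
n_10pct = sample_size_per_variation(0.03, 0.10)
n_5pct = sample_size_per_variation(0.03, 0.05)
```

A small MDE on a low-traffic page can make a single-change A/B test take months, which may push you toward testing a bolder change or a higher-traffic page instead.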
- A/B test – Run an A/B test when improvement in your primary goal can be attributed to a single change in your user experience.
- Multivariate test – Create a multivariate test to evaluate each combination of variables against others. This helps measure how multiple changes interact with each other and influence conversions.
- Multi-page test – Create a multi-page test to measure how changes affect visitors through each stage of the funnel. This helps measure success in conversions across a series of pages.
- A/B/n test – Test multiple versions of your page (A, B, and n more pages) without comparing all possible combinations of variations against each other. This test type is more economical, letting you attribute lift to certain changes without needing to run a full multivariate test.
- Multi-armed bandit optimization – Uses reinforcement learning to allocate more traffic to variations that perform well and less traffic to underperforming variations. MAB optimizations do not generate statistical significance; instead, the algorithm pushes traffic to the variations with the most conversions, without explaining why a variation performs well. These optimizations are best suited for maximizing conversions during short, temporary experiences such as headline testing or a holiday weekend sale. Do not use MAB optimizations for exploratory hypotheses or variation selection. See Maximize lift with multi-armed bandit optimizations.
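To make the bandit idea concrete, here is a minimal sketch of one common approach, Thompson sampling with Beta posteriors. This is an illustration of the general technique, not Optimizely's actual algorithm, and the function name is hypothetical.

```python
import random

def choose_variation(stats):
    """Thompson sampling: draw a plausible conversion rate from each
    variation's Beta posterior and serve the variation with the
    highest draw. stats maps name -> (conversions, non-conversions)."""
    draws = {
        name: random.betavariate(conversions + 1, failures + 1)
        for name, (conversions, failures) in stats.items()
    }
    return max(draws, key=draws.get)

# Simulated results so far: B converts better (30/200 vs. 10/200),
# so over many requests it receives far more traffic.
stats = {"A": (10, 190), "B": (30, 170)}
counts = {"A": 0, "B": 0}
for _ in range(1000):
    counts[choose_variation(stats)] += 1
```

Note how the sketch never tests a hypothesis: it simply exploits whichever variation looks best right now, which is why MAB optimizations suit short-lived conversion pushes rather than exploratory experiments.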