Relevant products:
- Optimizely Web Experimentation
- Optimizely Personalization
- Optimizely Performance Edge
- Optimizely Feature Experimentation
- Optimizely Full Stack (Legacy)
Use a prioritization framework to evaluate your ideas and decide which experiments and campaigns to run first. A basic prioritization framework applies consistent criteria to order your experiments and campaigns from first to last. This helps you manage your backlog and experiment cycles and ensures that your most impactful tests run first.
This basic template for prioritizing ideas (.xlsx) can help you get started. A concept called minimum detectable effect (MDE) can also help you prioritize tests based on the expected ROI.
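The smaller the effect you need to detect, the more traffic and time a test requires, which directly affects its expected ROI. As a minimal sketch of how MDE feeds prioritization, the code below estimates required sample size with the standard two-proportion normal approximation; the baseline conversion rate, MDE values, and function name are illustrative assumptions, not part of any Optimizely tooling.

```python
from math import ceil

def sample_size_per_variant(baseline_rate, relative_mde, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a relative lift.

    Uses the standard normal-approximation formula for comparing two
    proportions at 95% confidence (z_alpha) and 80% power (z_beta).
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_mde)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Detecting a 5% relative lift on a 4% baseline takes far more traffic
# than detecting a 20% lift -- a useful input when ranking test ideas.
for mde in (0.05, 0.10, 0.20):
    print(f"MDE {mde:.0%}: ~{sample_size_per_variant(0.04, mde):,} visitors per variant")
```

Ideas whose MDE demands more traffic than the target page receives in a reasonable test window are natural candidates to deprioritize.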
A full roadmap includes a scoring rubric and an execution timeline.
Materials to prepare
- List of test ideas, including:
  - Dependencies
  - Effort estimates
  - Impact estimates
  - Time to significance and available traffic
  - Likelihood of implementation
- Criteria for prioritization
- Resourcing constraints
- Insights from technical audit
People and resources
- Program manager
- Developer
- Designer
- Executive sponsor
Actions to perform
- Score impact versus effort.
- Use MDE to estimate impact versus effort.
- Add experiment tags or category labels.
- Rank experiments.
- Create a balanced approach that optimizes different goals.
- Schedule tests and campaigns in the roadmap.
- Document the prioritization process.
- Socialize an optimization culture at the company.
Deliverables
- A prioritized list of experiments and campaigns or an advanced roadmap
What to watch out for
- It can be difficult to quantify the impact of experiments.
- If you do not use a prioritization scheme, you may end up prioritizing ideas based on top-down trends in the company rather than expected impact.
- Without outlining dependencies in advance, you may slow down testing or be unable to run a test.
- A lack of documentation can slow down a team with a large roadmap.
- An unbalanced roadmap can over-index certain parts of the site and miss other opportunities.
- Roadmaps that are entirely agile or waterfall each present difficulties in planning.
Define your prioritization criteria
Evaluate ideas based on two factors: impact and effort. What counts as high impact or low effort depends on your company's business goals and your team's resources.
Impact – The metrics that measure the success of your optimization program, and how Optimizely Experimentation events directly influence those metrics
Effort – What is easy or difficult to do, and what resources are dedicated to testing, including any shared with or borrowed from other teams
Team members to consult:
- Program manager – Responsible for the framework and final scoring
- Developer – Responsible for estimating the effort
- Design team – Responsible for estimating the effort
- Executive sponsor – Responsible for reviewing and approving the prioritized list and providing strategic alignment
The criteria you use to prioritize your ideas depend on your program's goals and resources. For example, your team may have the technical resources to set up testing but be short on design resources for mockups. Or, if you have executive buy-in but find it difficult to get time with your developers, you may find that advanced test ideas are approved but slow to implement.
These factors are important to consider when deciding on the criteria for effort and impact.
Prioritize your list
Assign effort and impact scores to every optimization idea to obtain a more granular view of each experiment's relative ROI. This is known as a blended model: assign numerical values for impact and effort, then sum the two scores for each test and campaign to generate a single prioritization score.
You can use broad categories like high, medium, and low when evaluating impact and effort.
For example, give high-impact tests and campaigns high numerical scores, but give high-effort ideas low numerical scores. Summing the two scores puts the high-impact, low-effort ideas at the top of the list. You should run these tests and campaigns first.
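As a minimal sketch of that scoring logic, the category-to-number mapping and the sample ideas below are illustrative assumptions, not an Optimizely convention:

```python
# Blended model: impact and effort each map to a number, and their sum
# is the prioritization score. The effort scale is inverted so that
# low-effort ideas earn more points.
IMPACT_POINTS = {"high": 3, "medium": 2, "low": 1}
EFFORT_POINTS = {"high": 1, "medium": 2, "low": 3}

ideas = [
    {"name": "Simplify checkout form", "impact": "high", "effort": "low"},
    {"name": "Rebuild homepage hero", "impact": "high", "effort": "high"},
    {"name": "Tweak button copy", "impact": "low", "effort": "low"},
]

# Sum the two scores for each idea to get a single prioritization score.
for idea in ideas:
    idea["score"] = IMPACT_POINTS[idea["impact"]] + EFFORT_POINTS[idea["effort"]]

# High-impact, low-effort ideas sort to the top of the list.
for idea in sorted(ideas, key=lambda i: i["score"], reverse=True):
    print(idea["score"], idea["name"])
```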
You can further enforce consistency in prioritization by creating a detailed rubric. The effort scores should match your team's strengths. Adjust the impact scores based on business goals.
If you have a mature optimization program with dedicated developer resources, you may be able to focus on tests based on impact without needing to weigh effort.
You can also evaluate additional attributes. See the blog post on prioritization for more information.
Review your process
Once you have prioritized your ideas and run a few, review how well your process works for your team. Below are two questions to consider.
Should you use a backlog or roadmap?
When you prioritize your testing ideas, you can put them into a backlog or a full, prioritized roadmap. The first option is more flexible, but the second provides a stable cadence for organizing workflows.
A prioritized backlog is like a queue. When you finish one idea, go to the next. If you find an idea with great potential and want to focus your efforts there, you can.
With a full, prioritized roadmap, you commit to a timeline based on how long you think an experiment will run. Most mature programs prefer this approach, as it lets them coordinate stakeholders and schedule a complex workflow.
With a full roadmap, you plan your work in advance. You also build a regular cadence for incorporating insights and trends from completed tests and campaigns into a new round of testing. If you want to return to an idea, you can re-prioritize it and add a second iteration later.
Each method gives you insights from completed tests and campaigns to help you re-prioritize for the next test.
Are you over- or under-prioritizing?
As you learn the cadence of your optimization program’s work cycle, evaluate whether you are trying to do too much or not enough.
Prioritizing 25 ideas but only executing five may mean you are over-prioritizing. Teams that over-prioritize consistently fail to implement the ideas lower on the roadmap. Evaluate whether those low-priority hypotheses are worth prioritizing again. If a shorter list helps you align your roadmap with your team's cadence, consider keeping the list short.
If you feel like you do not have enough high-impact ideas, you may be under-prioritizing. Teams that under-prioritize often run out of ideas before the cycle ends and return to ideation. Consider focusing your efforts on generating more ideas for your backlog. An idea submission form and a business intelligence report can help you increase the number of high-quality hypotheses.
Automate idea submissions and ranking
Consider automating an idea submission process at your company. By automatically scoring the ideas submitted to your team, you can evaluate and prioritize them more easily.
Create an idea submission form that asks questions about the resources and skill sets required for a given idea. The responses populate a spreadsheet whose built-in formulas add or subtract points based on each answer. You can then sort the ideas by score.
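The same scoring can live in a short script instead of spreadsheet formulas. This sketch assumes a hypothetical CSV export of form responses with `title`, `impact`, `effort`, and `has_dependencies` columns; the column names, file name, and point values are all illustrative:

```python
import csv

# Illustrative point values; tune these to match your own rubric.
IMPACT_POINTS = {"high": 3, "medium": 2, "low": 1}
EFFORT_POINTS = {"high": -3, "medium": -2, "low": -1}

def score_submission(row):
    """Add or subtract points based on form responses."""
    score = IMPACT_POINTS[row["impact"]] + EFFORT_POINTS[row["effort"]]
    if row["has_dependencies"] == "yes":
        score -= 1  # dependencies add coordination overhead
    return score

# "idea_submissions.csv" is a hypothetical export of the submission form.
with open("idea_submissions.csv", newline="") as f:
    ideas = [dict(row, score=score_submission(row)) for row in csv.DictReader(f)]

# Print ideas highest-priority first, ready for review.
for idea in sorted(ideas, key=lambda i: i["score"], reverse=True):
    print(idea["score"], idea["title"])
```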
A formalized process of collecting and prioritizing ideas focuses your team on running strong tests and campaigns and pursuing wins that generate excitement for experimentation. Publishing your automated submission form company-wide can also help to increase awareness of prioritization and optimization goals.
By equipping your team with a prioritized list of tests and campaigns, you build a sustainable and effective testing program.