In this article, you will:
- Identify the right prioritization framework for your testing team and download a template
- Organize your list of test ideas from first-to-run to last
How do you decide what to optimize and when? To figure out which experiments and campaigns to run first and which to place into your backlog, use a prioritization framework to evaluate your ideas.
A basic prioritization framework uses consistent criteria to order the experiments and campaigns you will run, from first to last. Use it to manage your backlog and experiment cycles and to ensure that your most impactful tests run first.
- This basic template for prioritizing ideas (.xlsx) can help you get started. A concept called minimum detectable effect (MDE) can also help you prioritize tests based on the expected ROI (a quick sketch appears later in this article).
- A full roadmap includes a scoring rubric and an execution timeline.
Read on to learn about basic prioritization as well as how to automate your process.
What you need:
- A list of test ideas, including:
  - Dependencies
  - Effort estimates
  - Impact estimates
  - Time to significance and available traffic
  - Likelihood of implementation
- Criteria for prioritization
- Resourcing constraints
- Insights from a technical audit
Who's involved:
- Program manager (responsible for final scoring)
- Developer (responsible for estimating effort)
- Designer (responsible for estimating effort)
- Executive sponsor (reviews, approves, and provides strategic alignment)
Steps:
- Score impact versus effort
- Use MDE to estimate impact versus effort
- Add experiment tags or category labels
- Rank experiments
- Create a balanced approach that optimizes for different goals
- Schedule tests and campaigns in the roadmap
- Document the prioritization process
- Socialize an optimization culture at the company
Deliverable:
- A prioritized list of experiments and campaigns, or an advanced roadmap
Watch out for:
- It can be difficult to quantify the impact of experiments
- If you do not use a prioritization scheme, you may end up prioritizing ideas top-down, according to whichever opinions dominate in the company
- Without outlining dependencies in advance, you may slow down testing or be unable to run a test at all
- A lack of documentation can slow down a team with a large roadmap
- An unbalanced roadmap can over-index on certain parts of the site and leave other opportunities on the table; balance by UX theme, location, tactics, and goals pursued
- Roadmaps that are entirely agile or entirely waterfall each present difficulties in planning
This article is part of the Optimization Methodology series.
Define your prioritization criteria
We suggest you evaluate ideas based on two factors: impact and effort. What counts as high impact or low effort? This depends on your company's business goals and your team's access to resources.
Impact: What metrics will you use to measure the success of your optimization program? Which events in Optimizely Experimentation directly influence these metrics?
Effort: What is easy or difficult to do? Which resources are dedicated to testing and which are shared or borrowed from other teams?
Team members to consult:
- The program manager, who is responsible for the overall framework and final scoring
- The developer, who is responsible for estimating effort
- The design team, which is responsible for estimating effort
- The executive sponsor, who reviews and approves the prioritized list and provides strategic alignment
Ultimately, the criteria you use to prioritize your ideas will depend on your particular program’s goals and resources.
For example, your team may be technically savvy but low on design resources, so you set up tests easily but have trouble getting mockups. Or, if you have executive buy-in but find it difficult to get time with your developers, you may find that advanced test ideas are quickly greenlit but slow to be implemented.
These types of factors are important to consider when deciding on the criteria for effort and impact.
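The overview above mentions minimum detectable effect (MDE) as one way to tie impact estimates to expected ROI. As a rough illustration, here is a minimal Python sketch of the standard two-proportion approximation; the function name, parameters, and numbers are ours for illustration and are not part of any Optimizely API.

```python
from scipy.stats import norm

def minimum_detectable_effect(baseline_rate, visitors_per_variation,
                              alpha=0.05, power=0.80):
    """Approximate the smallest absolute lift a two-variation A/B test
    can reliably detect at the given significance level and power."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_power = norm.ppf(power)          # desired statistical power
    # Standard error of the difference between two proportions, assuming
    # both variations convert near the baseline rate.
    se = (2 * baseline_rate * (1 - baseline_rate) / visitors_per_variation) ** 0.5
    return (z_alpha + z_power) * se

# Example: a page converting at 4% with 10,000 visitors per variation.
mde = minimum_detectable_effect(0.04, 10_000)
print(f"Absolute MDE: {mde:.4f} ({mde / 0.04:.1%} relative lift)")
```

In this example, a test would need roughly a 19% relative lift to reach significance on the available traffic, making it a weaker candidate than a test on a higher-traffic page that can detect a 5% lift.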
Prioritize your list
Assign effort scores and impact scores to every optimization idea and prioritize accordingly.
You can use broad categories like high, medium, and low when evaluating impact and effort.
You should run high-impact, low-effort tests and campaigns first.
Or, you can assign numerical scores, which provide a more granular view of the relative ROI of each experiment. This is known as a blended model.
Assign numerical values for effort and impact. Then sum the impact and effort scores for each test and campaign to generate a single prioritization score that combines both sets of criteria.
In the example above, high-impact tests and campaigns receive high numerical scores, while high-effort ideas receive low numerical scores.
When you sum the two scores, high-impact, low-effort ideas rise to the top of the prioritized list. Run these tests and campaigns first.
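Here is a minimal sketch of the blended model described above; the three-point scale and idea names are invented for illustration.

```python
# Hypothetical three-point scale. Effort is inverted on purpose, so
# low-effort ideas receive high scores, just like high-impact ones.
IMPACT = {"low": 1, "medium": 2, "high": 3}
EFFORT = {"high": 1, "medium": 2, "low": 3}

ideas = [
    {"name": "Simplify checkout form", "impact": "high", "effort": "low"},
    {"name": "Rebuild search results page", "impact": "high", "effort": "high"},
    {"name": "Change CTA button copy", "impact": "low", "effort": "low"},
]

# Sum the two scores; high-impact, low-effort ideas rise to the top.
for idea in ideas:
    idea["score"] = IMPACT[idea["impact"]] + EFFORT[idea["effort"]]

for idea in sorted(ideas, key=lambda i: i["score"], reverse=True):
    print(idea["score"], idea["name"])
```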
You can further enforce consistency in your prioritization process by building out a detailed rubric. Customize the weights of your effort scores to the strengths of your team. Adjust the weights of your impact scores according to the goals that are most important to your business.
With a rubric, you can consistently and objectively prioritize all your tests and campaigns.
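Here is one way such a weighted rubric might look in code; the criteria, ratings, and weights are hypothetical and should reflect your own team's strengths and goals.

```python
# Hypothetical rubric: every criterion is rated 1-5, and the weights
# reflect one team's situation (design time is scarce, so it weighs more).
IMPACT_WEIGHTS = {"revenue_lift": 0.5, "traffic_to_page": 0.3, "strategic_fit": 0.2}
EFFORT_WEIGHTS = {"dev_hours": 0.4, "design_hours": 0.6}

def rubric_score(impact_ratings, effort_ratings):
    """Weighted impact minus weighted effort; higher scores run first."""
    impact = sum(IMPACT_WEIGHTS[k] * v for k, v in impact_ratings.items())
    effort = sum(EFFORT_WEIGHTS[k] * v for k, v in effort_ratings.items())
    return impact - effort

print(rubric_score(
    impact_ratings={"revenue_lift": 4, "traffic_to_page": 5, "strategic_fit": 3},
    effort_ratings={"dev_hours": 2, "design_hours": 1},
))  # 2.7: high impact and cheap to build, so it runs early
```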
If you have a relatively mature optimization program with dedicated developer resources, you may be able to focus solely on tests based on impact, without needing to weigh effort. For inspiration, check out Hotwire’s binary scoring matrix to learn how they run over 120 tests a year.
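A binary matrix reduces each criterion to a yes/no question, which keeps scoring fast and objective at high testing volume. The sketch below shows the general idea with invented criteria; it is not Hotwire's actual matrix.

```python
# Invented yes/no criteria; a real matrix would use your own questions.
CRITERIA = [
    "touches a high-traffic page",
    "backed by analytics or user research",
    "requires no backend changes",
    "aligned with a quarterly goal",
]

def binary_score(answers):
    """answers maps each criterion to True or False; the score is
    simply the number of boxes the idea ticks."""
    return sum(bool(answers[c]) for c in CRITERIA)

idea_answers = {
    "touches a high-traffic page": True,
    "backed by analytics or user research": True,
    "requires no backend changes": False,
    "aligned with a quarterly goal": True,
}
print(binary_score(idea_answers))  # 3 of 4 criteria met
```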
Review your process
Once you have prioritized your ideas and run a few, set time aside to review how well your process works for your team. Below are two questions to consider.
Should you use a backlog or roadmap?
When you prioritize your testing ideas, you can put them into a backlog or a full, prioritized roadmap. The first option offers more flexibility, but the second provides a stable cadence for organizing a complex workflow.
A prioritized backlog is just a queue. Once you're done with one idea, you reach for the next. If you chance upon an idea with great potential and would like to focus your efforts there for a while, you can.
With a full, prioritized roadmap, you commit to a timeline based on how long you think an experiment will run. Most mature programs prefer this approach, as it allows them to coordinate stakeholders and schedule a complex workflow.
With a full roadmap, you plan more of your work in advance. You also build a regular cadence for incorporating insights and trends from completed tests and campaigns into a new round of testing. If you would like to return to an idea, you can re-prioritize it and add a second iteration further down the line.
No matter which method you choose—roadmap or backlog—insights from completed tests and campaigns will help you re-prioritize in the next round.
Are you over- or under-prioritizing?
As you get to know the cadence of your optimization program’s work cycle, evaluate whether you are over-prioritizing (putting too many ducks in a row) or under-prioritizing your list.
Do you prioritize 25 ideas but execute just four or five before the next planning phase begins? You are probably over-prioritizing. Teams that over-prioritize consistently fail to implement ideas lower in the prioritized roadmap. Evaluate whether low-priority hypotheses are worth prioritizing again and again. If fewer ideas would help you align your roadmap to your team's cadence, consider capping the list at a lower number.
Do you feel you never have enough high-impact ideas? Teams that under-prioritize often run out of ideas before the end of the cycle and find themselves returning to the ideation phase. If this is the case, consider focusing your efforts on generating more ideas for your backlog. An idea submission form and a business intelligence report can help you increase the number of high-quality hypotheses.
Automate idea submissions and ranking
Consider automating an idea submission process at your company. By automatically scoring the ideas submitted to your team, you can evaluate and prioritize them more easily.
Create an idea submission form that asks questions about the resources and skill sets required for a given idea. The responses populate a spreadsheet with built-in formulas that add or subtract points based on each answer. Voila! As soon as an idea is submitted, a score is generated according to your prioritization framework, and your ideas can be automatically sorted by those scores.
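The sketch below shows, in Python rather than spreadsheet formulas, the kind of scoring such a form might drive; the form fields and point values are invented for illustration.

```python
# Invented form fields and point values; tune them to your own rubric.
def score_submission(form):
    """Score a submission the way a spreadsheet formula might: add points
    for impact signals, subtract points for effort signals."""
    score = {"high": 3, "medium": 2, "low": 1}[form["expected_impact"]]
    score += 2 if form["monthly_page_traffic"] > 50_000 else 0
    score -= 2 if form["needs_developer"] else 0
    score -= 1 if form["needs_design"] else 0
    return score

submissions = [
    {"idea": "New homepage hero", "expected_impact": "high",
     "monthly_page_traffic": 120_000, "needs_developer": False, "needs_design": True},
    {"idea": "Custom pricing widget", "expected_impact": "medium",
     "monthly_page_traffic": 8_000, "needs_developer": True, "needs_design": True},
]

# Highest-scoring ideas surface first, ready for review.
for s in sorted(submissions, key=score_submission, reverse=True):
    print(score_submission(s), s["idea"])
```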
A formalized process of collecting and prioritizing ideas focuses your team on running strong tests and campaigns and pursuing huge wins that generate excitement for experimentation. Publishing your automated submission form company-wide can also help to democratize prioritization and spread awareness about optimization goals.
By equipping your team with a mechanism for outputting a well-prioritized list of tests and campaigns, you take a critical step towards building a sustainable and effective testing program.