- Optimizely Web Experimentation
- Optimizely Performance Edge
- Optimizely Feature Experimentation
- Optimizely Full Stack (Legacy)
Some optimization programs test small site changes but pause testing during a redesign to avoid testing content that is about to change. However, pausing means missing valuable opportunities to test, learn, and iterate. Testing through a redesign is an opportunity to build the best experience for higher conversions.
For examples of how experienced testing organizations test through their redesigns, see these case studies:
- How to Redesign Your Redesign: an interview with Jeff Blettner at Formstack
- Spreadshirt: How A Data-Driven Site Redesign Lifted Clicks 606% and Purchases 11%
- Soccerloco: an ecommerce redesign that literally paid for itself
Aim for big changes
Testing during a redesign lets you test big changes instead of many small changes. By thinking and testing big changes, you see statistically significant results sooner, and you are more likely to find optimal experiences. Site-wide redesigns are often the best time to test global, big-picture changes, such as different menu bars or product categories.
Making bigger changes helps you explore potentially winning possibilities, rather than just refine a decision already made.
This is not an easy change for some businesses. However, businesses that continually refine and tweak usually see diminishing returns on their optimization program compared to those who explore.
Use data-driven redesigns
Many businesses complete site redesigns based on opinions from executives, design firms, and other internal stakeholders. Because those opinions do not come from users, relying on them can hinder the user experience and cost conversions.
As you redesign, get qualitative feedback from your users through user tests, focus groups, and beta testing. Pair the qualitative feedback with quantitative data generated through A/B tests. These techniques show the real issues affecting customers and which changes will address those issues.
For each test you run, the qualitative feedback that you collect should drive the hypotheses that you test. Using feedback-based hypotheses helps you explore ideas systematically and then develop the solutions that work.
Test component by component
Companies often wonder whether to roll out two dramatically different experiences and test them against each other, or to test component-by-component changes until they arrive at a new experience.
Testing entire site designs at once helps you explore different concepts, but visitor bias can affect this approach. Visitors may be uncomfortable with and react negatively to a fully different experience.
Testing through a redesign lets your design team apply human insights, creativity, and knowledge about user experience to the problems shown in your data. You can then use insights from testing to learn more about what your customers do and do not like and form a hypothesis to test.
Like user testing, A/B testing lets you explore different concepts and gather real data on their impact.
Testing big changes in phases rather than all at once can help isolate and minimize the impact of each change. A more advanced testing program may then test combinations of these changes with a multivariate experiment to see the interaction effects. Finally, you can A/B test entirely different experiences against one another to see the total effect. With the data on each component, you can use the insights for your redesign.
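The reason multivariate testing comes later in a program is that the number of combinations grows multiplicatively with each component you add. A minimal sketch (the component names are hypothetical examples, not part of any Optimizely API):

```javascript
// Sketch: a multivariate test pairs every variation of each component,
// so two headline variants crossed with two hero images yields four
// combinations that each need enough traffic to reach significance.
function combinations(components) {
  return components.reduce(
    (combos, variants) => combos.flatMap((c) => variants.map((v) => [...c, v])),
    [[]]
  );
}

const combos = combinations([
  ["headline A", "headline B"],
  ["hero image 1", "hero image 2"],
]);
// combos.length === 4
```

This is why isolating each change with simple A/B tests first, then testing combinations, keeps traffic requirements manageable.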
In Optimizely Experimentation, your redesign process might follow this pattern:
- Prioritize the areas of your site that should be redesigned, based on the areas where you see the most user feedback or friction. Remember, not everything needs to be redesigned.
- Start with simpler A/B tests on components of your site. If you are working with the global navigation or other changes that affect multiple pages, use substring match URL targeting in Optimizely Web Experimentation to apply the change across multiple pages.
- Move to multivariate tests to analyze combinations of changes or multi-page tests to analyze the effect of different changes along a multi-page flow or funnel.
- Run redirect experiments that show completely different experiences to visitors.
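The substring match mentioned above is configured in the Optimizely Web Experimentation UI rather than in code, but the matching idea itself is simple: the experiment activates on any page whose URL contains the fragment. A minimal sketch of that idea, with a hypothetical `/products/` fragment:

```javascript
// Sketch of the idea behind substring-match URL targeting: activate on
// any page whose URL contains the target fragment anywhere in the string.
function matchesSubstringTarget(url, fragment) {
  return url.indexOf(fragment) !== -1;
}

const target = "/products/";
const onProductPage = matchesSubstringTarget("https://example.com/products/shoes", target); // true
const onCartPage = matchesSubstringTarget("https://example.com/cart", target); // false
```

Because the match is a plain substring check, one targeting rule can cover an entire section of the site, which is what makes it a good fit for global changes such as navigation.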
Set goals for interactions and programs
When you test components of your redesign, you have two types of goals:
- Interaction-Level Goals – Identify whether the component provides a better user experience (UX).
- Program-Level Goals – Identify whether the component, paired with the other components, provides overall value to your program-level business metrics.
These should be in alignment. For example, if your goal is to increase video plays, which leads to higher ad revenue, then testing a layout that yields more video-play conversions is ideal.
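In Optimizely Web Experimentation, both levels of goal can be tracked as events pushed to the snippet's API. A hedged sketch, assuming the video-play example above (the event names are hypothetical; in the browser, the Optimizely snippet defines `window["optimizely"]`, so the fallback array here is only so the sketch runs outside a page):

```javascript
// Outside a browser there is no window object, so fall back to a plain
// array; in a real page the Optimizely snippet consumes these pushes.
var optimizelyQueue =
  (typeof window !== "undefined" && window["optimizely"]) || [];

// Interaction-level goal: did the visitor engage with the component?
optimizelyQueue.push({ type: "event", eventName: "video_play" });

// Program-level goal: did the session produce business value?
// Optimizely's convention is to report revenue in cents.
optimizelyQueue.push({
  type: "event",
  eventName: "purchase",
  tags: { revenue: 4999 },
});
```

Tracking both events in the same experiment is what lets you check that an interaction-level win actually moves the program-level metric.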
For another example, suppose your goal is to increase average order value (AOV) and, in turn, revenue per visitor (RPV). You experiment with a site design that makes it easier to select a product and check out, but the variation increases checkout button clicks without increasing the AOV of those orders.
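Hypothetical numbers can make that distinction concrete: RPV is conversion rate times AOV, so a variation that converts more visitors at smaller basket sizes can leave the program-level metric flat.

```javascript
// RPV = (orders / visitors) * AOV, i.e. conversion rate times
// average order value. All figures below are made-up illustrations.
function revenuePerVisitor(visitors, orders, aov) {
  return (orders / visitors) * aov;
}

// Original experience: 2% of 10,000 visitors order, at a $50 AOV.
const originalRpv = revenuePerVisitor(10000, 200, 50); // $1.00 per visitor

// Variation: easier checkout lifts orders to 250, but baskets shrink
// to a $40 AOV, so the program-level metric does not move.
const variationRpv = revenuePerVisitor(10000, 250, 40); // still $1.00
```

The interaction-level metric (more completed checkouts) improved, yet the program-level metric (RPV) stayed the same, which is exactly the kind of result that only shows up when you measure both.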
These insights, which may be unintuitive, show which changes are best for your site. In areas where you provide a better or more efficient UX but at the expense of your program goals, you notice the difference between local-maximum decisions and global-maximum decisions.
Setting a combination of interaction-level and program-level goals helps you learn more from each piece of the redesign and keeps the redesign in alignment with your business goals. Your business is less likely to discover these insights by testing incremental changes after a redesign than during a redesign.