- Incorporate Optimizely Feature Experimentation into your product development process
- Base product decisions on data generated from the experimentation process
- Create, test, and roll out features with Optimizely Feature Management
For product teams, one of the pitfalls of the feature development lifecycle is the risk of investing time, effort, and money in building features that your customers will not use. A new feature may not deliver actual business value or be what your customers need. Integrating experimentation into your product development process gives you several advantages when building and iterating on new features:
- Use real-world data to discover how your customers engage with new features.
- Deliver functional prototypes of new features early to learn about their potential value.
- Roll out new features gradually to mitigate risk.
- Optimize for growth and performance through conversion rate optimization and growth experimentation techniques.
You can use Optimizely Feature Experimentation's Feature Management capabilities to drive adoption, engagement, and conversion for new features at any stage of the product development process.
Painted door tests are commonly used to validate demand and gather data before committing resources to a specific project. Companies that integrate experimentation more tightly with product development go further, creating minimum viable product experiments and running feature rollouts. They use experimentation to build data and fine-grained controls into the entire product development process.
This article gives you an overview of how to use Optimizely Feature Experimentation's Feature Management capabilities to bring a culture of experimentation to your company’s product development process.
Discover what your customers want
If you are just getting started with integrating experimentation into your product development lifecycle, the discovery phase is a natural place to begin. What do your users want? Many development teams think they know the answer, but those answers are not always based on data.
One way to find out is to see which new feature offerings your users will actually interact with, using a painted door test. Instead of building out an entire feature, you simply create the suggestion of one. This shows you how many users might want to use the feature before you invest significant resources in its development. Painted door tests are also useful for assessing discoverability.
In most cases, you build a painted door test in Optimizely Web Experimentation and not Optimizely Feature Experimentation. However, the test itself would usually be intended to validate investment in a higher-cost, higher-fidelity Optimizely Feature Experimentation feature.
For example, say Optimizely believed some customers might be interested in an automated personalization feature. We could run a painted door test that adds a menu entry for the feature and compares different approaches to naming and describing it. When a user clicks one of those menu entries, they see a message about the feature rather than the finished functionality.
In the process, Optimizely learns several important things:
- Is there enough interest in the feature to justify its development?
- Which customers have use cases that this feature can address?
- Which approach to naming and description drives more engagement?
Painted door tests are a great, low-investment way to gather data from users and validate the features you decide to build.
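How you measure a painted door test is straightforward: fire a custom event when someone clicks the placeholder, and compare that engagement across your variations. Here is a minimal sketch of what that click tracking could look like with the Optimizely Web Experimentation JavaScript API; the element selector and event name are hypothetical and would need to match your own page and your project's event setup.

```ts
// Sketch: fire a custom event when someone clicks the painted-door menu item,
// using the Optimizely Web Experimentation JavaScript API.
// The element selector and event name are hypothetical; the event would need
// to be created in your Web Experimentation project.
const paintedDoorLink = document.querySelector('#automated-personalization-link');

paintedDoorLink?.addEventListener('click', () => {
  const win = window as any;
  // The Web Experimentation snippet exposes a push-based API on window.optimizely.
  win.optimizely = win.optimizely || [];
  win.optimizely.push({
    type: 'event',
    eventName: 'clicked_personalization_painted_door',
  });
});
```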
Design features and test them
Sometimes, you may want to deploy several quick and efficient experiments to evaluate a feature idea without allocating scarce resources to develop a comprehensive version of it in advance. A minimum viable product test, in which you build out a basic, bare-bones version of a feature you are considering, is a good way to do this. The MVP version of the feature would be functional, but only just. Conceptually, it is similar to a feature rollout in Optimizely Feature Experimentation; the difference is that in an MVP test, the feature is not expected to be finished.
For example, a media company might consider a feature that lets readers easily share the site’s content with their own contacts. With an MVP test, they can start by building it out for whichever platform would be easiest, whether that is mobile, web-only, or even email. The idea is to engage users with the new feature in a way that validates the idea that people will use it but also delivers insights on how they are most likely to use it and how often.
Build features
At this point, you have collected data on your users' needs and preferences, and you have a design in mind. The next step is to build your feature in Optimizely Feature Experimentation. That means setting up feature flags that let you roll out a new feature—or switch it off if something goes wrong—at the exact moment you are ready, as well as a set of variables that define how the feature looks and acts.
For more detailed information on feature flags, see our Developer Docs.
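To give a rough idea of what this looks like in code, the sketch below checks a flag and reads its variables with the Optimizely Feature Experimentation JavaScript SDK's Decide API. The SDK key, flag key, and variable key are hypothetical placeholders for values from your own project.

```ts
import { createInstance } from '@optimizely/optimizely-sdk';

// Hypothetical SDK key; use the one from your own Feature Experimentation project.
const optimizelyClient = createInstance({ sdkKey: 'YOUR_SDK_KEY' });

function getSharingConfig(userId: string) {
  // Decisions are made per user, so create a user context first.
  const user = optimizelyClient?.createUserContext(userId);
  const decision = user?.decide('content_sharing'); // hypothetical flag key

  if (!decision?.enabled) {
    // The flag doubles as a kill switch: turn it off and users get the old experience.
    return { enabled: false };
  }

  // Variables define how the feature looks and acts, and can change without a deploy.
  return {
    enabled: true,
    buttonText: decision.variables['button_text'], // hypothetical variable key
  };
}
```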
Run experiments on new features
But what if you have not settled on one design for your new feature? Maybe your data did not point to a single likely solution to your users' problems. Maybe you would like to run an experiment that pits two or three potential designs against each other, and then roll out the winner.
This is where feature tests come in. They are similar to A/B tests in Optimizely Web Experimentation but include a few components specific to Optimizely Feature Experimentation. Use them to experiment with different versions of your new feature to see which one performs best, or to measure the new feature's performance against the current experience. You get a better understanding of the feature's potential impact on your key metrics without having to deploy any new code. You can also use feature tests to iterate on failed features until they meet your organization’s standards for launch.
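From the code's perspective, a feature test uses the same decision call as a plain flag check; the SDK assigns each user to a variation, and you track the events that feed the experiment's metrics. Here is a minimal sketch along those lines, again with hypothetical flag, variation, and event keys.

```ts
import { createInstance } from '@optimizely/optimizely-sdk';

const optimizelyClient = createInstance({ sdkKey: 'YOUR_SDK_KEY' }); // hypothetical key

function handleShareClick(userId: string) {
  const user = optimizelyClient?.createUserContext(userId);
  if (!user) {
    return;
  }

  // The SDK buckets the user into one of the variations defined for the feature test.
  const decision = user.decide('content_sharing'); // hypothetical flag key

  if (decision.enabled && decision.variationKey === 'inline_share_bar') {
    // ...render the variation under test (hypothetical variation key)...
  }

  // Track the conversion event that the feature test uses as a metric.
  user.trackEvent('shared_article'); // hypothetical event key
}
```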
Launch and roll out new features
Once your feature is built and you have selected the variation you want to use, it is time to roll it out to your customers.
If you are concerned that your new feature might actually degrade the visitor experience instead of improving it, you can start small by launching the feature to only a small percentage of visitors, and then observe the results. Feature rollouts in Optimizely Feature Experimentation use feature targeting to control which users access new features and when that happens.
Use targeted rollouts to provide beta access to new features or to experiment with features internally before a public rollout. A slower, controlled rollout mitigates risk by limiting the impact of any newly surfaced bugs that may have slipped through an internal QA process.
You can also use a rollout to launch winning variations that you identify through experimentation, gradually exposing the winning feature variation to your visitors in a controlled fashion. You can set feature visibility based on environment or on user characteristics such as subscription tier, authentication state, region, or language, and manage the process on your own terms.
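In your application, a targeted rollout is still just a flag decision. The targeting rules you configure in Optimizely are evaluated against the attributes you pass in with the user, as in this sketch, where the attribute names are hypothetical examples that would need to match the audience conditions on your rollout.

```ts
import { createInstance } from '@optimizely/optimizely-sdk';

const optimizelyClient = createInstance({ sdkKey: 'YOUR_SDK_KEY' }); // hypothetical key

function isSharingEnabled(userId: string): boolean {
  // Attributes are evaluated against the audience conditions on the targeted rollout,
  // for example "beta users only" or "premium subscribers in a given region".
  const user = optimizelyClient?.createUserContext(userId, {
    subscription_tier: 'premium', // hypothetical attribute
    is_beta_user: true,           // hypothetical attribute
    region: 'EMEA',               // hypothetical attribute
  });

  const decision = user?.decide('content_sharing'); // hypothetical flag key
  return decision?.enabled ?? false;
}
```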
Iterate and improve
Now, you have rolled out your new feature to your entire customer base. And so far, they seem to like it. Adoption is high and engagement is up. Congratulations!
But that does not mean you are done. Optimizing for greater and greater success is a never-ending process. For example, after your feature launches, you might notice something unexpected about the way your customers use it. Maybe you want to iterate on your new feature, so you can improve conversions or engagement or whichever metric you use to gauge success, and roll those changes out quickly—without waiting for the next code deploy.
Use feature configurations to make changes and create variations through Optimizely without deploying code. Usually, this involves running feature tests to find the optimal combination of variable values, setting those values as your default feature configuration, and then launching with a rollout.
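On the code side, nothing new is required: your application keeps reading the same variables it already uses, and updated defaults reach users the next time the SDK refreshes its configuration. The sketch below, with hypothetical flag and variable keys, shows the kind of read-only access that makes this possible.

```ts
import { createInstance } from '@optimizely/optimizely-sdk';

const optimizelyClient = createInstance({ sdkKey: 'YOUR_SDK_KEY' }); // hypothetical key

function getSharingSettings(userId: string) {
  const user = optimizelyClient?.createUserContext(userId);
  const decision = user?.decide('content_sharing'); // hypothetical flag key

  // These values come from the flag's feature configuration in Optimizely, so updating
  // the defaults there changes behavior for every user without a code deploy.
  return {
    enabled: decision?.enabled ?? false,
    buttonText: decision?.variables['button_text'] ?? 'Share',  // hypothetical variable key
    maxRecipients: decision?.variables['max_recipients'] ?? 5,  // hypothetical variable key
  };
}
```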
See our Developer Docs on the subject for more in-depth information.