Optimizely Feature Experimentation
Follow this article to receive email notifications when new Optimizely Feature Experimentation content is added for 2026.
March 30
- Added a callout that displays when an audience is in use in one or more rules. The callout indicates how many rules are associated with the audience and warns that updating the audience may change user targeting and influence experiment results. See the Edit audience section in the Target audiences in Feature Experimentation documentation for information.
March 13
- Released Global Holdouts, which lets you designate a small percentage of users as a control group, ensuring they do not experience any experiments or new feature rollouts. This lets you directly compare outcomes between users who experience variations identified as winners during A/B tests and those who only see the default "off" variation for feature flags in your project.
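Conceptually, a global holdout can be thought of as deterministic, hash-based bucketing: each user ID maps to a stable bucket, and users in the bottom few percent never enter any experiment and only see the default "off" variation. The following is a minimal illustrative sketch of that idea, not Optimizely's actual implementation; the function names and the 5% holdout size are assumptions.

```python
import hashlib

HOLDOUT_PERCENT = 5  # assumed holdout size; a real setup would make this configurable

def bucket(user_id: str) -> int:
    """Map a user ID deterministically to a bucket in [0, 100)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def in_global_holdout(user_id: str) -> bool:
    """Users in the bottom HOLDOUT_PERCENT of buckets are excluded from
    all experiments and rollouts, forming the control group."""
    return bucket(user_id) < HOLDOUT_PERCENT
```

Because the bucketing is a pure function of the user ID, the same user lands in the same group on every session, which is what makes a clean before/after comparison against experiment winners possible.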
January 22
- Added the Experimentation Program Overview agent to the Opal Agent Directory. The agent creates a report on how your Optimizely Experimentation program is performing, using top experiment and win rate data.
- Released the following system tools in Optimizely Opal to help you with your Feature Experimentation projects. See System tools for Feature Experimentation for the full list of available Opal tools, including when to use them, variables, and example prompts.
  - `exp_get_schemas` – Retrieves detailed schemas for various Optimizely entities relevant to Feature Experimentation.
  - `exp_execute_query` – Executes a template-based query to fetch specific data from your Optimizely Feature Experimentation instance. It lets you retrieve detailed information about entities such as feature flags, rules, environments, attributes, and events, based on your specified criteria.
  - `exp_program_reporting_top_experiments` – Retrieves experiments that have generated the highest positive or negative lift values within a specified timeframe. It focuses on the performance of individual tests, letting you see which variations of your features are performing best (or worst) against your chosen metrics.
  - `exp_program_reporting_underperforming_experiments` – Helps you identify A/B tests that are not yielding significant or positive results, letting you quickly decide whether to stop, re-evaluate, or iterate on them.
  - `exp_program_reporting_win_rate` – Gives you a high-level overview of the success of your experimentation program by calculating the win rate over a specified period. It is a key metric for understanding the efficiency and impact of your product development cycle.
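As a rough illustration of the metric the win-rate tool reports, one common definition is the share of concluded experiments whose winning variation beat the baseline. The sketch below uses that assumed definition; the field names (`status`, `outcome`) are hypothetical and do not reflect the actual tool's output schema.

```python
def win_rate(experiments: list[dict]) -> float:
    """Fraction of concluded experiments that produced a winning variation.

    `experiments` items use hypothetical keys: "status" ("concluded" or
    "running") and "outcome" ("win", "loss", "inconclusive", or None).
    """
    concluded = [e for e in experiments if e["status"] == "concluded"]
    if not concluded:
        return 0.0
    wins = sum(1 for e in concluded if e["outcome"] == "win")
    return wins / len(concluded)

results = [
    {"status": "concluded", "outcome": "win"},
    {"status": "concluded", "outcome": "inconclusive"},
    {"status": "concluded", "outcome": "loss"},
    {"status": "running", "outcome": None},  # still running, excluded
]
print(win_rate(results))  # 1 win out of 3 concluded experiments
```

Tracking this ratio over time is what gives the high-level view of program efficiency described above.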