Optimizely Feature Experimentation
Follow this article to receive email notifications when new Optimizely Feature Experimentation content is added for 2026.
May 5
- Added Owner and Teams fields to flags in Feature Experimentation. The Owner field defaults to the flag creator, and you can edit it from the flag's Settings page. The Teams field supports multiple teams per flag and displays as a column on the Flags list. See Manage flags for information.
April 29
- Released the Optimizely Experimentation MCP server, which connects AI clients to your Experimentation data using the Model Context Protocol (MCP) open standard. Supported clients include Claude Desktop, Claude Code, Claude.ai, Cursor, Visual Studio Code with GitHub Copilot, and Windsurf. Authenticate with OAuth 2.0 through Opti ID and interact with your experimentation data through natural language.
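  Most clients connect through their own MCP settings, but if you are scripting a connection yourself, the open-source MCP TypeScript SDK can talk to any MCP server. Below is a minimal sketch, assuming a streamable HTTP transport; the server URL is a placeholder and the OAuth 2.0/Opti ID handshake is omitted:

  ```ts
  import { Client } from "@modelcontextprotocol/sdk/client/index.js";
  import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

  async function main() {
    // Placeholder endpoint -- use the server URL from Optimizely's MCP setup docs.
    const transport = new StreamableHTTPClientTransport(
      new URL("https://example.com/optimizely-mcp"),
    );

    // The client name and version identify this caller; the values are arbitrary.
    const client = new Client({ name: "example-client", version: "1.0.0" });

    // A production connection would also wire up an OAuth 2.0 auth provider
    // (Opti ID); that configuration is omitted from this sketch.
    await client.connect(transport);

    // Discover the experimentation tools the server exposes.
    const { tools } = await client.listTools();
    console.log(tools.map((tool) => tool.name));

    await client.close();
  }

  main().catch(console.error);
  ```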
April 28
- Released Contextual multi-armed bandits (CMABs). CMABs use Optimizely Opal to dynamically reallocate traffic to the best-performing variation for each user segment based on contextual attributes. CMABs continuously balance exploration and exploitation as response data accumulates, so personalized experiences reach users faster than with traditional A/B testing. (A brief SDK sketch follows this list.)
- Added the ability to copy a rule to another environment within the same flag. Copied rules arrive in Draft status in the destination environment, so you can review and edit them before saving. Use this to replicate experiment configurations across environments without manual recreation. See Manage rules for information.
- Released Custom Fields, which let you attach structured metadata to flags. Choose from five field types (Label, Text, Numerical, Link, or Boolean) and mark each field as required or optional. Display custom fields as columns on the Flags Dashboard to filter, sort, and group flags, which improves discoverability and keeps metadata consistent across teams.
- Released Flag Statuses, which automatically assign one of six lifecycle statuses to each flag: Draft, Testing, Live, Paused, Stale, or Archived. Statuses are calculated from a flag’s ruleset state, environment, and recent activity. Use flag statuses to surface flags that need attention and prevent unused flags from accumulating in your project.
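Because a CMAB keys its predictions off contextual attributes, the attributes you attach to the user context at decision time are what drive variation assignment. Below is a minimal sketch using the Feature Experimentation JavaScript SDK's standard decision flow; the SDK key, flag key, and attribute names are hypothetical placeholders:

```ts
import { createInstance } from "@optimizely/optimizely-sdk";

// Placeholder SDK key -- substitute your environment's key.
const optimizely = createInstance({ sdkKey: "<YOUR_SDK_KEY>" });

optimizely?.onReady().then(() => {
  // The attributes passed here are the "context" a CMAB can learn from.
  // The attribute names and values below are hypothetical.
  const user = optimizely?.createUserContext("user-123", {
    plan: "enterprise",
    region: "emea",
  });

  // decide() returns the variation selected for this user's context.
  const decision = user?.decide("checkout_redesign"); // hypothetical flag key
  console.log(decision?.variationKey, decision?.enabled);
});
```

The decision call itself is unchanged from an ordinary flag decision; the reallocation logic runs on Optimizely's side.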
March 30
- Added a callout that displays when an audience is in use in one or more rules. The callout indicates how many rules are associated with the audience and warns that updating the audience may change user targeting and influence experiment results. See the Edit audience section in the Target audiences in Feature Experimentation documentation for information.
March 13
- Released Global Holdouts, which let you designate a small percentage of users as a control group, ensuring they do not experience any experiments or new feature rollouts. This lets you directly compare outcomes between users who experience variations identified as winners during A/B tests and those who only see the default "off" variation for feature flags in your project.
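The mechanics resemble ordinary percentage-based bucketing. The snippet below is a generic illustration of how a deterministic holdout bucket can be derived from a user ID; it is not Optimizely's actual implementation:

```ts
// Generic illustration of percentage-based holdout bucketing -- not
// Optimizely's implementation. Hash the user ID into 10,000 buckets and
// hold out users whose bucket falls below the configured percentage.
function isInGlobalHoldout(userId: string, holdoutPercent: number): boolean {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple deterministic hash
  }
  const bucket = hash % 10_000;          // 0..9999
  return bucket < holdoutPercent * 100;  // e.g. 5% -> buckets 0..499
}

// The same user always lands in the same bucket, so holdout membership
// is stable across sessions.
console.log(isInGlobalHoldout("user-123", 5));
```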
January 22
- Added the Experimentation Program Overview agent to the Opal Agent Directory. The agent creates a report on how your Optimizely Experimentation program is performing, using top-experiment and win-rate data.
- Released the following system tools in Optimizely Opal to help you with your Feature Experimentation projects. See System tools for Feature Experimentation for the full list of available Opal tools, including when to use them, variables, and example prompts.
  - exp_get_schemas – Retrieves detailed schemas for various Optimizely entities relevant to Feature Experimentation.
  - exp_execute_query – Executes a template-based query to fetch specific data from your Optimizely Feature Experimentation instance. It lets you retrieve detailed information about various entities such as feature flags, rules, environments, attributes, and events, based on your specified criteria.
  - exp_program_reporting_top_experiments – Retrieves experiments that have generated the highest positive or negative lift values within a specified timeframe. It focuses on the performance of individual tests, letting you see which variations of your features are performing best (or worst) against your chosen metrics.
  - exp_program_reporting_underperforming_experiments – Helps you identify A/B tests that are not yielding significant or positive results, letting you quickly decide whether to stop, re-evaluate, or iterate on them.
  - exp_program_reporting_win_rate – Gives you a high-level overview of the success of your experimentation program by calculating the win rate over a specified period. It is a key metric for understanding the efficiency and impact of your product development cycle.