Optimizely Web Experimentation
Follow this article to receive email notifications when new Optimizely Web Experimentation content is added for 2025.
December
- Improved the AI variation development agent: you can now describe the change you want Opal to make directly in the Element Change window, making the agent easier to access and your elements faster to update.
October
- Released the AI variation development agent, which helps you modify and update existing website elements, create new ones, and generate and apply enhancement suggestions, while maintaining brand consistency by automatically retrieving page styles.
- Added the Experiment Review Agent for Web Experimentation, which lets you use Optimizely Opal to review your experiment configuration and recommend changes to maximize your odds of reaching statistical significance.
September
- Released the new version of the Visual Editor that uses an overlay instead of an iframe, letting you directly interact with the site itself and adjust the visitor experience for your experiment.
- Added the new Optimizely Reporting Metric Impact Report dashboard for Experimentation, which aggregates data on the impact of your metrics.
- Added the sample size calculator to the Web Experimentation platform. This lets you estimate test duration, providing a more comprehensive view of experiment planning and eliminating the need for manual calculations of visitors per variation.
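The math behind such a calculator can be sketched with a standard two-proportion power analysis. This is the generic normal-approximation formula, not necessarily the exact formula Optimizely's calculator uses, and the baseline rate and minimum detectable effect (MDE) values below are illustrative:

```python
import math
from statistics import NormalDist

def sample_size_per_variation(baseline_rate, mde_relative,
                              alpha=0.05, power=0.8):
    """Visitors needed per variation for a two-sided two-proportion
    z-test, using the normal approximation."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde_relative)  # expected rate after lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. ~1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. ~0.84 for power=0.8
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# Illustrative: 5% baseline conversion rate, 10% relative MDE.
needed = sample_size_per_variation(0.05, 0.10)

# Test duration then follows from traffic: total sample across
# variations divided by daily visitors.
days = math.ceil(needed * 2 / 5000)  # hypothetical 5,000 visitors/day
```

Dividing the total required sample by daily traffic is what turns a sample size into the estimated test duration the calculator reports.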
July
- Contextual bandits are now generally available for personalization campaigns. Contextual bandits help you deliver the most personalized variation to each user by dynamically re-allocating traffic based on the primary metric and user attributes.
- Moved the checkbox to enable sending experiment data to Google Analytics 4 (GA4) through Google Tag Manager (GTM) from the experiment-level to the project-level.
- Released ratio metrics for A/B and multivariate tests, which let you select different events for the numerator and the denominator to reflect business-specific key performance indicators, such as revenue per add-to-cart click or feature use per account. See Create a ratio metric in the metric builder for instructions.
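Conceptually, a ratio metric divides the summed value of one event (the numerator) by the count of another (the denominator), per variation. The sketch below illustrates the idea on a toy event log; the event names and log format are hypothetical, not Optimizely's actual data model:

```python
from collections import defaultdict

# Hypothetical event log: (variation, event_name, value).
events = [
    ("control",   "add_to_cart", 1),
    ("control",   "revenue",     40.0),
    ("control",   "add_to_cart", 1),
    ("control",   "revenue",     20.0),
    ("control",   "add_to_cart", 1),
    ("variation", "add_to_cart", 1),
    ("variation", "revenue",     90.0),
    ("variation", "add_to_cart", 1),
]

def ratio_metric(events, numerator, denominator):
    """Sum of numerator event values divided by count of denominator
    events, computed independently for each variation."""
    num = defaultdict(float)
    den = defaultdict(int)
    for variation, name, value in events:
        if name == numerator:
            num[variation] += value
        elif name == denominator:
            den[variation] += 1
    return {v: num[v] / den[v] for v in den}

result = ratio_metric(events, numerator="revenue", denominator="add_to_cart")
# control: 60.0 revenue / 3 clicks = 20.0; variation: 90.0 / 2 = 45.0
```

The same shape covers the other examples in the release note, such as feature use per account, by swapping the numerator and denominator events.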
June
- Added the ability to instantly deploy a winning variation.
May
- Added the ability to upload a screenshot for each variation.
- Removed the /metrics endpoints from the Optimizely API. Use the Metrics Hub Service API's /metrics endpoints instead.
- Updated the Experimentation Usage & Billing dashboards to include monthly active users (MAUs) by experiment and project.
- Released Optimizely Opal Chat for Experimentation. Opal automates tasks, surfaces insights, and guides decision-making.
- Released the Optimizely Opal results summary, which automatically summarizes your A/B test results in plain language.
- Added the ability to ideate with Opal to get test ideas.
- Released the @ExperimentPlan prebuilt instruction agent to get feedback on a test plan from Opal.
Usage and billing update
Effective May 7, 2025, access to Optimizely Opal features across Content Marketing Platform, Web Experimentation, Feature Experimentation, Personalization, Content Management System (SaaS), Collaboration, and Optimizely Data Platform will transition to a credit-based usage and billing model.
For a full list of Optimizely Opal features, see Optimizely Opal and AI features.
April
- Added the Summary page to experiments, which provides a high-level summary of the experiment settings that you can download and share with stakeholders for easier reporting on the experiments you are running.
- Added the ability to conclude and deploy the winning variation of an experiment (beta). This reduces the need for developer resources and lets you allocate traffic to a winning variation. This feature also lets you deploy any variation of a concluded experiment.
- Added the ability to upload images to the visual editor from the Optimizely Digital Asset Management (DAM) library. You must have an Optimizely product that has DAM to use it with Optimizely Experimentation.
- Added the ability to link a Collaboration hypothesis from a new Web Experimentation A/B test. See Link an experiment in Optimizely Web Experimentation to a hypothesis in Collaboration.
March
Experiment menu configuration
- Updated the experiment menu. This UX/UI enhancement streamlines the experiment configuration workflow with more in-app guidance: menu items are now grouped and ordered to walk you through configuring a test step-by-step.
Warehouse-Native Experimentation Analytics
Warehouse-Native Experimentation Analytics is now generally available. The integration brings warehouse-native Optimizely Analytics to Feature Experimentation and Web Experimentation. Teams can analyze experiment performance, identify winning variations, and conduct deeper analyses on experiments while ensuring data security and privacy and avoiding data duplication or movement.
- Enhance experimentation results by integrating Optimizely Experimentation data with additional insights from your data warehouse. See the scorecard documentation for more information.
- Specify key user interactions to assess engagement and evaluate impact using custom events.
- Create specific experiment-focused metrics (such as conversion, numeric aggregation, ratio, and more).
- Segment users by common behaviors into cohorts for precise analysis and targeted insights.
- Create custom metrics and derived columns to transform data to gain deeper insights.
- Use the Stats Engine to ensure reliable results and advanced analysis capabilities.
- Use CUPED to reduce the impact of random variation and surface insights quicker.
- Switch effortlessly between configuring experiments and conducting deep experimentation analysis from both Feature Experimentation and Web Experimentation.
- Filter results by user segments, analyze trends over time, and track variation performance through designated funnels via Experimentation Analytics > Explore.
- Manage users through the Opti ID Admin Center, giving you a single login point to switch among your Optimizely products. See the Opti ID documentation to learn more.
- Updated the Analytics UI to match Optimizely styling.
Learn more about Warehouse-Native Experimentation Analytics.
February
- Released the AI variation summary, which lets you use AI to generate descriptions for your variations, summarizing the element or custom code changes made and stating each variation's purpose. This helps you understand how changes affect variation results so you can make informed decisions about your experiments and campaigns.
January
- Released experience templates (formerly extensions) for general availability, which lets you streamline campaign and experiment creation and reduce code duplication. You can use the pre-built templates provided by Optimizely to quickly create a campaign or experiment with little to no development or build your own. See Get started with experience templates for information.
- Enhanced how Preview mode loads by only querying the relevant pages associated with the experiment. This has led to improvements such as decreasing the load time for some users from 30 seconds to 3 seconds.