System tools for Web Experimentation
System tools are built-in features that help Optimizely Opal take action. Each tool performs a specific task, such as creating a campaign, uploading files, or generating images. Think of tools like attachments on a Swiss Army knife. Each one has a distinct purpose that helps you get work done.

In addition to the system tools available in Opal, Optimizely Web Experimentation includes a set of system tools designed to support your experimentation strategy and help with changes in the Visual Editor. These Opal system tools are grouped by functionality to make it easier to find the right tool for your workflow.

You can ask Opal what tools it has at any time. For example, enter "Please list the tools you have with a brief description of what they do and the parameters" into Opal Chat.

Click a tool's name to expand it and learn when to use it, its required and optional parameters, and example prompts for calling the tool. If you do not provide a required parameter, Opal prompts you for it.

Experimentation context

  • Opal cannot look up detailed results for a specific experiment you ask about by name. However, it can show your top-performing and underperforming experiments with their lift and significance data. For full experiment results, use the Optimizely Experiment Results page.
  • The feature is not connected to the Optimizely Analytics product. Opal does not have access to Analytics explorations or custom analyses.
  • Results data support is still evolving. While Opal can surface high-level performance information, deeper results integration is still in progress.
exp_get_schemas – Retrieves detailed schemas for various Optimizely entities relevant to Web Experimentation. This is a mandatory first step before using the exp_execute_query tool to retrieve actual data about these entities.
  • When to use
    • If you are new to querying or unsure which entities (like experiment, page, audience, event, campaign, experience, extension) are available in Web Experimentation.
    • Before you write any query using exp_execute_query, you need to know the exact field names, their data types, and any relationships for the entities you are interested in.
    • To see how, for example, experiments are linked to pages or audiences.
  • Parameters
    • entities – A list of entity types for which you want to retrieve schemas. For Web Experimentation, common entities include experiment, page, audience, event, campaign, experience, and extension.
    • (Optional) include_dependencies – A boolean value that, when set to true, includes schemas for related entities in the response.
  • Example prompts
    • What fields can I query for my A/B tests?
    • Show me the schema for the 'page' entity, including any related entities.
    • List all the properties of an audience entity in Web Experimentation.
    • What are the available fields for events that I can use in my web experiments?
exp_execute_query – Executes a template-based query to fetch specific data from your Optimizely Web Experimentation instance. It lets you retrieve detailed information about various entities. 

Example entities in Web Experimentation include the following:

  • Experiments – A/B tests, multivariate tests, and their variations, statuses, and metrics.
  • Pages – The specific URLs or page types where your experiments run.
  • Audiences – The user segments that your experiments target.
  • Events – The conversion events and metrics you track.
  • Extensions – Integrations and custom code snippets.
Before using the exp_execute_query tool, you should use the exp_get_schemas tool to understand the available fields and structure for the entities you want to query. This ensures you build your query template correctly.
  • When to use
    • Get a list of experiments. For example, all running A/B tests or experiments that are paused.
    • Filter experiments based on their status, associated pages, targeted audiences, or primary metrics.
    • Retrieve specific details about a particular experiment, page, or audience.
  • Parameters
    • template – A complete query template that defines the steps, filters, and fields you want to retrieve. The template is written in a specific format that outlines how to fetch and structure the data.
    • (Optional) project_id – If you want to query data within a specific project, provide its ID. If you do not provide a project_id, the query attempts to run across all projects you have access to. You can often find the project_id in your Optimizely URL (for example, https://app.optimizely.com/v2/projects/1234567890/...).
  • Example prompts
    • Show me all currently running A/B tests in my project.
    • List all pages that have active experiments on them.
    • Find all experiments that use the 'Homepage' page and are targeting the 'New Visitors' audience.
    • Give me the details of the 'Red CTA Button' experiment, including its variations and primary metric.
exp_program_reporting_top_experiments – Helps you identify and analyze your highest-performing experiments (A/B tests, multivariate tests, and so on) within Optimizely Web Experimentation. It provides a program-level view of success by ranking experiments based on their lift values.

The tool returns detailed information for each experiment, including the following:

  • Layer Name – The human-readable name of the experiment layer.
  • Project ID – The identifier of the project the experiment belongs to.
  • Variation Name – The name of the variation that achieved the lift.
  • Metric Name – The specific metric used to calculate the lift.
  • Relative Lift Value – The numerical lift value (can be positive for winning or negative for losing).
  • Is Significant – Indicates whether the result is statistically significant.
  • Winning Direction – The direction of the winning variation (for example, "increasing").
  • Start Date – When the experiment began. 
  • Last Modified – When the experiment was last updated.
The exp_program_reporting_top_experiments tool does not include results data for experiments that use Stats Accelerator as the traffic distribution mode.
  • When to use
    • Quickly find which A/B tests have generated the most significant positive or negative impact.
    • Understand the overall success or areas for improvement within your experimentation program.
    • Easily gather data to showcase successful strategies or learn from underperforming tests.
    • Compare experiment results over different periods or across projects.
    • Use insights from past top experiments to guide the planning of new tests.
  • Parameters
    • date_range – The inclusive start and end timestamps in ISO-8601 format (for example, YYYY-MM-DDTHH:MM:SSZ or with offset). This defines the period over which experiments are evaluated.
    • (Optional) direction – Specifies whether to rank by "winning" (highest positive lifts) or "losing" (largest negative lifts).
    • (Optional) metric_name – The name of the metric to use for calculating lift. If omitted, it defaults to the primary metric.
    • (Optional) page_size – The maximum number of experiments to return. Must be between 1 and 100 (inclusive). If omitted, the backend applies its own default.
    • (Optional) project_ids – A list of numeric project IDs to filter the results by. If omitted or empty, the tool may aggregate across all authorized projects.
  • Example prompts
    • Show me my top 20 winning experiments in 2025.
    • What were the biggest losing experiments between 2024-06-01 and 2024-09-30?
    • Show me the top 5 experiments that improved the 'Add to Cart' metric this year.
    • List the top 10 experiments with the highest positive lift for project ID 19963491707 in the last 6 months.
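The date_range values in these prompts resolve to inclusive ISO-8601 timestamps. As an illustrative sketch only (the start and end key names are assumptions based on the JSON example shown for the underperforming-experiments tool in this article), here is how a "last 6 months" range might be constructed:

```python
from datetime import datetime, timedelta, timezone

def last_n_days_range(days: int) -> dict:
    """Build an inclusive ISO-8601 date_range covering the last `days` days (UTC)."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    # Format as YYYY-MM-DDTHH:MM:SSZ, matching the format in the parameter description.
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return {"start": start.strftime(fmt), "end": end.strftime(fmt)}

# Roughly "the last 6 months" as a date_range dictionary.
date_range = last_n_days_range(180)
print(date_range)
```

The resulting dictionary can then be described in a natural-language prompt or supplied wherever a date_range parameter is expected.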
exp_program_reporting_underperforming_experiments – Helps you identify A/B tests that are not yielding significant or positive results, letting you quickly decide whether to stop, re-evaluate, or iterate on them.
The exp_program_reporting_underperforming_experiments tool does not include results data for experiments that use Stats Accelerator as the traffic distribution mode.
  • When to use
    • Find experiments that are not reaching statistical significance, have confidence intervals that include zero, or show negative lift.
    • Quickly pinpoint tests that you should stop to free up traffic and resources for more promising ideas.
    • Analyze historical data to understand common pitfalls or patterns in underperforming tests.
    • Check the health of your running experiments to intervene if they are clearly not progressing towards a positive outcome.
  • Parameters
    • date_range – Specifies the inclusive start and end timestamps in ISO-8601 format (for example, {"start": "2025-01-01T00:00:00Z", "end": "2025-12-31T23:59:59Z"}). This defines the period for which experiments are analyzed.
    • (Optional) project_ids – A list of specific project IDs to filter the experiments. If omitted, the tool aggregates data across all authorized projects.
    • (Optional) page_size – The maximum number of underperforming experiments to return (between 1 and 100).
    • (Optional) metric_name – The name of the metric to filter by. If omitted, it defaults to the primary metric on the backend. Examples include "primary", "Add to Cart", or "conversion_rate".
  • Example prompts
    • Show me all underperforming A/B tests from the last quarter.
    • Which experiments should I stop or review from the last 6 months for Project X?
    • Find experiments with low significance for the 'Add to Cart' metric this year.
    • List all experiments that are not performing well between January 1, 2025, and June 30, 2025.
    • Are there any underperforming tests in my 'Website Redesign' project?
exp_program_reporting_win_rate – Gives you a high-level overview of the success of your experimentation program by calculating the win rate over a specified period. It is a great way to track the overall effectiveness of your A/B tests.
The exp_program_reporting_win_rate tool does not include results data for experiments that use Stats Accelerator as the traffic distribution mode.

The win rate is defined as the following:

  • Wins – Experiments that are positive and statistically significant (based on a selected metric).
  • Total – All paused and concluded experiments within the specified scope.
  • Win rate – (Wins ÷ Total) × 100
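As a quick illustration of this arithmetic (the counts here are hypothetical, not real data):

```python
def win_rate(wins: int, total: int) -> float:
    """Win rate = (wins / total) * 100, where total counts paused and concluded experiments."""
    if total == 0:
        return 0.0  # Avoid division by zero when no experiments are in scope.
    return (wins / total) * 100

# Hypothetical example: 12 significant positive results out of 48 concluded experiments.
print(win_rate(12, 48))  # 25.0
```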
  • When to use
    • Get a quick snapshot of how successful your experimentation program is over time.
    • Monitor changes in your win rate to see if your experimentation strategy is improving.
    • Provide a clear, high-level metric on the ROI and impact of your A/B testing efforts.
    • Understand if certain periods or projects have higher or lower win rates, which can inform resource allocation or strategic adjustments.
    • Answer questions like the following:
      • "What is our overall experiment win rate?"
      • "How many experiments did we win last quarter?"
      • "What is the win rate for experiments focused on our primary conversion metric?"
  • Parameters
    • date_range – The inclusive start and end timestamps in ISO-8601 format (for example, YYYY-MM-DDTHH:MM:SSZ or with offset). This defines the period over which the win rate is calculated.
    • (Optional) project_ids – A list of numeric project IDs to filter the experiments by. If omitted or empty, the tool may aggregate across all authorized projects.
    • (Optional) metric_name – The name of the metric used to determine what constitutes a "win" (for instance, a positive and significant result). If omitted, it defaults to the primary metric configured on the backend.
  • Example prompts
    • What was our experiment win rate for the last quarter?
    • Calculate the win rate for all experiments in project ID 19963491707 between January 1st, 2025, and March 31st, 2025.
    • Show me the win rate for experiments focused on the 'signup_completion' metric this year.
    • What is the overall win rate across all our web experimentation projects for the past 6 months?

Visual Editor

exp_suggest_visual_editor_component_improvements – Used by Opal's Variation Development Agent as an expert system that acts as a web design, user experience (UX), and conversion rate optimization (CRO) specialist. It analyzes a selected section of your website within the Visual Editor and generates multiple distinct design variations. Each variation is a cohesive concept with several interrelated, actionable suggestions across different categories to enhance the user experience and achieve specific design goals.
  • When to use
    • Get creative and actionable design improvement ideas for a specific component or section of your website.
    • Explore multiple conceptual approaches to enhance a part of your page (for example, one variation might focus on conversion, another on accessibility, and another on visual polish).
    • Receive categorized suggestions (like clarity and readability, visual hierarchy, UX, engagement, accessibility, responsiveness, and brand consistency) to guide your design decisions.
  • Parameters
    • When you ask Opal to use this tool, describe the component or section you want to improve and the overall goal or focus for the improvements. The Visual Editor's context automatically supplies the technical details of the selected element (such as its HTML and CSS), so you do not directly provide parameters like design_suggestions in your prompt. Instead, your input is a natural language request for improvements, and the tool generates and returns the design_suggestions as its output.
  • Example prompts
    • Suggest design improvements for this hero section to increase engagement.
    • Give me some variations to improve the conversion rate of this product display area.
    • How can I make this checkout summary section more user-friendly and visually appealing?
    • I need ideas to improve the readability and accessibility of this blog post content block.
exp_ve_apply_change – Applies a pending change to the Visual Editor.
  • When to use
    • When you have reviewed and confirmed a Visual Editor change and want to make it live.
    • After creating a change using other exp_ve tools and you are ready to commit it.
    • To publish a set of modifications to an experiment or personalization.
  • Parameters
    • change_id – The ID of the change to apply.
  • Example prompts
    • Apply the visual editor change with ID change-123.
    • I'm ready to publish the changes for my experiment. Please apply change change-abc.
    • Can you apply the pending change change-xyz to the visual editor?
exp_ve_element_glob – Finds elements in the Visual Editor DOM that match a glob pattern.
  • When to use
    • To locate multiple elements with similar IDs or class names using a wildcard pattern.
    • When you need to identify a group of elements for a bulk operation. For example, changing text on all buttons matching btn-*.
    • To explore the structure of a page and find elements without knowing their exact selectors.
  • Parameters
    • pattern – The glob pattern to match against element IDs or other attributes.
    • (Optional) url – The URL of the page to search. If not provided, Opal uses the current page in the Visual Editor. 
  • Example prompts
    • Find all elements on the current page that have an ID starting with product-*.
    • Glob for elements matching *.promo-banner on https://www.example.com/.
    • Show me all elements that match the pattern div#main-content > p.*.
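To illustrate glob semantics (this sketch demonstrates the wildcard style only and does not reproduce the tool's internal matching), Python's fnmatch module applies the same pattern style to a list of hypothetical element IDs:

```python
from fnmatch import fnmatch

# Hypothetical element IDs from a page, for illustration only.
element_ids = ["product-101", "product-202", "promo-banner", "btn-checkout"]

# '*' matches any run of characters, so 'product-*' selects only the product IDs.
matches = [eid for eid in element_ids if fnmatch(eid, "product-*")]
print(matches)  # ['product-101', 'product-202']
```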
exp_ve_element_grep – Searches for elements in the Visual Editor DOM whose content or attributes match a regular expression.
  • When to use
    • To find elements containing specific text, like "Add to Cart" buttons.
    • To locate elements with attributes, like data-test-id, that match a certain pattern.
    • When you need to identify elements based on complex text or attribute patterns not covered by simple selectors.
  • Parameters
    • pattern – The regular expression pattern to search for.
    • (Optional) url – The URL of the page to search. 
    • (Optional) attribute – The attribute to search within, for instance, id, class, or text. If not provided, Opal searches the element's text content. 
  • Example prompts
    • Find elements with text content matching Buy Now|Add to Cart.
    • Grep for elements where the data-component attribute contains navigation.
    • On https://www.example.com/, find all h2 elements whose text matches ^Product [A-Z].
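In contrast to glob patterns, regular expressions match by pattern within text. A sketch using Python's re module on hypothetical button labels shows how an alternation like the one in the first prompt behaves:

```python
import re

# Hypothetical element text content, for illustration only.
texts = ["Buy Now", "Add to Cart", "Learn More", "Add to Wishlist"]

# The alternation 'Buy Now|Add to Cart' matches either literal phrase.
pattern = re.compile(r"Buy Now|Add to Cart")
matches = [t for t in texts if pattern.search(t)]
print(matches)  # ['Buy Now', 'Add to Cart']
```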
exp_ve_element_read – Reads the content or attributes of a specific element in the Visual Editor.
  • When to use
    • To inspect the current text content of a heading or paragraph.
    • To retrieve the value of an attribute, such as href from a link or src from an image.
    • To verify the current state of an element before making a change.
  • Parameters
    • selector – The CSS selector of the element to read.
    • (Optional) url – The URL of the page where the element is located. 
    • (Optional) attribute – The specific attribute to read. If not provided, it returns the element's text content.
  • Example prompts
    • Read the content of element elem-123 in the visual editor.
    • Can you get the details of the element with ID section-hero?
    • Retrieve the data for element button-cta-main.
    • Show me what is inside element image-banner-top.
exp_ve_element_tree – Retrieves a partial DOM tree for a specified element and its descendants in the Visual Editor.
  • When to use
    • To understand the structure of a specific section of a webpage.
    • When you need to see the children or siblings of an element to formulate a more precise selector.
    • To debug layout issues or identify nested elements.
  • Parameters
    • selector – The CSS selector of the root element for which to retrieve the tree.
    • (Optional) url – The URL of the page.
    • (Optional) depth – The depth of the tree to retrieve. For example, 1 for immediate children or 2 for grandchildren. Defaults to 1.
  • Example prompts
    • Show me the element tree for #footer with a depth of 2.
    • Get the immediate children of the element .product-details.
    • Display the DOM structure around div.sidebar on https://www.example.com/.
exp_ve_get_change_schema – Retrieves the JSON schema for a specific type of Visual Editor change.
  • When to use
    • When you need to understand the required and optional fields for creating a new change. For example, a "set text" change.
    • To validate a change object you are constructing programmatically.
    • To explore the capabilities and parameters of different Visual Editor change types.
  • Parameters
    • change_type – The type of change. For example, set_text, insert_html, set_attribute.
  • Example prompts
    • What is the schema for a set_text change in the Visual Editor?
    • Show me the JSON schema for an insert_html change.
    • I need to create a set_attribute change; what parameters does it require? Provide the schema.
exp_ve_get_selector – Generates a robust CSS selector for a given element in the Visual Editor.
  • When to use
    • When you need a reliable way to target a specific element for a change.
    • If you are having trouble manually crafting a unique selector for an element.
    • To get a selector that is less likely to break with minor DOM changes.
  • Parameters
    • element_id – The ID of the element in the Visual Editor to get the selector for.
    • (Optional) url – The URL of the page where the element is located.
  • Example prompts
    • Get the CSS selector for the element with ID button-buy-now.
    • What is the best selector for the element element-123 on https://www.example.com/product/1?
    • Generate a selector for the element identified as hero-image-container.
exp_ve_list_pending_changes – Lists all pending Visual Editor changes for the current experiment or personalization.
  • When to use
    • To review all modifications that have been made but not yet applied.
    • To check if any changes are awaiting approval or further action.
    • Before applying changes, to ensure no unintended modifications are present.
  • Parameters
    • None.
  • Example prompts
    • List all pending Visual Editor changes.
    • What are the current unapplied changes in the Visual Editor?
    • Show me a summary of all changes that have not been published yet.
exp_ve_revert_change – Reverts a specific pending Visual Editor change, undoing its effects.
  • When to use
    • To undo a change that was made incorrectly or is no longer desired.
    • If a change is causing issues during testing and needs to be rolled back.
    • To clean up unwanted modifications before applying the final set of changes.
  • Parameters
    • change_id – The unique identifier of the change to be reverted.
    • experiment_id – The ID of the experiment to which the change belongs.
    • (Optional) variation_id – The ID of the variation within the experiment where the change was applied.
  • Example prompts
    • Revert change chg-789 in experiment exp-123.
    • Can you undo the change chg-456 for variation var-001 in experiment exp-456?
    • I need to revert a visual editor change. The change ID is chg-101, and it's in experiment exp-789.
    • Please revert change chg-222 in experiment exp-333 for variation var-444.
    • Undo the change with ID chg-555 in experiment exp-666.
exp_ve_validate_change – Validates a Visual Editor change object against its schema to ensure it is well-formed.
  • When to use
    • Before attempting to apply a change, to catch any structural errors.
    • When programmatically constructing a change, to ensure it adheres to the expected format.
    • To debug issues with a change that is not behaving as expected.
  • Parameters
    • change – The JSON object representing the Visual Editor change to validate.
  • Example prompts
    • Validate this Visual Editor change: { "type": "set_text", "selector": "#title", "value": "New Title" }.
    • Check if the following change object is valid: { "type": "insert_html", "selector": ".container", "html": "<div>New Content</div>", "position": "after" }.

    • I have a change object; please validate it for correctness.
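Conceptually, validation checks a change object against the schema returned by exp_ve_get_change_schema. The required fields below are hypothetical (the real schema comes from that tool), but this sketch shows the kind of structural check involved:

```python
# Hypothetical required fields for a 'set_text' change; the real schema
# comes from the exp_ve_get_change_schema tool.
SET_TEXT_REQUIRED = {"type": str, "selector": str, "value": str}

def validate_set_text(change: dict) -> list:
    """Return a list of structural errors; an empty list means well-formed."""
    errors = []
    for field, expected in SET_TEXT_REQUIRED.items():
        if field not in change:
            errors.append(f"missing required field: {field}")
        elif not isinstance(change[field], expected):
            errors.append(f"field {field!r} should be {expected.__name__}")
    return errors

change = {"type": "set_text", "selector": "#title", "value": "New Title"}
print(validate_set_text(change))  # []
```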

If you use Opti ID, administrators can turn off generative AI in the Opti ID Admin Center. See Turn generative AI off across Optimizely applications.