The Optimizely Analytics Model Context Protocol (MCP) server responds to natural language. You do not need to know tool names, API parameters, or query syntax. Describe what you want and your agent selects the right tools to call.
Use these prompts to explore what the Analytics MCP server can do. Each prompt is a complete request you can paste into your AI client. Your agent chains the underlying oa_ tool calls for you.
If you have not connected yet, see the Optimizely Analytics MCP server quickstart.
Get started
- List my Optimizely Analytics apps. – Calls oa_list_apps to retrieve every Analytics app linked to your account for identification and selection.
- Set this as my default app. – Stores your app selection so future sessions skip the selection step.
- What can I do with Optimizely Analytics from here? – Returns an overview of available capabilities (event discovery, exploration building, dashboard creation, and experiment analysis) based on your connected app.
Your agent calls oa_list_apps automatically before any analytics request. When you set a default app, future sessions skip the selection step.
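Behind the scenes, an MCP client wraps each tool invocation in a JSON-RPC 2.0 `tools/call` request, per the Model Context Protocol specification. Here is a minimal sketch of the `oa_list_apps` call your agent makes; the empty arguments object is an assumption, since the real input schema comes from the server's tool listing:

```python
import json

def tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build the JSON-RPC 2.0 tools/call envelope MCP clients send."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# The app-listing call issued before any analytics request:
request = tool_call("oa_list_apps", {})
print(json.dumps(request, indent=2))
```

You never write this payload yourself; it is what "List my Optimizely Analytics apps" becomes on the wire.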
Query analytics data
Find events and properties:
- What events do we track in the onboarding flow? – Searches the event catalog for onboarding-related events and lists every action tracked during user setup.
- Find events in the Acquisition category. – Filters the event catalog to events tagged under the Acquisition category to reveal top-of-funnel tracking.
- What properties can I group page views by? – Lists the dimensions attached to page-view events so you can choose a grouping or segmentation axis for your exploration.
- What datasets are available for mobile users? – Queries the dataset catalog and filters to data sources relevant to mobile users or mobile-specific segments.
Find saved content:
- Find dashboards related to activation. – Searches saved dashboards by name and description for those focused on activation metrics.
- What explorations do we have on retention? – Searches the saved explorations library for retention-related analyses and returns their names and IDs.
- List metrics related to revenue. – Returns saved metric definitions whose names or formulas relate to revenue, helping you locate the right metric.
- Find experiments with 'checkout' in the name. – Searches the experiment catalog for A/B tests or rollouts that include checkout in their title.
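When you ask for saved content by name, the agent translates your phrasing into a search-style tool call. A hypothetical sketch of the 'checkout' lookup above using the `oa_find_experiments` tool named in this article; the `query` argument name is an assumption, not the documented schema:

```python
import json

def tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Wrap a tool invocation in the JSON-RPC 2.0 tools/call envelope."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Hypothetical argument name; the tool's real parameters come from the server.
request = tool_call("oa_find_experiments", {"query": "checkout"})
print(json.dumps(request, indent=2))
```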
Build explorations
Describe what you want to measure. Your agent finds the right events, validates filter values, and builds the exploration.
Event Segmentation:
- Break down page views by country this month. – Runs an event segmentation query on page-view events for the current month and groups results by the country property.
- Daily active users for the past 90 days. – Counts unique users per day over the past 90 days and produces a daily active user (DAU) time-series chart.
- Count of completed orders per week, grouped by plan tier. – Tallies completed orders per week and breaks them down by plan tier to compare purchase volume across tiers.
Funnels:
- Show me a funnel from sign-up to first export over the last 30 days. – Builds a sign-up-to-first-export funnel over the past 30 days and reports conversion and drop-off at each step.
- Build a checkout funnel: view cart, add to cart, start checkout, purchase. Last 14 days. – Creates a four-step e-commerce funnel using the specified events over the past 14 days and shows drop-off at each stage.
- Funnel from homepage view to demo request, grouped by traffic source. – Builds a two-step funnel and breaks down results by the traffic-source property to reveal which channels convert best.
Retention:
- How many users came back within 7 days of their first login? – Measures how many users returned within seven days of their first login using a retention analysis.
- Weekly retention for users who completed onboarding last month. – Segments users by onboarding completion last month and charts week-over-week return rates.
- Retention curve for mobile users after their first purchase. – Filters to mobile users, anchors on the first-purchase event, and generates a retention curve over subsequent intervals.
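A funnel prompt is easiest to reason about as ordered steps plus a date range. Here is a sketch of how the four-step checkout funnel above might be represented as structured data; every field name is hypothetical, since the server defines the real exploration schema:

```python
# Hypothetical representation of the checkout funnel prompt as structured data.
checkout_funnel = {
    "type": "funnel",
    "steps": ["view cart", "add to cart", "start checkout", "purchase"],
    "date_range": {"last_n_days": 14},
}

# Drop-off is reported between consecutive steps, so a four-step
# funnel yields three step-to-step transitions.
transitions = list(zip(checkout_funnel["steps"], checkout_funnel["steps"][1:]))
print(transitions)
```

Listing the steps explicitly in your prompt, as in the checkout example, removes any ambiguity about step order.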
Build dashboards
- Create a growth dashboard with DAU, weekly retention, and the signup funnel. – Initializes a dashboard, builds three explorations (daily active users, weekly retention, and signup funnel), and adds each as a tile.
- Add a funnel from sign-up to activation to the growth dashboard. – Builds a sign-up-to-activation funnel exploration and appends it as a tile to the growth dashboard.
- Add a header tile that says 'Acquisition' and arrange the charts in two columns. – Inserts a text tile labeled Acquisition and updates the dashboard layout to a two-column grid using oa_add_text_tile_to_dashboard and oa_arrange_dashboard_tiles.
- Pull up the growth dashboard and add a retention chart for new mobile users. – Retrieves the growth dashboard, builds a retention exploration filtered to new mobile users, and adds it as a tile.
Your agent creates the dashboard, builds each tile (find events, create exploration, add to dashboard), and arranges the layout.
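The layout step can be pictured as an ordered pair of tool calls. A sketch using the two tool names this article mentions (`oa_add_text_tile_to_dashboard` and `oa_arrange_dashboard_tiles`); all argument names here are assumptions:

```python
def tool_call(name: str, arguments: dict, request_id: int) -> dict:
    """JSON-RPC 2.0 tools/call envelope used by MCP clients."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Hypothetical argument names; the server's tool schemas define the real ones.
calls = [
    tool_call("oa_add_text_tile_to_dashboard",
              {"dashboard": "growth", "text": "Acquisition"}, 1),
    tool_call("oa_arrange_dashboard_tiles",
              {"dashboard": "growth", "columns": 2}, 2),
]
for call in calls:
    print(call["params"]["name"])
```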
Analyze experiments
- Analyze the homepage redesign experiment. – Finds the experiment using oa_find_experiments, fetches its scorecard using oa_analyze_experiment, and summarizes variant performance and metric lift.
- What does the checkout flow test show for the last 14 days? – Retrieves the checkout flow experiment's scorecard scoped to the past 14 days and highlights key metric movements across variations.
- Summarize the results of the pricing page A/B test, including statistical significance. – Fetches the pricing page experiment results and produces a summary that includes confidence intervals or p-values for each metric.
- Pull the Explore tab for the onboarding experiment and tell me what stands out. – Retrieves Explore-tab tiles using oa_experiment_explore_tab, analyzes each using oa_analyze_explore, and highlights noteworthy patterns or anomalies.
Your agent finds the experiment with oa_find_experiments, fetches the scorecard with oa_analyze_experiment, and optionally lists or analyzes Explore tab tiles via oa_experiment_explore_tab and oa_analyze_explore.
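That flow is a fixed sequence of tool calls. A sketch of the sequence using the four tool names from this article; the argument names and placeholder values are illustrative only, since each step feeds IDs into the next:

```python
# Tool names come from this article; argument names are hypothetical.
analysis_sequence = [
    ("oa_find_experiments", {"query": "homepage redesign"}),
    ("oa_analyze_experiment", {"experiment_id": "<id from previous step>"}),
    # Optional deep dive into Explore-tab tiles:
    ("oa_experiment_explore_tab", {"experiment_id": "<id from previous step>"}),
    ("oa_analyze_explore", {"tile_id": "<tile from previous step>"}),
]

for name, args in analysis_sequence:
    print(name, sorted(args))
```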
Run metrics and saved content
- Run the DAU metric for the past week. – Executes the saved daily active user (DAU) metric against the past seven days and returns a daily time-series.
- Run the activation rate metric for last month, grouped by signup source. – Fetches the activation-rate metric, scopes it to the previous calendar month, and segments the output by signup-source property.
- Pull up the signup funnel exploration and filter to mobile users. – Loads the saved signup funnel exploration and applies a mobile-user filter.
- Show the activation funnel from last quarter. – Retrieves the saved activation funnel exploration and adjusts the date range to the previous quarter.
Multi-step workflows
You can chain prompts in a single conversation. Your agent keeps context across turns.
Workflow 1: Activation review
- Find the signup funnel exploration. – Searches saved explorations by name to locate and load the existing signup funnel.
- Add an email-verification step before the first export. – Modifies the loaded funnel by inserting an email-verification event as an intermediate step.
- Save the new version to the growth dashboard. – Adds the updated funnel exploration as a tile on the growth dashboard.
Workflow 2: New dashboard from scratch
- Create a retention dashboard for new mobile users. – Creates a dashboard and sets new mobile users as the base segment for all subsequent tiles.
- Add weekly retention, top events in week 1, and conversion to paid. – Builds three explorations (weekly retention, top events in week one, and conversion to paid) and adds each as a tile.
- Add a header and arrange in two columns. – Inserts a header text tile and updates the dashboard layout to a two-column grid.
Workflow 3: Experiment summary
- Find the experiments that ended in the last 30 days. – Queries the experiment catalog for experiments with an end date within the past 30 days.
- Which of those reached statistical significance? – Filters the returned list to experiments where at least one primary metric crossed the significance threshold.
- Draft a one-paragraph summary I can share for each significant result. – Generates a shareable summary for each statistically significant experiment covering the hypothesis, winning variant, and metric impact.
Tips for better results
- Be specific about scope – Mention the time range, segments, and grouping you care about. 'Last 30 days' or 'for mobile users' produces a tighter exploration than 'recent users'.
- Use natural language – You do not need to know event names or category IDs. Describe what you want in product terms; your agent looks them up.
- Filter casing is handled for you – Your agent looks up real stored values via oa_find_column_values before filtering, so describe what you mean and let it match the casing.
- Reference saved content by name or ID – Either works for explorations, dashboards, metrics, and experiments.
- Ask follow-up questions in the same conversation – Context persists, so a follow-up like now break that down by plan tier works without restating the original request.
- Chain prompts for complex tasks – Start broad, then refine. The agent can string together several oa_ tool calls to build dashboards or run an audit.