Understand your influence analysis

This article explains how to interpret the results of an Influence analysis in Optimizely Analytics. It covers the key statistics calculated by the template and describes how to read and understand the individual modules within the exploration template. For information on how to set up an Influence analysis, see Influence analysis overview. The Influence template has the following modules:

Actors module

The Actors module lets you define the primary entity for your analysis and specify which subset of actors to include. For example, set the Actor to Accounts and include only the Enterprise Customers cohort to analyze how group-level activity drives their renewal rate.

The Actors module has the following fields: 

  • Actor field – Selects the dataset that is the fundamental unit of analysis, determining what entity events and metrics are attributed to. When set to Users, every event or metric is attributed to a specific person, making it ideal for optimizing individual user journeys. For example, a streaming site can determine if a user who Watches a Trailer is then more likely to Play Content.
  • Included Actors (Optional) – Filters the actor population to a specific cohort rather than analyzing all actors. Select a previously defined cohort or create a new one by clicking + New Cohort.
    • When your Actor is Users, the template lets you compare influences across audience segments. For example, you can analyze if Using a Discount Code (test event) increases Average Session Length (target metric) for New Users versus Loyal Customers.
    • When your Actor is something else, like Accounts, the template lets you focus on specific business segments or tiers. For example, you can analyze if Adding a Collaborative Document (test event) leads to Subscription Renewal (target event) for only your Enterprise Accounts.
oa-inf-10.png

Test module

The Test module is where you define the events, metrics, or cohorts you want to analyze as potential drivers of your chosen target. To get the most value, align your Test type with your specific analytical goal. Use the Type field to specify whether your test is an Event, Metric, or Cohort. For more information, see Test types.

Test types

Event

When you select Event as the test type, you analyze how specific events influence a target (event, metric, or cohort) by comparing the likelihood or impact for users who did or did not perform those events. For example, you can analyze if an Add to Cart event influences a Make Purchase target differently depending on whether it occurred on a Mobile or Desktop device.

When you select Event as the test type, the following fields display: 

  • Events – Lets you select one or more events as your test variable, such as Add to Cart.
  • Group by Test Property – Groups the analysis by a property of the selected test event. Defaults to Event Type but can be changed. For example, grouping Add to Cart by Device Type shows how influences vary across Mobile, Desktop, and App.

Cohort

When you select Cohort as the test type, you analyze how belonging to a specific user cohort influences your chosen target. The Cohorts field lets you select one or more cohorts as your test variable and compare them against your target to determine their impact. For example, a streaming service can set Premium Subscribers as the test cohort to analyze if they are significantly more likely to Start a New Series compared to the average user base.

Metric

When you select Metric as the test type, you analyze how the value of a specific metric influences your chosen target. This can involve correlation analysis (for Metric → Metric) or impact analysis (for Metric → Event/Cohort). For example, you can analyze if a higher Average session duration (test metric) strongly correlates with an increase in Monthly recurring revenue (target metric) for your subscription platform.

The Test Metrics field lets you add one or more metrics as your test variable by clicking +.

oa-inf-11.png

Target module

The Target module is where you define the specific outcome or behavior you want to understand or influence. For example, define Made Purchase (an event), Average Session Length (a metric), or High-Value Customers (a cohort) as your target depending on your analytical goal.

Use the Type field to specify whether your target is an Event, Metric, or Cohort. For more information, see Target types.

oa-inf-14.png

Target types

Event

Select Event as your target to define a specific user action or conversion point as your primary outcome, and analyze how various test factors (events, metrics, or cohorts) influence the likelihood of it occurring.

The following fields display:

  • Events – Selects the single event you want to set as your target.
  • Direction (Required, if Test is Event) – Defines the temporal relationship between your test and target events:
    • Causes (Before Target) – Analyzes test events that occur before the target event.
    • Effects (After Target) – Analyzes test events that occur after the target event.
  • Max time between test and target (Required, if Test is Event) – Sets the maximum time window (for example, 7 days) within which a test event must occur relative to the target event to be considered relevant.

When Event is your target, you can combine it with the following test types:

  • Events → Event – Select one or more test events that you suspect might influence your target event. You can also select All events to explore broadly. Use Direction (Causes or Effects) and configure the Max time between test and target to define the time window. The stats used in this configuration are Likelihood impact + % of actors who did test.
  • Metrics → Event – Select one or more metrics that you suspect might influence your target event. The stat used in this configuration is Metric impact.
  • Cohorts → Event – Select one or more cohorts that you suspect might influence your target event. The stats used in this configuration are Likelihood impact + % of actors who did test.

For example, if your target is Make purchase and your test is Add to cart, the template calculates the Likelihood Impact by dividing the purchase rate of users who added to cart by the purchase rate of all users; a result of 2x indicates that users who added to cart are twice as likely to complete a purchase compared to the average user.

oa-inf-20.png

Cohort

Select Cohort as your target to define a specific user group as your primary outcome of interest. This is ideal when your goal is to grow a user segment, improve retention, or understand the factors that define important user groups.

The Cohorts field selects a single user cohort as your target. The analysis identifies how test factors associate with belonging to that cohort.

When Cohort is your target, you can combine it with the following test types:

  • Events → Cohort – Select one or more events that you suspect might be associated with your target cohort. The stats used here are Likelihood impact + % of actors who did test.
  • Metrics → Cohort – Select one or more metrics that you suspect might be predictive of membership in your target cohort. The stat used here is Metric impact.
  • Cohorts → Cohort – Select one or more cohorts that you suspect might be associated with your target cohort. The stats used here are Likelihood impact + % of actors who did test.

For example, if your target cohort is Loyalty program subscribers and your test is First time visitors, the template calculates the Likelihood Impact by dividing the percentage of first-time visitors who are in that cohort by the overall percentage of users in that cohort; a result of 1.5x indicates that first-time visitors are 50% more likely to be loyalty members than the average user.

oa-inf-19.png

Metric

Select Metric as the target type when your primary objective is to optimize a quantifiable business outcome. This mode helps you understand the drivers of your key performance indicators by defining a specific, quantifiable metric as your primary outcome of interest.

The Metric field lets you select a single metric as your target. You can add only one metric. 

When Metric is your target, you can combine it with the following test types:

  • Events → Metric – Select one or more events that you suspect might influence your target metric. The stats used here are Metric impact + % of actors who did test.
  • Metrics → Metric – Select one or more metrics that you suspect might correlate with your target metric. The stat used here is Correlation.
  • Cohorts → Metric – Select one or more cohorts that you suspect might influence your target metric. The stats used here are Metric impact + % of actors who did test.

For example, if you select Total purchase revenue as your target metric and Average order value as your test metric, the template calculates the Correlation (Pearson r) between them; a result such as r = 0.85 indicates a strong positive relationship, meaning that as users' average order values increase, their total purchase revenue also tends to rise significantly.

oa-inf-18.png

Filters module

After setting up your influence template by defining your target, test, and actor modules, the Filters module lets you narrow down the data presented in your visualizations. This makes it easier to answer exploratory questions and drill down into specific segments of your analysis.

For instance, if your main analysis is for All Users, you can use a filter to quickly see results for Mobile Users or users from a specific region without having to reconfigure your entire Actor module. This provides flexibility for ad-hoc exploration and deeper dives into your insights.

Analytics also lets you choose JSON columns in this module. When you click a JSON column, it expands to display the available keys for that column. Choose a key and click Apply; the selected key becomes the display name for that column.

Visualization module

The Visualization module lets you run and view the analysis as charts. You can edit the visualization, inspect the underlying SQL query, and add the chart to a dashboard. Learn about Visualization options and Dashboards.

The following are the sections in this module:

  • # and % – Switch between numbers and percentage values on your chart.
  • Top 'n' segments – Choose the number of top segments you want to display in the visualization.
  • Time range and time grain – Configure the time range and time grain for the analysis.
  • Period-over-period comparison – Compare analysis results from the current period with any previous periods.
  • Sampling – Configure sampling and responses for the analysis.
  • Edit visualization – Inspect a query, edit visualization, add an analysis to a dashboard, and download as CSV.

Switch between numbers and percentages

Click # or % to switch the values on the Y-axis between absolute values and percentages. This is useful when you work with multiple segments, as absolute values provide a clearer breakdown of the number of individuals in each segment.

Top 'n' segments

To select the number of top segments to display in your impact analysis, click the Top drop-down list in the visualization window and enter the desired number. Alternatively, use + and – to adjust your input. The total count of selected events determines the top attribute values. You can also set the number of top 'n' segments when creating beelines.

opti-imp-10.png

Time range and grain

  • Time grain – Choose the granularity of analysis, such as daily (one day), weekly (seven days), and so on.
  • Time range – Choose the complete period during which events are considered for the analysis. Examples include the last two years or the time range between two specific dates.

You can set the time range using a drop-down list or choose from the quick options and iterate through different choices of time range and time grain without leaving the chart. You can also set a lag by clicking Offset and adjusting the Ending.

opti-imp-11.png

Period-over-period comparison

Analytics lets you perform period-over-period comparisons within explorations. You can compare analysis results from the current period with any previous period, such as the previous day, week, month, or year, helping you understand how metrics have changed over time. For information, see Period-over-period comparison in explorations.

Sampling

Click Sampling Mode to configure the sampling modes for your exploration. The drop-down list displays the following sampling modes:

  • Enabled – Faster Response
  • Enabled – Higher Precision

For information, see Sampling.

opti-imp-12.png

Edit visualization section

Click More Options (⋮) to access the options that let you customize the visualization.

  • Inspect Query – Modify the SQL and NetScript queries using the Query Inspector, which includes SQL, Warehouse SQL, and NetScript tabs. Click Open as SQL Explore or Open as NetScript Explore to make changes in the respective sections.

    opti-imp-13.png
  • Add to Dashboard – Add your created exploration to a dashboard by choosing Pick a dashboard or New Dashboard.

    opti-imp-14.png
  • Edit visualization – Customize the chart settings through Data and Style.
    • Data – Enable automatic refresh, select the chart type, configure the X-axis and Y-axis, set up tooltips, and adjust size.
    • Style – Configure the following:
      • Display legend – Control the display of the legend on the chart.
      • Legend position – Configure where you want to position the legend. The options available are top, bottom, left, and right.
      • Stacked chart – Enable this option if you want to see values for different categories stacked on top of each other within a single bar or along a single axis.
      • Compact layout – Create sparklines to visualize trends across multiple measures with compact, in-line charts to spot patterns and gain insights at a glance.
      • Tooltip mode – Configure the tooltip position. The options available are nearest, X-axis, Y-axis, and index.
  • Download As – Download your analysis in CSV or Excel format, with column formatting preserved.

Statistics interpretation

Once you define your target, test, actors, and time range, the Influence template automatically calculates key statistics to quantify the relationships in your analysis. This module provides a detailed breakdown of these statistical outputs, helping you understand the "why" and "how much" behind the influences you identify.

The specific statistical interpretations available depend on the combination of your chosen test and target types. At a high level, these statistics help you understand:

  • Likelihood impact – Measures how much more (or less) probable an outcome is given a specific behavior or group. 
  • % of actors who did test – Quantifies the proportion of your audience engaging with a particular test.
  • % of actors who are in test cohort – Quantifies the proportion of your total audience that belongs to a specific test cohort.
  • Metric impact – Determines the quantitative change in a metric's value due to a test factor.
  • Correlation – Identifies whether two metrics tend to move together.

Likelihood Impact

This statistic measures how much more (or less) likely users who performed a test action are to also perform the target action, compared to those who did not. It helps you identify which user behaviors or features are most influential in driving a target outcome.

It applies to the following test → target types:

  • Events → Event
  • Cohorts → Event
  • Events → Cohort
  • Cohorts → Cohort

Calculation – (Conversion rate of test group) / (Baseline conversion rate)

  • Conversion rate of test group – The percentage of users who performed the test event (or belong to the test cohort) and subsequently performed the target event (or belong to the target cohort).
  • Baseline conversion rate – The percentage of all users who performed the target event (or belong to the target cohort). 

Example

If users who trigger the Product Video Viewed event are 3x more likely to also trigger the Add to Cart event, it shows videos strongly drive purchase intent.

Imagine you have 100 total users included in the analysis. Out of these 100 users, 20 (20%) Add to Cart. This is your baseline conversion rate. Now, say 50 users triggered Product Video Viewed (your test group). Out of those 50 video watchers, 30 (60%) went on to Add to Cart.

The likelihood impact is calculated as: (60% conversion rate for video watchers) / (20% baseline conversion rate) = 3x. A result of 3x means video watchers are three times more likely to add to cart, highlighting the video's strong influence.
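The arithmetic above can be sketched in a few lines of Python. The counts are the hypothetical ones from the example; this only illustrates the ratio, not Optimizely's internal implementation:

```python
# Hypothetical counts from the video-watcher example.
total_users = 100        # all users in the analysis
target_total = 20        # users who Add to Cart (the baseline)
test_group = 50          # users who triggered Product Video Viewed
test_and_target = 30     # video watchers who also Add to Cart

baseline_rate = target_total / total_users       # 0.20
test_group_rate = test_and_target / test_group   # 0.60
likelihood_impact = test_group_rate / baseline_rate

print(f"Likelihood impact: {likelihood_impact:.1f}x")  # prints 3.0x
```

A value above 1x means the test behavior is associated with a higher conversion rate than the baseline; a value below 1x means it is associated with a lower one.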

The following Events → Event analysis tests multiple events against the Make Purchase target for Users within a 7-day window. The scatter plot shows that Browse, View Content Details, and Search are the strongest drivers of purchase, sitting in the Higher Impact, Higher Reach quadrant — meaning they are both widely performed and highly influential. Pause Content, Resume Content, and Play Content have an even higher likelihood impact but are performed by fewer users, while Marketing Touchpoint shows comparatively limited reach and influence.

% of actors who did test

This statistic tells you what percentage of your total audience actually performed a specific test action or belonged to a certain group (cohort). It is the proportion of your entire user base that engaged with the test. This helps you understand the reach or adoption of a particular feature or campaign, giving context to other impact metrics.

It applies to the following test → target types:

  • Events → Event
  • Cohorts → Event
  • Events → Cohort
  • Cohorts → Cohort
  • Events → Metric
  • Cohorts → Metric

Calculation – (Count of users who did the test event) / (Count of all users)

Example

If only 10% of your total user base triggered the Promotional Banner Clicked event, it tells you the banner's reach is limited.

Imagine you have 100 total users. Only 10 of those users triggered the Promotional Banner Clicked event.

The % of actors who did the test is calculated as: (10 users who clicked) / (100 total users) = 10%. This shows that only a small fraction of your audience interacted with the banner.
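A minimal sketch of the same calculation, using the hypothetical counts from the banner example:

```python
# Hypothetical counts from the banner example.
total_users = 100
did_test = 10  # users who triggered Promotional Banner Clicked

pct_actors_who_did_test = did_test * 100 / total_users
print(f"% of actors who did test: {pct_actors_who_did_test:.0f}%")  # prints 10%
```

A low value here does not make a test unimportant; it means any impact you find applies to a narrow slice of your audience.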

The following screenshot shows a Cohorts → Cohort analysis that tests three cohorts — Made Purchase within 7 Days of Signup, Regular Watchers, and Viewed Recommended Content — against the target cohort Loyalty Program Subscribers for All Users.

The scatter plot shows that Viewed Recommended Content is the strongest predictor of loyalty program membership, sitting in the Higher Impact, Higher Reach quadrant — meaning it is both widely adopted and highly influential. Made Purchase within 7 Days of Signup shows moderate reach and impact, while Regular Watchers is the weakest predictor of the three. Overall, engaging with recommended content appears to be the most significant behavioral driver of loyalty program membership, though these relationships should be validated through further experimentation.

% of actors who are in test cohort

This statistic tells you what percentage of your total audience belongs to a specific test cohort. It is the proportion of your entire user base that falls within the defined group you are testing. This helps you understand the size and reach of a particular cohort, giving context to other impact metrics and helping you assess how representative or niche the group is within your overall audience.

It applies to the following test → target types:

  • Cohorts → Event
  • Cohorts → Cohort
  • Cohort(s) → Metric

Calculation – (Count of users who belong to the test cohort) / (Count of all users)

Example

If only 15% of your total user base belongs to the Loyalty Program Members cohort, it tells you this is a relatively small but potentially high-value segment of your audience.

Imagine you have 100 total users. Only 15 of those users belong to the Loyalty Program Members cohort.

The % of actors who are in the test cohort is calculated as: (15 users in cohort) / (100 total users) = 15%. 

This shows that while the cohort represents a small fraction of your overall audience, any strong influence this group has on your target outcome is particularly meaningful given its limited size.
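In practice the template derives this share from your cohort definition; the following sketch uses a hypothetical membership table to show the same arithmetic:

```python
# Hypothetical membership flags: the first 15 of 100 users belong to
# the Loyalty Program Members cohort.
in_cohort = [i < 15 for i in range(100)]

cohort_share = sum(in_cohort) * 100 / len(in_cohort)
print(f"% of actors in test cohort: {cohort_share:.0f}%")  # prints 15%
```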

Metric impact

This statistic measures how much a specific test action changes a key number (a metric) for users who performed that action, compared to the average for all users. It is a ratio comparing the average metric value for the test group versus the overall average.

It applies to the following test → target types:

  • Metrics → Event
  • Metrics → Cohort
  • Events → Metric
  • Cohorts → Metric

Calculation – (Average target/test metric value for the test group or target cohort) / (Average target/test metric value for all users)

  • Average target/test metric value for test group / target cohort – The average value of the target/test metric for users who performed the test event (or belong to the target/test cohort).
  • Average target/test metric value for all users – The average value of the target/test metric across all users.

Example

If users who trigger the Chatbot Interaction event have an Average Order Value that is 1.2x higher, it suggests that the chatbot boosts spending.

Consider 100 total users. The Average Order Value across all 100 users is $50. Say 30 of those users had a Chatbot Interaction; for these 30 users, the Average Order Value is $60.

The Metric Impact is calculated as: ($60 Average Order Value for chatbot users) / ($50 Average Order Value for all users) = 1.2x.  

This means that chatbot users spend 20% more on average.
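The chatbot example reduces to a single ratio; a sketch with the hypothetical averages from above:

```python
# Hypothetical averages from the chatbot example.
avg_order_value_all = 50.0       # average across all 100 users
avg_order_value_chatbot = 60.0   # average for the 30 chatbot users

metric_impact = avg_order_value_chatbot / avg_order_value_all
print(f"Metric impact: {metric_impact:.1f}x")  # prints 1.2x
```

A metric impact above 1x means the test group's average exceeds the overall average; below 1x means it falls short of it.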

Correlation

This statistic measures whether two metrics tend to move together, producing a value between -1 and +1. It identifies potential relationships within your user experience, helping you spot trends and areas for further investigation, but does not imply that one metric causes the other.

It applies to the Metrics → Metric test and target type.

Calculation – Pearson correlation coefficient (r).

  • Values range from -1 to +1
    • +1 – Perfect positive correlation (the metrics increase and decrease together).
    • -1 – Perfect negative correlation (as one metric increases, the other decreases).
    • 0 – No linear correlation.

Example

A strong positive correlation (for example, r = 0.9) between two metrics means the two metrics tend to increase together. For example, users who spend 60 seconds on a product page might have a 5% purchase rate, while users who spend 120 seconds might have a 10% purchase rate.

However, correlation does not imply causation; other factors may explain the relationship. For example, highly motivated buyers may naturally spend more time on a page before converting. Use this as a signal to form a hypothesis and validate it through controlled experimentation before drawing conclusions.
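Pearson's r can also be computed by hand to see what drives the value the template reports. This sketch uses made-up per-user values in the spirit of the time-on-page example, not real Optimizely data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    std_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (std_x * std_y)

# Hypothetical per-user values: seconds on a product page vs. purchase rate.
time_on_page = [30, 60, 90, 120, 150]
purchase_rate = [0.02, 0.06, 0.05, 0.10, 0.11]
print(f"r = {pearson_r(time_on_page, purchase_rate):.2f}")  # prints r = 0.94
```

Note that the result is near, but not exactly, +1: the purchase rates mostly rise with time on page but not in perfect lockstep, which is what a strong-but-imperfect correlation looks like.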

The following example is a Metrics → Metric analysis testing three metrics — Average Titles Viewed per User, Average Ad Revenue per User, and Total Purchase Revenue — against the target metric Total Time Viewed (TTV) for All Users over a 30-day period.

The bar chart displays the Pearson Correlation (r) value for each test metric against the target, revealing the following:

  • Average Ad Revenue per User (r = 0.94) has the strongest correlation with Total Time Viewed, suggesting that users who generate more ad revenue tend to watch significantly more content.
  • Average Titles Viewed per User (r = 0.87) also shows a strong positive correlation, indicating that users who watch more titles also tend to accumulate more total viewing time.
  • Total Purchase Revenue (r = 0.82) has the weakest correlation of the three, though it still represents a strong positive relationship, suggesting that purchasing behavior is also associated with higher total viewing time.

All three metrics show strong positive correlations with Total Time Viewed, indicating that engagement and revenue metrics tend to move together. However, these correlations do not imply causation; they signal relationships worth investigating further through controlled experimentation.