Troubleshoot data discrepancies between Optimizely and your analytics tool

  • Optimizely Web Experimentation
  • Optimizely Feature Experimentation
  • Optimizely Personalization
  • Optimizely Performance Edge

When the visitor or conversion counts on your Optimizely Experimentation Results page do not match another analytics tool, use this article to find the cause, decide whether the discrepancy needs action, and (when it does) narrow it to a specific source. For background on why discrepancies happen, see Discrepancies in third-party data. For deeper, query-level investigation in Experimentation Events Export (E3), see Diagnose data discrepancies with Experimentation Events Export in the developer documentation.

Prerequisites

Confirm the following before you compare data:

What to gather

  • Get a data export from the other analytics tool covering the same time range as Optimizely.

How to compare

  • Compare a single day, or a short range.
  • Exclude the first and last days of an experiment. Traffic ramps and tail-offs distort counts.
  • Compare total visitors before you compare conversions. The simpler the comparison, the easier the cause is to find (see the sketch after this list).
  • Do not compare revenue. Outlier smoothing and high variability mean revenue figures rarely line up between tools.
  • Page views are usually the easiest metric to line up against total visitors.
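
For example, the following sketch lines up daily visitor counts from two CSV exports, drops the first and last days, and reports the day-by-day discrepancy. The file names and column names (date, visitors) are assumptions; adjust them to whatever your tools actually export.

```python
# A minimal sketch for lining up daily visitor counts from two exports.
# Assumes each export is a CSV with hypothetical "date" and "visitors" columns.
import pandas as pd

optimizely = pd.read_csv("optimizely_export.csv", parse_dates=["date"])
other_tool = pd.read_csv("other_tool_export.csv", parse_dates=["date"])

# Join the two exports on date so each row compares the same day.
merged = optimizely.merge(other_tool, on="date", suffixes=("_optimizely", "_other"))

# Exclude the first and last days, where ramp-up and tail-off distort counts.
merged = merged.sort_values("date").iloc[1:-1]

# Percent discrepancy per day, relative to the Optimizely count.
merged["discrepancy_pct"] = (
    (merged["visitors_other"] - merged["visitors_optimizely"]).abs()
    / merged["visitors_optimizely"]
    * 100
)

print(merged[["date", "visitors_optimizely", "visitors_other", "discrepancy_pct"]])
print(f"Average discrepancy: {merged['discrepancy_pct'].mean():.1f}%")
```

Comparing one row per day, rather than a single total, also makes it obvious when the discrepancy is concentrated on a few days instead of spread evenly.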

What to communicate

  • Set expectations with stakeholders that pulling data takes time, and that some discrepancy is expected.

Quick checklist

Work through the following questions. If none of them reveals the cause and the discrepancy is still above 10%, follow Diagnose data discrepancies with Experimentation Events Export in the developer documentation.

  1. Is the discrepancy above 10%? – If not, see Why a small discrepancy is normal. The sketch after this list shows one way to calculate the percentage.
  2. Is the discrepancy in total visitors, or only in conversions? – Always investigate total visitors first, using a sitewide A/A test if possible, with no page activation logic. See Run and interpret an A/A test.
  3. Does the other tool run at the same time as Optimizely? – Cookie walls, consent banners, and redirects can cancel one tool's request while letting the other through.
  4. Does the other tool track the same URLs as Optimizely? – Match the domains and paths exactly.
  5. Is your visitor ID configured correctly in both tools? – The same person should resolve to the same ID in each tool.
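
To make items 1 and 4 concrete, here is a minimal sketch that computes the discrepancy percentage and reduces URLs to domain plus path before you compare what each tool tracks. The counts, thresholds, and example URL are illustrative only.

```python
# Helpers for checklist items 1 (discrepancy size) and 4 (matching URLs).
from urllib.parse import urlsplit

def discrepancy_pct(optimizely_count: int, other_count: int) -> float:
    """Absolute difference as a percentage of the Optimizely count."""
    return abs(other_count - optimizely_count) / optimizely_count * 100

def normalized_url(url: str) -> str:
    """Reduce a URL to lowercase domain + path so query strings don't hide mismatches."""
    parts = urlsplit(url)
    return f"{parts.netloc.lower()}{parts.path.rstrip('/') or '/'}"

pct = discrepancy_pct(optimizely_count=10_000, other_count=10_900)
print(f"{pct:.1f}% discrepancy")  # 9.0% -> worth a quick check, still below the 10% bar

print(normalized_url("https://www.example.com/checkout/?utm_source=ad"))
# www.example.com/checkout
```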

If you have verified all of the previous points and the discrepancy remains, move on to the in-depth checks in the Diagnose data discrepancies developer documentation, which covers the following:

  • Segmentation
  • Timing
  • Visitor ID
  • Referer and URLs
  • Experiments
  • User agents and bots
  • IP addresses
  • Timestamps
  • Attributes
  • Events

Why a small discrepancy is normal

Two analytics tools rarely match one-to-one. Treat a difference of 0–5% as expected, and 5–10% as acceptable but worth a quick check. A discrepancy does not invalidate your results. If your experiment has reached statistical significance, you usually see the same outcome in the other tool, even if it reports slightly more or less traffic.

The most common causes of a small discrepancy are the following.

Client-side network requests

If you send events from the client (Optimizely Web Experimentation or client-side Feature Experimentation SDKs), the visitor's browser sends each request. Some requests never arrive, for reasons that include the following:

  • The visitor closes their browser before the request goes out.
  • The visitor loses network connection.
  • An ad blocker, internet service provider (ISP), or firewall blocks the request.
  • One script fires earlier than another, so one tool starts counting sooner.
  • Some events take longer to reach their destination server than others.

Server-side network requests

When you send events server-side (Feature Experimentation server-side SDK or server-side Google Tag Manager), you have better visibility into whether a request arrived, and you can retry failed requests. Small server-side discrepancies are less common, so if you do see one, investigate at the E3 level.
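
As a rough illustration of the retry behavior, the sketch below posts one event payload and retries on network failures or 5xx responses. The endpoint URL and payload shape are placeholders, not the real Events API; use the endpoint and schema that your SDK or the Events API documentation specifies.

```python
# A minimal sketch of retrying a server-side event request.
import time
import requests

EVENTS_ENDPOINT = "https://example.com/v1/events"  # placeholder, not a real endpoint

def send_event(payload: dict, max_attempts: int = 3) -> bool:
    """Send one event payload, retrying on network errors or 5xx responses."""
    for attempt in range(1, max_attempts + 1):
        try:
            response = requests.post(EVENTS_ENDPOINT, json=payload, timeout=5)
            if response.status_code < 500:
                # 2xx means delivered; 4xx means the payload is wrong, so retrying won't help.
                return response.ok
        except requests.RequestException:
            pass  # network failure: fall through to the retry
        if attempt < max_attempts:
            time.sleep(2 ** attempt)  # simple exponential backoff before the next try
    return False
```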

Segmentation

The Optimizely Experimentation Results page counts only the events fired after a visitor is bucketed into an experiment (after Optimizely assigns the visitor to a variation and records a decision event). Most other analytics tools count events fired both before and after the decision, as long as they fall within the report's date range. This causes conversion-count differences between tools, even when the underlying data is the same.
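
The following sketch applies the same counting rule to a flat event log: conversions count only when they occur at or after the visitor's first decision event. The column names (visitor_id, timestamp, event_type) are hypothetical; map them to the fields in your own export.

```python
# A minimal sketch of the Results page counting rule applied to a flat event log.
import pandas as pd

events = pd.read_csv("events.csv", parse_dates=["timestamp"])

# First decision per visitor: the moment Optimizely bucketed them into the experiment.
first_decision = (
    events[events["event_type"] == "decision"]
    .groupby("visitor_id")["timestamp"]
    .min()
    .rename("decision_time")
)

conversions = events[events["event_type"] == "conversion"].join(
    first_decision, on="visitor_id"
)

# Optimizely-style count: only conversions at or after the decision.
optimizely_style = conversions[conversions["timestamp"] >= conversions["decision_time"]]

# Typical analytics-tool count: every conversion in the date range, decision or not.
print(len(optimizely_style), "vs", len(conversions))
```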

Consent rules

Configure both tools to fire under the same consent conditions; otherwise, your counts differ. For example, one tool might fire only after the visitor accepts Analytical cookies, while the other fires after they accept Personalization cookies, or regardless of consent.
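
The sketch below illustrates how mismatched consent rules split the counts. The tool names and categories are illustrative; the fix is to point both tools at the same consent category.

```python
# A minimal sketch of why mismatched consent rules skew counts.
CONSENT_RULES = {
    "optimizely": "analytical",       # fires after Analytical cookies are accepted
    "other_tool": "personalization",  # fires after Personalization cookies instead
}

def tools_that_fire(accepted: set[str]) -> list[str]:
    """Which tools count this visitor, given the consent categories they accepted."""
    return [tool for tool, category in CONSENT_RULES.items() if category in accepted]

print(tools_that_fire({"analytical"}))                      # ['optimizely'] -> counted in one tool only
print(tools_that_fire({"analytical", "personalization"}))   # both tools count the visitor
```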

Different definitions of a visitor

Some analytics tools define a visitor differently, for example through a different cookie, different parameters, stitching logic, or filters. This is uncommon, and when it does happen, the discrepancy is usually higher than 10%.

Filtering differences

One tool may filter out bots, scrapers, or internal traffic that the other does not, which usually produces a discrepancy above 10%.

Most common causes of a discrepancy above 10%

If your discrepancy is above 10%, the cause is usually one of the following.

You are comparing the wrong data

Misaligned scope between the two reports is the most common cause of a large discrepancy. Look for differences in audience, domain, or source coverage (see the sketch after the following list).

  • The other tool's report is not segmented to the same audience as your Optimizely Experimentation report.
  • The other tool includes more sources or fewer sources (other sites, subdomains) than Optimizely.
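
The following sketch re-scopes the other tool's export before comparing, keeping only the domain and audience segment that the Optimizely report covers. The column names (page_url, segment) and the filter values are hypothetical; filter on whatever dimensions your Optimizely report is actually limited to.

```python
# A minimal sketch of re-scoping the other tool's export before comparing.
import pandas as pd
from urllib.parse import urlsplit

other = pd.read_csv("other_tool_export.csv")

# Keep only the domain that the Optimizely experiment runs on.
other = other[other["page_url"].map(lambda u: urlsplit(u).netloc) == "www.example.com"]

# Keep only the audience segment your Optimizely report is filtered to.
other = other[other["segment"] == "returning_visitors"]

print(other["visitors"].sum())
```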

Events fire at different times

When one tool starts counting before the other, conversion volumes diverge even though the underlying activity is identical.

  • A cookie wall blocks one tool but not the other.
  • A redirect cancels an event request before it can finish.

Bots or automated testing inflate one source

Different bot-filtering rules can leave non-human traffic in one tool but strip it from the other, inflating one source's counts.

  • Optimizely filters bots by user agent, and the other tool may not, or may use a different bot list.
  • Internal Quality Assurance (QA) traffic or pre-render services count as real traffic in one tool but not the other.

If you suspect bot traffic, inspect user_agent and user_ip in your Experimentation Events Export. See Diagnose data discrepancies with Experimentation Events Export in the developer documentation for example queries.
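
For example, a sketch like the following flags rows whose user_agent matches common bot patterns or whose user_ip falls in an internal range. The pattern and IP range are examples, not Optimizely's actual bot list, and the file and column names follow the fields mentioned above rather than a guaranteed export schema.

```python
# A minimal sketch of scanning an export for non-human traffic.
from ipaddress import ip_address, ip_network
import pandas as pd

events = pd.read_csv("e3_export.csv")

BOT_PATTERN = r"(?:bot|crawler|spider|headless|python-requests|curl)"
INTERNAL_RANGE = ip_network("203.0.113.0/24")  # example: your office or QA egress range

# Missing user agents are treated as suspicious (na=True).
events["looks_like_bot"] = events["user_agent"].str.contains(
    BOT_PATTERN, case=False, na=True
)
events["is_internal"] = events["user_ip"].map(
    lambda ip: isinstance(ip, str) and ip_address(ip) in INTERNAL_RANGE
)

print(events[["looks_like_bot", "is_internal"]].mean())  # share of flagged rows
```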

Other known causes

The following less-common causes still account for some discrepancies above 10%. Work through them after you have ruled out the more frequent causes in the previous section.

  • The visitor ID in the other tool is not device-based.
  • The other tool uses an aliasing mechanism, like Google Analytics blended mode.
  • The other tool applies IP filtering or bot filtering that Optimizely is not also configured to apply.
  • Optimizely's bot list filters out HTTP libraries used by some server-side SDKs and the Events API. If your traffic is filtered out, either turn off bot filtering or send a common browser user agent (see the sketch after this list).
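
If you choose to send a browser user agent from server-side code, a sketch like the following sets the header on a reusable session. The endpoint is a placeholder, and the user agent string is only an example of a common browser value, not an official recommendation.

```python
# A minimal sketch of sending a browser-like User-Agent from server-side code
# so the traffic is not dropped by user-agent based bot filtering.
import requests

session = requests.Session()
session.headers["User-Agent"] = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0 Safari/537.36"
)

# The default would be something like "python-requests/2.x", which bot lists
# commonly match. Reuse this session for every event request you send.
response = session.post("https://example.com/v1/events", json={"events": []}, timeout=5)
print(response.status_code)
```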
