Discrepancies in third-party data

  • Optimizely Web Experimentation
  • Optimizely Personalization
  • Optimizely Performance Edge
  • Optimizely Feature Experimentation
  • Optimizely Full Stack (Legacy)

Optimizely Experimentation offers many analytics integrations, developer tools for building custom integrations, and third-party integrations from its partners. When you use these analytics platforms to measure the impact of your A/B tests, you will quickly find that your numbers do not match perfectly from one platform to the next.

If you see a discrepancy in your data, follow these best practices to confirm that all of the platforms and datasets involved are measuring the same thing. If your datasets are not based on the same information, user set, or activity, they probably will not align.

Common causes for discrepancies

The following best practices help you adjust your reports so your data aligns more closely with Optimizely Experimentation results.

Not integrated

If you do not have an integration between Optimizely Experimentation and your other platforms, there is no expectation that they will match.

Users must be tagged with Optimizely Experimentation experiment and variation information when they are exposed to an A/B test. Filter your other platforms’ data to show only records that contain this experiment and variation information.

You can use many built-in integrations. If there is no integration for the platform you want, you can create a custom analytics integration.
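
For example, in Optimizely Web Experimentation you can build a custom integration in Project JavaScript that listens for campaign decisions and forwards the experiment and variation IDs to your other platform. A minimal sketch (sendToAnalytics() is a hypothetical helper; replace it with your platform's tagging call):

    window["optimizely"] = window["optimizely"] || [];
    window["optimizely"].push({
      type: "addListener",
      filter: { type: "lifecycle", name: "campaignDecided" },
      handler: function (event) {
        var decision = event.data.decision;
        // Tag the user so the other platform's reports can be filtered
        // to only those users who were bucketed into the A/B test.
        sendToAnalytics({ // hypothetical helper
          experimentId: decision.experimentId,
          variationId: decision.variationId
        });
      }
    });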

Optimizely Feature Experimentation also integrates with analytics providers. See Set up analytics platforms for ideas on integrating your platforms.

Filters

Refine your results to categories of users and attributes such as web browser, location, language, and plan type. Limit reports to a specific date and time range and filter results from certain IPs. Check the following for consistency from platform to platform:

  • User attributes
  • Date or time range
  • IP filtering
  • Bot filtering

User scope

Make sure you understand how each platform counts so you can account for any differences.

  • Optimizely Web Experimentation and Optimizely Feature Experimentation calculate results based on unique user counts.
  • Optimizely Personalization calculates results based on unique sessions. See how Optimizely Experimentation counts conversions.
  • Other platforms may count results differently, resulting in different counts for users who are tagged as having seen an Optimizely Experimentation experiment.

For example, Optimizely Experimentation and Google Analytics (GA) define “visitor” differently:

  • Google Analytics uses a session-based tracking call, meaning a single visitor can trigger multiple visits over a given period of time (GA Support Article).
  • Optimizely Experimentation, on the other hand, counts unique visitors using a cookie that expires six months after the visitor's last visit.

Events

If you have similar visitor counts but different conversion counts for an event, look closely at the event on each platform. Two similarly named events are not necessarily tracking the same action visitors took.

For example, you may have a “signup completed” event tracking a form submission. In Optimizely Experimentation, you may have this configured as a “submit” button click metric or a confirmation page view metric. Each of these events represents the same thing (the user submitted the form), but they are not tracking the same action, which can lead to a discrepancy. The user may have clicked the "submit" button without filling out the form, or maybe the user submitted the form by pressing Enter, bypassing the button click event.
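
One way to keep such events aligned is to track the form's submit event rather than the button click, so Enter-key submissions are counted too. A minimal sketch using the Optimizely Web Experimentation event API (the form ID and event name are assumptions):

    // Track the submission itself, not the button click, so pressing
    // Enter also counts toward the "signup completed" metric.
    document.getElementById("signup-form").addEventListener("submit", function () {
      window["optimizely"] = window["optimizely"] || [];
      window["optimizely"].push({
        type: "event",
        eventName: "signup_completed" // assumed event name
      });
    });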

Check with your engineers to see how each event is being tracked. Make sure your events track the same things the same way. If they do not, you will have dissimilar data.

Audience

In Optimizely Experimentation, a user's actions only count toward A/B tests for which they still meet the audience conditions. If your other analytics platform continues to track users after they no longer meet the audience condition, your Optimizely Experimentation results page may show lower total conversions.

Attribution

Optimizely Experimentation’s attribution model may differ from those of other analytics platforms.

Optimizely Experimentation has decision-first counting, which means conversions are counted only from users who have sent a decision event. When an A/B test activates, the decision event fires. If a conversion event fires before the decision, it is not counted on the results page but may be counted in your other analytics platform.
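
For example, with the Optimizely Feature Experimentation JavaScript SDK, the decision must fire before the conversion for the conversion to count on the results page. A minimal sketch (the SDK key, flag key, event key, and user ID are placeholders):

    const optimizelySdk = require("@optimizely/optimizely-sdk");
    const client = optimizelySdk.createInstance({ sdkKey: "YOUR_SDK_KEY" });

    client.onReady().then(() => {
      const user = client.createUserContext("user123");
      user.decide("checkout_flow"); // the decision event fires here
      // A conversion tracked before decide() would not be counted on the
      // Optimizely results page, though other platforms may still count it.
      user.trackEvent("purchase");  // counted: the decision came first
    });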

Segment values may also be attributed differently. Optimizely Experimentation attributes segment values at the user level, using the most recent value for each segment in each session.

Other discrepancies

Discrepancies caused by differences in event tracking usually cannot be reconciled without making adjustments and running a new experiment. A new experiment can also track additional information about users that may help a future investigation into the root cause of a discrepancy. The best practice for these types of discrepancies is to run an A/A test, make adjustments, and run another A/A test to verify that the issue was corrected.

Bots

Optimizely Experimentation offers bot filtering for results.

Your analytics platform may not filter out bots, making your results vary. Make sure you are looking at the same user values between Optimizely Web Experimentation, Optimizely Feature Experimentation, and your analytics platform.

Content blockers

Many internet users rely on content blockers such as Adblock or Ghostery to block trackers and advertisers. These content blockers can also block client-side trackers, including Optimizely Web Experimentation.

Content blockers cannot block server-side experimentation with Optimizely Feature Experimentation. Because the impression event and tracking happen on the backend, the user's client-side content blocking has no effect. This can be why Feature Experimentation counts more users than a third-party platform, so it is valuable to know what portion of your end users have content-blocking extensions enabled.

Timing

A difference in counts can also come from the gap between the moment Optimizely Experimentation counts the user and the moment the other platform does. During the page load process, the user's web browser requests the page's resources and sends information to various providers; the page finishes loading only once all of those requests complete and the browser renders the result.

The best practice for Optimizely Web Experimentation is to have your snippet be one of the first resources on your page (near the top of the <head> tag) and to have it load synchronously (blocking). This helps ensure that the visual edits you make in your A/B tests are ready before the page's content renders and prevents the original version of the page from showing before the variation (a flash of original content).
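
For example, a typical placement looks like this (PROJECT_ID is a placeholder for your snippet's ID):

    <head>
      <!-- Load the Optimizely snippet early and synchronously (no async
           attribute) so variations render before the original content. -->
      <script src="https://cdn.optimizely.com/js/PROJECT_ID.js"></script>
      <!-- stylesheets, analytics, and other scripts follow -->
    </head>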


Because the Optimizely snippet runs first, A/B tests activate and page targeting becomes ready early in the page load. Optimizely Experimentation then makes asynchronous requests to fire impression events and page views.

If you are running Optimizely Feature Experimentation on your web server, events may fire before the server even responds to the user's browser, before any page resources begin to download.

General analytics scripts do not need to be among the first things to load; Google Analytics, for example, recommends an asynchronous (non-blocking) implementation. This means Optimizely Experimentation counts users a considerable amount of time before most third-party analytics do. During that gap, users can close the browser tab, lose their internet connection, or click back before the later platform counts them, which usually manifests as Optimizely Experimentation having higher counts than other platforms.

Adjust timing in Optimizely Web Experimentation

You can choose when Optimizely Web Experimentation sends its events, even with the snippet being one of the first things to load. By default, Optimizely Web Experimentation sends events as soon as possible, but if you want to align your data with another platform, it may be worth delaying those events.

Two APIs address this: holdEvents() and sendEvents().

  • holdEvents() – when called, all subsequent events are kept in a queue. Call it before Optimizely Web Experimentation fires any events so all events are held.


  • sendEvents() – when called, the queue created by holdEvents() is sent in a batch as one network request. 


You can deploy these APIs in Project JavaScript or in an external script. Call sendEvents() at about the same time your other platform fires its impression event.
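
A minimal sketch of this pattern (the trigger for sendEvents() depends on when your other platform fires its tracking call):

    // As early as possible, for example at the top of Project JavaScript:
    window["optimizely"] = window["optimizely"] || [];
    window["optimizely"].push({ type: "holdEvents" });

    // Later, once your other platform has fired its tracking call, release
    // the queue as a single batched network request:
    window["optimizely"].push({ type: "sendEvents" });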

Adjust timing in Optimizely Feature Experimentation

With Optimizely Feature Experimentation running on a web server, the request may not even come from a browser. If you have a service that requests the page, make sure the request does not trigger an A/B test activation. Because these requests never render the page or run its JavaScript, Optimizely Feature Experimentation can track an activation that other analytics platforms cannot.
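
A minimal server-side sketch of this check, assuming an Express app, an initialized legacy-project SDK client (client), and placeholder keys; the user-agent patterns and user ID source are also assumptions:

    app.get("/checkout", function (req, res) {
      var userAgent = req.get("User-Agent") || "";
      // Skip activation for non-browser traffic (health checks, crawlers)
      // that will never render the page or run client-side analytics.
      if (!/bot|crawler|spider|monitor/i.test(userAgent)) {
        client.activate("checkout_experiment", req.cookies.userId);
      }
      // ...render and send the page either way...
    });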

Disparities may be most pronounced when comparing Optimizely Feature Experimentation results to client-side analytics implemented after the <head> tag, because that setup has the largest delay between when each platform counts the user.


Optimizely Feature Experimentation gives you control of when to send the impression event. In legacy projects (created before February 2021):

  • Call activate() to send the impression event and return the variation.
  • Call getVariation() to return the variation only.

If your activations happen too soon for your results to align, move the activation into the front end by adding the JavaScript SDK and calling activate() at the end of the page, just before your other analytics platform's JavaScript.
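
A minimal front-end sketch, assuming the JavaScript SDK's browser bundle (which exposes window.optimizelySdk) and placeholder SDK, experiment, and user keys; place it just before your analytics platform's script:

    var client = window.optimizelySdk.createInstance({ sdkKey: "YOUR_SDK_KEY" });
    client.onReady().then(function () {
      // Sends the impression event and returns the assigned variation,
      // close in time to when the analytics platform counts the user.
      client.activate("checkout_experiment", "user123");
    });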

In Optimizely Feature Experimentation experiments, you can use event batching to combine impression and conversion events into a single payload before sending it to Optimizely.
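
With the JavaScript SDK, for example, batching is configured when you create the client; the values below are illustrative:

    var optimizelySdk = require("@optimizely/optimizely-sdk");
    var client = optimizelySdk.createInstance({
      sdkKey: "YOUR_SDK_KEY",
      eventBatchSize: 10,       // flush after 10 events accumulate...
      eventFlushInterval: 1000  // ...or after 1,000 ms, whichever comes first
    });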

In an Optimizely Feature Experimentation experiment running on a web server, get each user's variation when the user requests the page, then use the variation key to respond with one version of the page or the other.
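
A minimal request-handler sketch, assuming an Express app, an initialized legacy-project SDK client (client), and placeholder experiment key, variation key, and template names:

    app.get("/landing", function (req, res) {
      var userId = req.cookies.userId; // assumed user ID source
      // getVariation() buckets the user without sending an impression event;
      // activation can then happen in the front end (see above).
      var variation = client.getVariation("landing_experiment", userId);
      res.render(variation === "treatment" ? "landing-variation" : "landing-original");
    });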

Get support

When you submit support requests for data discrepancies and integrations, provide details including screenshots and code samples. If you are having an issue with a custom integration, send Optimizely every detail of how the integration is set up and what numbers you compare to Optimizely Experimentation's data.

Optimizely cannot investigate experiment results older than 180 days due to internal data retention policies. Additionally, starting October 2022, experiment result data is only retained for 18 months. See the data retention policy.

First-party integrations

File a Support ticket to get help with implementation and result discrepancies for integrations that Optimizely Web Experimentation offers.

Custom analytics integrations

Custom analytics integrations are a developer API offered and supported by Optimizely for the Optimizely Web Experimentation and Optimizely Personalization products. Although Optimizely cannot fully troubleshoot result discrepancies that occur with these integrations, Optimizely Support can review your implementation and provide guidance on using Optimizely's APIs.

Third-party integrations

Many of Optimizely's partners offer integrations with Optimizely Experimentation. These integrations are supported by the partners who develop them; contact the partner's support team for assistance.