This topic describes how to:
- Identify discrepancies in third-party data
- Implement best practices to avoid data discrepancies
Optimizely Experimentation offers many out-of-the-box analytics integrations, developer tools that let teams build custom integrations, and many partner-built third-party integrations. With all of the analytics platforms you can use to measure the impact of your A/B tests, you will quickly find that your numbers do not match perfectly from one platform to the next.
If you are seeing a discrepancy in your data, first make sure you are following some best practices to ensure that every platform or dataset involved is measuring the same thing. If your datasets are not based on the same information, user set, or activity, they will probably not align.
Common causes
The vast majority of data discrepancies we have seen come down to the root causes covered in this section. If you are not comparing the same data, it will probably not match up. Below are some best practices to follow so that your data can be compared to Optimizely Experimentation results. You may find that your results align better just by making adjustments to your reports.
Not integrated
If you do not have an integration between Optimizely Experimentation and your other platforms, there is no reason to expect them to match. Users need to be tagged with Optimizely Experimentation experiment and variation information when they are exposed to an A/B test, and your other platforms' data should be filtered to show only records that contain that experiment and variation information.
We offer many out-of-the-box integrations that can be enabled easily. If there is no out-of-the-box integration for the platform you use, you or your developers can create a custom analytics integration following our Integrations Developer Guide.
Optimizely Feature Experimentation has some integrations with analytics providers. Read our Set up analytics platforms developer documentation for some ideas for integrating your platforms.
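For example, with the Optimizely Feature Experimentation SDKs, one common pattern for tagging users with experiment and variation information is a decision notification listener that forwards each decision to your analytics platform. The following is a minimal sketch assuming the JavaScript SDK; sendToAnalytics is a hypothetical stand-in for your analytics platform's tracking call.

```javascript
// Minimal sketch: forward Optimizely decisions to another analytics platform.
// Assumes the Optimizely JavaScript SDK; sendToAnalytics is a hypothetical
// stand-in for whatever tracking call your analytics platform provides.
const optimizelySdk = require('@optimizely/optimizely-sdk');

const optimizelyClient = optimizelySdk.createInstance({
  sdkKey: '<YOUR_SDK_KEY>', // assumption: replace with your real SDK key
});

optimizelyClient.notificationCenter.addNotificationListener(
  optimizelySdk.enums.NOTIFICATION_TYPES.DECISION,
  ({ userId, decisionInfo }) => {
    // Tag the user with the experiment and variation they were exposed to so
    // the other platform's reports can be filtered to the same users.
    sendToAnalytics({
      userId: userId,
      experimentKey: decisionInfo.experimentKey,
      variationKey: decisionInfo.variationKey,
    });
  }
);
```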
Filters
Optimizely Experimentation and other platforms have many options for results segmentation. You can drill your results down into different categories of users and attributes, such as web browser, location, language, and plan type, narrow reports to a specific date and time range, and filter out results from certain IPs. Check the following for consistency from platform to platform:
- User attributes
- Date/time range
- IP filtering
- Bot filtering
User scope
Optimizely Web Experimentation and Optimizely Feature Experimentation calculate results based on unique user counts, whereas Optimizely Web Personalization computes based on unique sessions. Read more about how Optimizely Experimentation counts conversions. Other platforms may count results differently, resulting in different counts for users who are tagged as having seen an Optimizely Experimentation experiment. It is important to make sure you understand how each platform counts so you can account for any differences.
For example, Optimizely Experimentation and Google Analytics use a different “visitor” definition.
- Google Analytics uses a tracking call that is session-based, meaning a single visitor can trigger multiple visits over a given period of time (GA Support Article).
- Optimizely Experimentation, on the other hand, uses a cookie with a 6-month expiration time (from the last visit) and counts unique visitors.
Events
If you have similar visitor counts but different conversion counts for an event, you will want to look closely at the event on each platform. Two similarly named events are not necessarily tracking the exact same action taken by visitors.
For example, you may have a “signup completed” event tracking a form submission. In Optimizely Experimentation, you may have this configured as a “submit” button click metric or a confirmation page view metric. Each of these events represents the same thing (the user submitted the form), but they are not tracking the same action, which can lead to a discrepancy. The user may have clicked the submit button without filling out the form, or maybe the user submitted the form by pressing “enter”, bypassing the button click event.
Check with your engineers to see how each event is being tracked. Make sure your events track the same things the same way. If they don’t, you will have dissimilar data.
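One way to keep events comparable is to send the same custom event to both platforms from the same point in your code. The following is a minimal sketch assuming the Optimizely Web snippet and Google Analytics (gtag.js) are both on the page; signup_completed is a hypothetical event name configured in both tools.

```javascript
// Minimal sketch: track one user action as one event in both platforms.
// Assumes the Optimizely Web snippet and Google Analytics (gtag.js) are loaded;
// "signup_completed" is a hypothetical event name configured in both tools.
function onSignupFormSubmitted() {
  // Optimizely Web Experimentation custom event
  window['optimizely'] = window['optimizely'] || [];
  window['optimizely'].push({ type: 'event', eventName: 'signup_completed' });

  // Google Analytics event, sent from the same point in the flow
  gtag('event', 'signup_completed');
}
```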
Audience
In Optimizely Experimentation, a user's actions only count toward A/B tests for which they still meet the audience conditions. If your other analytics platform continues to track users after they no longer meet the audience condition, your Optimizely Experimentation results page may show lower total conversions.
Attribution
Optimizely Experimentation’s attribution model may differ from those of other analytics platforms.
Optimizely Experimentation has decision-first counting, which means that we only count conversions from users who have previously sent us a decision event. When an A/B test activates, the decision event fires. If a conversion event fires before the decision, it would not be counted on the results page but may be counted in your other analytics platform.
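In SDK terms, decision-first counting means a conversion only counts toward the experiment if the user's decision event was already sent. The following is a minimal sketch assuming the Optimizely JavaScript SDK in a legacy (pre-February 2021) project; the experiment and event keys are hypothetical.

```javascript
// Minimal sketch of decision-first counting, assuming the Optimizely JavaScript
// SDK in a legacy project; "checkout_experiment" and "purchase" are hypothetical keys.
const optimizelySdk = require('@optimizely/optimizely-sdk');
const optimizelyClient = optimizelySdk.createInstance({ sdkKey: '<YOUR_SDK_KEY>' });

optimizelyClient.onReady().then(() => {
  const userId = 'user123';

  // activate() sends the decision event and returns the assigned variation.
  const variation = optimizelyClient.activate('checkout_experiment', userId);

  // A conversion tracked after the decision is attributed to the experiment.
  // If track() fired before activate(), the conversion would not appear on the
  // Optimizely results page, but another analytics platform might still record it.
  optimizelyClient.track('purchase', userId);
});
```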
Segmentation values may have differing attribution as well. Optimizely Experimentation attributes segment values at the user level using the most recent value for each segment in each session.
Other discrepancies
Discrepancies caused by differences in event tracking probably cannot be reconciled without making adjustments and running a new experiment. A new experiment can track new information about users that might aid a future investigation into the root cause of a data discrepancy. The best practice for these types of discrepancies is to run an A/A test, make adjustments, and run another A/A test to verify that the issue was corrected.
Bots
Optimizely Web Experimentation and Optimizely Feature Experimentation offer bot filtering for results. Your analytics platform may not filter out bots, which can cause your results to differ. Make sure you are looking at the same user values across Optimizely Web Experimentation, Optimizely Feature Experimentation, and your analytics platform.
Content blockers
Many internet users use content blockers such as Adblock or Ghostery to block trackers and advertisers. These content blockers can block any client-side trackers, including Optimizely Web Experimentation.
Content blockers cannot, however, block any server-side experimentation you do with Optimizely Feature Experimentation. Because the impression event and tracking happen in the backend, the user's client-side content blocking has no effect. This could be why Optimizely Feature Experimentation counts more users than a third-party platform, and it is valuable to know what portion of your end users have content-blocking extensions enabled.
Timing
A difference in counts could be due to the distance between the points in time when Optimizely Experimentation counts the user and when the other platform counts the user. During the page load process, the user's web browser requests resources on the page and sends information to various providers. The page is only completely finished loading once all requests have completed and the browser has rendered the page with all of its images.
The best practice for Optimizely Web Experimentation is to have your snippet be one of the first resources on your page (near the top of the <head> tag) and to have it load synchronously (blocking). This helps ensure that the visual edits you make in your A/B tests are ready before the page content renders and prevents the original version of the page from showing before the variation does (a flash of original content).
Since Optimizely Experimentation is one of the first things to run, we activate A/B tests and evaluate page targeting in the order of activation. We send events as soon as possible, so Optimizely Experimentation begins making asynchronous requests to fire impression events and page views right away. If you are running Optimizely Feature Experimentation on your web server, events may fire before the server even responds to the user's browser, before any of the page's resources start to download.
General analytics scripts do not need to be one of the first things to load. In fact, Google Analytics recommends implementing its tag after the <head> tag. This means that Optimizely Experimentation counts users a considerable amount of time before most third-party analytics do. During that gap, where Optimizely Experimentation has counted a user but other services have not yet, users may close the browser tab, lose their internet connection, or click back and never be counted by the later platform. This usually manifests as Optimizely Experimentation having higher counts than other platforms.
Adjust timing in Optimizely Web Experimentation
You can choose when Optimizely Web Experimentation sends its events, even with the snippet being one of the first things to load. By default, Optimizely Web Experimentation sends events as soon as possible, but if you want to align your data with another platform, you may find it worth delaying Optimizely Web Experimentation's events.
We have two APIs to address this: holdEvents() and sendEvents().
- holdEvents() – when called, all subsequent events are kept in a queue. Use this before Optimizely Web Experimentation to hold all events.
- sendEvents() – when called, the queue created by holdEvents() is sent in a batch as one network request.
You or your developers can deploy these APIs in Project JavaScript or in an external script. Make sure to use sendEvents() around the same time as when your other platform fires its impression event.
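For illustration, both APIs are available through the snippet's push interface. The following is a minimal sketch assuming the Optimizely Web snippet is on the page; the window load event is only an example trigger, so release the queue wherever your other platform actually fires.

```javascript
// Minimal sketch, assuming the Optimizely Web snippet is on the page.
// Hold events early, for example from Project JavaScript or a script that runs
// before the snippet sends anything...
window['optimizely'] = window['optimizely'] || [];
window['optimizely'].push({ type: 'holdEvents' });

// ...then release the queued events in one batch around the time your other
// analytics platform sends its own hit. The window load event is only an
// example trigger; use whatever timing matches your other platform.
window.addEventListener('load', function () {
  window['optimizely'].push({ type: 'sendEvents' });
});
```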
Adjust timing in Optimizely Feature Experimentation
With Optimizely Feature Experimentation running on a web server, it might not even be a browser that is making the request. If you have a service that requests the page, make sure that this does not trigger an A/B test activation. Because these types of requests never render the page or run its JavaScript, Optimizely Feature Experimentation can track the activation when other analytics platforms cannot.
Disparities may be most pronounced when comparing Optimizely Feature Experimentation results to client-side analytics implemented after the <head> tag, because that setup has the largest delay between when the user is counted on each platform.
Optimizely Feature Experimentation gives you complete control over when to send the impression event. In legacy projects (created before February 2021), you can call activate(), which sends the impression event and returns the variation, or getVariation(), which only returns the variation. If your activations happen too soon for your results to align, move the activation into the front end by adding the JavaScript SDK and call activate() at the end, just before your other analytics platform's JavaScript.
In Optimizely Feature Experimentation experiments, you can use event batching to batch impression and conversion events into a single payload before sending it to Optimizely Feature Experimentation.
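As an illustration, the SDKs expose batching through configuration options. The following is a minimal sketch assuming the JavaScript SDK; the batch size and flush interval values are arbitrary examples, not recommendations.

```javascript
// Minimal sketch of event batching, assuming the Optimizely JavaScript SDK.
const optimizelySdk = require('@optimizely/optimizely-sdk');

const optimizelyClient = optimizelySdk.createInstance({
  sdkKey: '<YOUR_SDK_KEY>', // assumption: replace with your real SDK key
  eventBatchSize: 10,       // flush once 10 events are queued...
  eventFlushInterval: 1000, // ...or after 1 second, whichever comes first
});
```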
In an Optimizely Feature Experimentation experiment running on a web server, you will want to get each user's variation when the user requests the page, and then use that variation key to respond with either the original page or a page with the experimental edits.
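The following is a minimal sketch of that server-side flow, assuming the Optimizely JavaScript SDK in a Node.js/Express app; the experiment key, user ID cookie, and template names are all hypothetical.

```javascript
// Minimal sketch of choosing which page to serve, assuming the Optimizely
// JavaScript SDK in an Express app. "homepage_experiment", the user_id cookie,
// and the template names are hypothetical.
const express = require('express');
const cookieParser = require('cookie-parser');
const optimizelySdk = require('@optimizely/optimizely-sdk');

const optimizelyClient = optimizelySdk.createInstance({ sdkKey: '<YOUR_SDK_KEY>' });
const app = express();
app.use(cookieParser());

app.get('/', (req, res) => {
  const userId = req.cookies['user_id']; // assumption: your own user identifier

  // getVariation() returns the variation without sending an impression event;
  // in a legacy project, activate() would also send the impression event here.
  const variationKey = optimizelyClient.getVariation('homepage_experiment', userId);

  if (variationKey === 'variation_with_edits') {
    res.render('homepage_variation'); // page containing the experimental edits
  } else {
    res.render('homepage_original');  // original page
  }
});

app.listen(3000);
```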
Get support
We want to make sure you have the help you need in case the tips above do not point you to anything conclusive. There are three types of integrations you can have with Optimizely Experimentation, and the best way to get support depends on which type you use. When submitting support requests for data discrepancies and integrations, please provide as much detail as possible, including screenshots and code samples. If you are having an issue with a custom integration, we will need to know every detail of how the integration is set up and which numbers you are comparing to Optimizely Experimentation's data.
We may not be able to investigate experiment results older than 180 days due to our internal data retention policies. Additionally, starting October 2022, experiment result data is only retained for 18 months. Read our data retention policy for more information.
First-party integrations
These are integrations that Optimizely Web Experimentation offers out of the box and that are supported by Optimizely Support. Please file a support ticket to get help with implementation and results discrepancies.
Custom analytics integrations
Custom analytics integrations are a developer API offered and supported by Optimizely for the Optimizely Web Experimentation and Optimizely Web Personalization products. Although we are not able to fully support results discrepancies that may occur using these integrations, we are happy to take a look at your implementation and provide guidance using our APIs.
Third-party integrations
Many of our partners offer integrations with Optimizely Experimentation. These integrations are supported by the partners who develop them. Please reach out to the partner’s support team for assistance.