Impressions in Optimizely Experimentation

This topic covers:

  • How impressions are counted, from a technical perspective
  • The difference between monthly unique visitors (MUVs) and impressions
  • How to view your impression consumption and decide when you need more impressions

In September 2020, Optimizely Experimentation introduced a simplified usage billing component: Monthly Active Users (MAUs), which replaces impressions.

Your Monthly Active Users (MAUs) count is the number of unique users involved in a decision or tracking event. It measures the overall traffic where you are using the snippet, APIs, or SDKs to do something, specifically:

  • Experiment Evaluation
  • Personalization Campaign Evaluation
  • Feature Flag/Rollout Evaluation
  • Event Tracking

Unlike impressions, MAUs do not measure what percentage of that traffic you are experimenting on: every user that gets evaluated is counted. This lets you run large-scale experiments at 100% traffic to reach statistical significance more quickly.

This article provides a technical overview of how impressions worked in Optimizely Experimentation. It is preserved only to distinguish how Optimizely Experimentation previously counted billing from how it now uses MAUs. Optimizely Experimentation's approach to impressions is similar to that used by online media, but the nature of our technology introduces some key differences.

Optimizely Feature Experimentation

Optimizely Feature Experimentation is the latest version of Optimizely's full stack experimentation product and has been the default project type since February 2021.

Refer to the developer documentation for information about when impressions and decisions are counted for Optimizely Feature Experimentation.

Optimizely Full Stack Experimentation

Optimizely Full Stack Experimentation projects are legacy full stack experimentation projects created before February 2021.

In Optimizely Full Stack Experimentation, an impression is counted each time an experiment is activated and a decision event is sent:

  • When the optimizelyClientInstance.activate() or optimizelyClientInstance.isFeatureEnabled() method is used.

  • In iOS or Android SDK 1.x experiments, when true is passed with the activateExperiment argument in the live variable getter methods. Returning Feature variables with 2.x SDKs does not send a decision event.

Impressions are only counted for visitors who are bucketed into a variation in an experiment. The activate() call alone does not generate an impression.

Why?
 
Let us say that you are running an experiment on your homepage. In theory, every visitor to your homepage triggers the activate() call. However, before an impression is created, the activate() method does two things:
  1. Confirms that the visitor meets the audience conditions specified when the experiment was set up.
  2. Checks whether the visitor falls within the percentage of traffic you allocated to the experiment.

If the visitor meets the audience conditions, the activate() call will assign a variation for the visitor. An impression is counted only for visitors who receive one of these variation assignments.

If the visitor does not meet the audience conditions, the activate() call will assign a nil or NULL variation for the visitor, depending on the language. An impression is not counted for visitors who receive a nil or NULL variation assignment, which means these visitors do not count against your allotted number of impressions.

All rollouts are excluded from impression counts.
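
The activation flow above can be sketched as follows. This is a minimal Python illustration of the two checks, not the actual SDK implementation; the experiment data model and the MD5-based bucketing are assumptions for the example.

```python
import hashlib


def activate(experiment, user_id, attributes):
    """Sketch of the checks performed before an impression is counted.

    `experiment` is a hypothetical dict with keys "key", "audience"
    (a predicate over attributes), "traffic_allocation" (a percentage),
    and "variations" -- not the real SDK's experiment model.
    """
    # 1. Audience check: the visitor must meet the audience conditions
    #    specified when the experiment was set up.
    if not experiment["audience"](attributes):
        return None  # no variation assigned, no impression counted

    # 2. Traffic allocation: hash the user into one of 10,000 buckets and
    #    check that the bucket falls within the allocated percentage.
    digest = hashlib.md5(f"{experiment['key']}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000
    if bucket >= experiment["traffic_allocation"] * 100:  # e.g. 50% -> 5000
        return None  # visitor excluded, no impression counted

    # Visitor is bucketed into a variation: a decision event is sent
    # and one impression is counted.
    return experiment["variations"][bucket % len(experiment["variations"])]
```

Only the final return path corresponds to a billable impression; both None paths mirror the nil/NULL variation assignment described above.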

Optimizely Web Experimentation

In Optimizely Web Experimentation, an impression is counted for every Optimizely Web Experimentation page, each time a visitor sees an experience as the result of an experiment or Optimizely Web Personalization campaign.

Impressions are the unit of measurement for usage in Optimizely Feature Experimentation and Optimizely Web Experimentation, but Optimizely Web Experimentation includes an extra layer: a single experiment can run on multiple pages. Every time a visitor activates a page within an experiment, Optimizely Web Experimentation counts an impression.

Sometimes, an experiment may have two pages that both target the same URL (this is common for single-page applications). When a visitor activates both pages during the same visit, this counts as two impressions.

Optimizely Web Experimentation processes impressions when they are sent, not necessarily when they occur. In practice, impressions are sent quickly after they are generated. However, in cases where something in your pipeline is preventing Optimizely Web Experimentation from sending impressions in a timely manner, they may show up on your bill long after the experiment that generated them has concluded. On your bill, these old impressions are not separated out to distinguish them from more current impressions.

Impressions are not counted for visitors bucketed into the holdback:

  • In an experiment with the traffic allocation set to less than 100%, some visitors will be in the holdback (isLayerHoldback will be set to true). These visitors won’t see the experiment, and no impressions are counted for these visitors.

  • In Optimizely Web Personalization with a holdback that’s greater than 0%, visitors in the holdback won’t see a personalized experience. No impressions are counted for visitors bucketed into the campaign holdback.
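
Putting the page and holdback rules together, a visit's impression count can be sketched like this. The data model here is hypothetical; it only illustrates that each page activation counts once and holdback visitors count zero.

```python
def count_visit_impressions(experiments):
    """Sketch of Web impression counting for one visit.

    `experiments` is a hypothetical list of dicts, each with
    "activated_pages" (pages the visitor activated for that experiment)
    and "in_holdback" (whether the visitor was bucketed into the
    holdback, i.e. isLayerHoldback is true).
    """
    total = 0
    for exp in experiments:
        if exp["in_holdback"]:
            continue  # holdback visitors see no experience; nothing counted
        total += len(exp["activated_pages"])  # one impression per activation
    return total
```

Two pages of the same experiment activated in one visit contribute two impressions, while a holdback experiment contributes none, matching the rules above.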

Decision event

Each time an Optimizely Web Experimentation experiment is activated, a decision request is sent. Decision requests look like this:

decisionrequest.png

In the request payload, the decision attribute indicates the experiment that it applies to.

Example

Let us walk through an example scenario. There are three multipliers:

  • Experiments

  • Pages (as defined in Optimizely Experimentation)

  • Pageviews

Imagine that your company, Attic and Button, is experimenting on www.atticandbutton.us. Consider a visitor who starts by visiting the Attic and Button homepage, where there are three experiments running. One of these experiments has two Optimizely Experimentation pages that both target the homepage:

  • Experiment 1

  • Experiment 2

  • Experiment 3

Since four page activations occur when the visitor views the homepage, the visit counts as four impressions. Impressions are shown in green below.

impressions_page_load.png

If the visitor refreshes the page, another four impressions are counted. Attic and Button’s account usage now totals eight impressions.

impressions_page_reload.png

Now, suppose that you are running a search algorithm experiment with Optimizely Feature Experimentation on the homepage too. When a visitor types a search term, the results are refreshed without reloading the page. The Optimizely Feature Experimentation SDK makes a decision for a variation every time a new search is done. This means that if a visitor searches for "shirts," changes their search to "denim shirts," then changes their search again to "button down shirts," another three impressions are counted. The total usage count is now 11 impressions.

impressions_search_fs.png
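
The running tally in this example can be replayed in a few lines. This is a hypothetical bookkeeping sketch; in particular, which of the three experiments has two pages is an assumption for illustration.

```python
# Page activations on one homepage load (experiment, activations).
# We assume here, for illustration, that Experiment 3 is the one with
# two pages that both target the homepage.
homepage_activations = [
    ("Experiment 1", 1),
    ("Experiment 2", 1),
    ("Experiment 3", 2),  # both pages target the homepage
]

first_load = sum(n for _, n in homepage_activations)  # 4 impressions
after_refresh = first_load * 2                        # same activations again -> 8

# Each new search triggers one Feature Experimentation decision:
# "shirts", "denim shirts", "button down shirts"
searches = 3

total = after_refresh + searches
print(total)  # 11
```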

Verifying impressions with results export

Optimizely Experimentation uses the server timestamp to calculate impressions, as opposed to the timestamp on the client device where the impression originated. Doing so makes it possible to accurately verify impressions all the way down to the experiment level.

You can use Optimizely Experimentation's Enriched Events Export to get a complete list of all monthly active users for a specific time period. You can then compare that information to your invoice, or determine whether any of your experiments are generating more monthly active users than they should. To learn how to access that data, see our developer documentation article on data export services in Optimizely Experimentation.
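
As a rough illustration of that verification step, exported decision events can be tallied per experiment and per unique visitor. The field names below are illustrative, not the actual export schema.

```python
# Hypothetical exported decision events (illustrative field names).
events = [
    {"experiment_id": "exp_1", "visitor_id": "u1", "timestamp": "2020-06-01T12:00:00"},
    {"experiment_id": "exp_1", "visitor_id": "u2", "timestamp": "2020-06-01T12:00:05"},
    {"experiment_id": "exp_2", "visitor_id": "u1", "timestamp": "2020-06-01T12:00:07"},
]

per_experiment = {}   # impressions generated by each experiment
unique_visitors = set()  # distinct users seen in the period
for e in events:
    per_experiment[e["experiment_id"]] = per_experiment.get(e["experiment_id"], 0) + 1
    unique_visitors.add(e["visitor_id"])

print(per_experiment)        # impressions per experiment
print(len(unique_visitors))  # unique users in the export window
```

Comparing these tallies against your invoice shows whether any experiment is generating more usage than expected.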

How impressions are counted in billing

On June 23, 2020, Optimizely Experimentation began rolling out updated impression logic; it takes effect automatically for your account in the next billing month. The new logic introduces fixed-interval bucketing, which dedupes all impressions received within fixed 5-second intervals for each user in an experiment, based on the received timestamp of the event. Optimizely Experimentation performs this deduplication; no action is required on your part to receive the benefit. Depending on your implementation, this change will have either a neutral or positive impact on your impression consumption: by reducing the impressions billed per experiment, it lets you experiment more.
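
Fixed-interval bucketing can be sketched as follows. This is a simplified Python model, not Optimizely's billing code; the event tuples are a hypothetical representation of received impressions.

```python
def dedupe_impressions(events, interval=5):
    """Sketch of fixed-interval bucketing.

    `events` is a hypothetical list of (user_id, experiment_id,
    received_ts_seconds) tuples. Impressions received within the same
    fixed `interval`-second window, for the same user and experiment,
    are deduped to one billable impression.
    """
    seen = set()
    billable = 0
    for user_id, experiment_id, received_ts in events:
        # Integer-divide the received timestamp into a fixed window.
        bucket = (user_id, experiment_id, received_ts // interval)
        if bucket not in seen:
            seen.add(bucket)
            billable += 1  # first impression in this window is billed
    return billable
```

Note that the windows are fixed (0-4s, 5-9s, ...) rather than sliding, so two impressions 3 seconds apart can still land in different windows if they straddle a boundary.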

MUVs vs. impressions - Legacy

Optimizely Experimentation started billing customers based on impressions in 2017 to simplify billing and volume tracking. As of February 1, 2018, no new subscriptions use MUVs; however, some legacy subscriptions may still use them.

To find out whether your subscription is billed based on MUVs or impressions, navigate to Account Settings > Plan. Your MUVs are listed under Monthly Usage Information.

monthly-usage-info.jpg

View the full definitions for monthly active users and impressions and check your specific Order Form with Optimizely Experimentation for details.

The advantages of impressions over MUVs include:

  • Impressions are sold in buckets.

  • Customers can purchase an annual allocation of impressions.

  • Impressions are a pure metric, with no deduplication like with MUVs and no requirement for a cookie to track visitors. In addition, impressions work very similarly across Optimizely Experimentation products.

  • The impression allocation is typically shared for all the Optimizely Experimentation products that a customer buys. For example, if an Optimizely Feature Experimentation and Optimizely Web Experimentation customer buys 25 million impressions for the year, it doesn’t matter whether they use those impressions for Optimizely Feature Experimentation or Optimizely Web Experimentation experiments.

  • Impressions align well with the online media models that our customers commonly use.