This topic describes how to:
- Define how monthly active users (MAUs) are counted from a technical perspective
- Distinguish between monthly active users (MAUs) and impressions
- Forecast monthly active users and view monthly active users consumption
Starting in September 2020, we introduced a simplified usage billing component: Monthly Active Users (MAUs), which replaces impressions.
Your Monthly Active Users (MAUs) count is the number of unique users included in a decision or tracking event. It is a measure of the overall traffic where you are using the snippet, APIs, or SDKs to do something, specifically:
- Experiment Evaluation
- Personalization Campaign Evaluation
- Feature Flag/Rollout Evaluation
- Event Tracking
Unlike impressions, MAUs do not measure what percentage of that traffic you are experimenting on; every user that gets evaluated is counted. This lets you run large-scale experiments at 100% traffic to reach statistical significance more quickly.
This article provides an overview of how MAUs work at Optimizely Experimentation from a technical perspective.
Optimizely Feature Experimentation
In Optimizely Feature Experimentation, a monthly active user is counted each time a decision or tracking event is sent for a unique user ID:
- When a Decide method is called and a decision event (also called an impression) is triggered
- When the Track Event method is called and a conversion event is triggered
Users are counted even if they receive a disabled flag due to a Decide method because a decision event was still sent.
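The counting rule above can be sketched as a small deduplication over user IDs. This is an illustrative sketch only, not Optimizely's actual implementation; the event structure is an assumption made for the example.

```python
# Illustrative sketch (NOT Optimizely internals): an MAU is one unique
# user ID seen in any decision or tracking event during the month.
def count_maus(events):
    """Count unique user IDs across decision and tracking events."""
    return len({e["user_id"] for e in events
                if e["type"] in ("decision", "track")})

events = [
    {"user_id": "u1", "type": "decision"},  # Decide call -> decision event
    {"user_id": "u1", "type": "track"},     # Track call -> conversion event
    {"user_id": "u2", "type": "decision"},  # disabled flag still sends a decision
]
print(count_maus(events))  # 2 -- u1 is deduplicated across its two events
```

Note that the user who received a disabled flag still counts, because a decision event was still sent for that user ID.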
Optimizely Feature Experimentation pre Flags
In Optimizely Feature Experimentation pre Flags, a monthly active user is counted each time a decision or tracking event is sent for a unique user ID, such as when:
- The optimizelyClientInstance.activate() method is used and the visitor is bucketed into a variation of an A/B test (experiment evaluation).
- The optimizelyClientInstance.isFeatureEnabled() method is used and the visitor is bucketed into a feature test (feature flag/rollout evaluation).
- The optimizelyClientInstance.track() method is used (tracking event).
Users are counted even if they receive a disabled flag due to isFeatureEnabled() because a decision event was still sent.
Optimizely Web Experimentation
Optimizely Web Experimentation deduplicates users as long as the user ID is the same. For example, if a user visits a page with one project's snippet and then a page with another project's snippet, only one MAU is counted, as long as it is the same unique user ID.
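The cross-project deduplication can be sketched as follows. This is an illustrative model under assumed data shapes, not how Optimizely actually stores decisions.

```python
# Illustrative sketch: MAUs deduplicate across projects, so the same
# user ID hitting two different projects' snippets counts once.
def maus_across_projects(decisions):
    """Count unique user IDs, ignoring which project produced the decision."""
    return len({user_id for _project, user_id in decisions})

decisions = [
    ("project_a", "visitor-123"),  # page with project A's snippet
    ("project_b", "visitor-123"),  # page with project B's snippet
]
print(maus_across_projects(decisions))  # 1
```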
If you use both Optimizely Web Experimentation and Optimizely Feature Experimentation, you can override anonymous Optimizely Web Experimentation user IDs with known Optimizely Feature Experimentation user IDs to avoid overcounting. For more information on including Web IDs, see our documentation on Bring your own ID (BYOID) in Optimizely Web. Reach out to your Optimizely Technical Support Engineer for more information.
Each time an Optimizely Feature Experimentation experiment or a page within an Optimizely Web Experimentation experiment is activated, a decision request is sent.
In the request payload, the decision attribute indicates the experiment that it applies to.
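For illustration, a decision request payload might look roughly like the following. This is a simplified sketch loosely based on Optimizely's Event API; the field names and values here are assumptions for the example, so consult the Event API reference for the exact schema.

```json
{
  "account_id": "12345",
  "visitors": [
    {
      "visitor_id": "user-123",
      "snapshots": [
        {
          "decisions": [
            {
              "campaign_id": "1111",
              "experiment_id": "2222",
              "variation_id": "3333"
            }
          ],
          "events": [
            {
              "entity_id": "1111",
              "type": "campaign_activated",
              "timestamp": 1600000000000
            }
          ]
        }
      ]
    }
  ]
}
```

The experiment_id in each decision is what ties the request to a specific experiment; the visitor_id is what gets deduplicated into the MAU count.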
Let us walk through an example scenario. There are three multipliers:
- Pages (as defined in Optimizely Experimentation)
- Page refreshes
- SDK decisions
Imagine that your company, Attic and Button, is experimenting on www.atticandbutton.us. Consider a visitor who starts by visiting the Attic and Button homepage, where there are three experiments running. One of these experiments has two Optimizely Web Experimentation pages that both target the homepage:
- Homepage: https://www.atticandbutton.us/ (simple match)
- Homepage: https://www.atticandbutton.us/ (simple match)
Three bucketing decisions are made on whether to include that user in the experiment or not, but since the user ID is unique, this will only count towards one monthly active user (MAU). This is in contrast to impressions, which would have counted four impressions for this one user.
If the visitor refreshes the page, they will still only count as one monthly active user. This is in contrast to impressions, which would have generated another four impressions, making the total eight impressions for this one particular user.
Now, suppose that you are running a search algorithm experiment with Optimizely Feature Experimentation on the homepage too. When a visitor types a search term, the results are refreshed without reloading the page. The Optimizely Feature Experimentation SDK makes a decision for a variation every time a new search is done. This means that if a visitor searches for "shirts," changes their search to "denim shirts," then changes their search again to "button down shirts," only one monthly active user would be counted since it is based on a unique user ID. This is in contrast to impressions, which would have counted three impressions, making the total usage count now 11 impressions.
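The walkthrough above can be tallied in a few lines. The arithmetic mirrors the scenario exactly; the data structure is just for illustration.

```python
# Tally of the Attic and Button walkthrough: impressions accumulate per
# interaction, while MAUs only count unique user IDs.
visits = [
    # (user_id, impressions generated by the interaction)
    ("visitor-1", 4),  # first homepage load: 3 experiments, one with 2 pages
    ("visitor-1", 4),  # page refresh: the same 4 impressions again
    ("visitor-1", 3),  # 3 in-page searches: one SDK decision each
]
impressions = sum(n for _, n in visits)
maus = len({user for user, _ in visits})
print(impressions, maus)  # 11 impressions, but only 1 MAU
```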
Verifying monthly active users with results export
Optimizely Experimentation uses the server timestamp to calculate monthly active users, as opposed to the timestamp on the client device where the monthly active user originated. Doing so makes it possible to accurately verify monthly active users all the way down to the experiment level.
You can use Optimizely Experimentation's Enriched Events Export to get a complete list of all monthly active users that occurred within a specific time period. You can then compare that information to your invoice, or determine whether any of your experiments are generating more monthly active users than they should be. To learn how to access that data, check out our developer documentation article on data export services in Optimizely Experimentation.
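A verification pass over exported events might look like the sketch below. The column names (visitor_id, timestamp) are assumptions for the example and may not match the actual Enriched Events Export schema; the key point is that grouping uses the server-side timestamp.

```python
# Hypothetical verification sketch: group exported events by the month of
# their SERVER timestamp and count unique visitors per month.
from collections import defaultdict
from datetime import datetime, timezone

def monthly_maus(rows):
    """Return {(year, month): unique visitor count} from exported events."""
    users_by_month = defaultdict(set)
    for row in rows:
        # Server timestamp in milliseconds, not the client device's clock.
        ts = datetime.fromtimestamp(row["timestamp"] / 1000, tz=timezone.utc)
        users_by_month[(ts.year, ts.month)].add(row["visitor_id"])
    return {month: len(users) for month, users in users_by_month.items()}

rows = [
    {"visitor_id": "u1", "timestamp": 1599000000000},  # September 2020
    {"visitor_id": "u1", "timestamp": 1599100000000},  # September 2020, same user
    {"visitor_id": "u2", "timestamp": 1601600000000},  # October 2020
]
print(monthly_maus(rows))  # {(2020, 9): 1, (2020, 10): 1}
```

You can then compare each month's count against your invoice, or break the grouping down further per experiment to spot anything generating more MAUs than expected.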
Forecasting MAUs
To forecast MAUs, we recommend asking the following questions:
- Where will you be using Optimizely Experimentation?
These could span multiple channels, regions, or business units (for example, Android, iOS, and website).
- How many unique users/visitors do you have per month?
Count unique users/visitors, not page views, and exclude bot traffic. We recommend getting these numbers from internal analytics. Provide the average across the year and note the peak.
- How complete is your Optimizely Experimentation implementation?
If the SDK/snippet runs for most users (for example, feature flags in app navigation or experiments on a product detail page layout), this should be 90-100%.
If Optimizely Experimentation is only used on a small part of the site (for example, personalizing a few landing pages or rolling out minor features only), this could be as low as 5-10%.
- How much annual growth do you expect?
Year over year, how much more traffic do you expect?
MAUs are a shared pool between the Web and Full Stack products and roll over to the next month if not used.
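The questions above reduce to simple arithmetic. The function and sample numbers below are hypothetical, intended only to show how the answers combine into an estimate.

```python
# Hedged forecasting sketch: MAU estimate from the three answers above.
def forecast_maus(monthly_unique_users, coverage, annual_growth):
    """Estimate MAUs: unique users x implementation coverage x growth factor."""
    return monthly_unique_users * coverage * (1 + annual_growth)

# e.g. 2M unique visitors/month, Optimizely on 90% of traffic, 10% YoY growth
print(int(forecast_maus(2_000_000, 0.90, 0.10)))  # 1980000
```

Run the same calculation with your peak month's traffic as well, so the forecast covers seasonal highs and not just the annual average.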