You can maximize engagement by testing multiple versions of content against your campaign's target segment.
What to test
You can test any aspect of the content with up to five variations. We recommend using only two or three variations to get more conclusive results. Aspects you might consider changing in the variants include:
- Subject line – Changing the length, tone, or level of personalization.
- Sending profile – Sending the email from an employee email address instead of a generic company or department address.
- Offer – Leveraging different offer types, such as a whitepaper versus a video.
- Format – Experimenting with the content's formatting, such as paragraphs or bullet points.
- Image – Using different pictures or graphics to see if they have any effect on engagement.
Configure an A/B test
- Go to Campaigns.
- Select or create a One-Time, Behavioral, or Transactional campaign.
- Select a campaign touchpoint to access the editor.
- Click +Add Variation in the editor.
The A/B testing panel appears after you add a variation.
Use the following sections to configure options for the test.
Test type
There are currently two testing methods available:
Automatic – ODP sends different versions of content to a small sample of the campaign's segment and determines a winner based on preselected criteria. ODP then sends the winning content to all remaining customers.
Manual – ODP sends each variation to the percentage of the segment you specify, with the totals adding up to 100%. The test runs as a single phase with no secondary winner phase; you compare the versions and determine the winner yourself. If this option is selected, you can only configure the variation slider.
Test duration (automatic tests only)
This setting determines the length of the testing period. The percentages assigned to each variant determine the share of emails sent at the campaign start time. After the test duration, ODP determines the winner and sends the remainder of the segment the winning content.
You must allow enough time for your customers to act on the campaign (open and click emails) to determine a clear winner. We recommend a duration of at least 4 hours.
When testing recurring campaigns, ODP tests all emails from the start time through the duration of the test and determines the winning content to use for subsequent campaign runs.
Win criteria (automatic tests only)
ODP evaluates the test based on the win criteria you select:
- Opens – The count of unique recipients who have opened the email, divided by total sends.
- Clicks – The count of unique recipients who have clicked in the email, divided by total sends.
- Click Rate of Opens – The count of unique recipients who have clicked in the email, divided by the count of unique recipients who have opened the email.
Depending on your content changes, one metric might be more relevant than another. For example, you may want to use Opens if you have modified the subject line and preheader, and Click Rate of Opens for changes to the email's body or offer.
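To illustrate how the three metrics relate, here is a quick sketch in Python; the counts are hypothetical and invented for the example:

```python
# Hypothetical counts for one variant during the testing period.
sends = 10_000          # total emails sent for this variant
unique_opens = 2_300    # unique recipients who opened
unique_clicks = 460     # unique recipients who clicked

opens_rate = unique_opens / sends      # Opens: 2,300 / 10,000 = 23%
clicks_rate = unique_clicks / sends    # Clicks: 460 / 10,000 = 4.6%
ctor = unique_clicks / unique_opens    # Click Rate of Opens: 460 / 2,300 = 20%
```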
Default winner (automatic tests only)
A variation can only win if the difference in win criteria values between variations is statistically significant. When a test is inconclusive, ODP instead sends the variation designated as the default winner to the remainder of the segment.
For a test between two pieces of content, statistical significance is determined by calculating a Z-score for the two proportions of the test group that match the win criteria. More specifically, Message A is sent to n_a recipients in the testing period, with the fraction p_a matching the win criteria, and Message B is sent to n_b recipients, with the fraction p_b matching the win criteria. The Z-score measures the confidence that the difference between p_a and p_b represents a true difference in outcomes, within some margin of error, as opposed to a chance result. A Z-value of at least 1, corresponding to a 68% confidence level, is required for a statistically significant win.
If a test has more than two variants, ODP compares the results from each piece of content against every other. A variant is the statistically significant winner only if it is statistically significant against all other content tested.
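ODP does not publish its exact formula, but the description above matches a standard two-proportion Z-test. The following is a minimal sketch, assuming a pooled standard error and the |Z| >= 1 threshold described above; the numbers are hypothetical.

```python
import math

def two_proportion_z(p_a: float, n_a: int, p_b: float, n_b: int) -> float:
    """Z-score for the difference between two proportions, using a pooled
    standard error (an assumption; ODP does not publish its exact formula)."""
    pooled = (p_a * n_a + p_b * n_b) / (n_a + n_b)   # combined success rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se if se else 0.0

# Hypothetical test: A opens 250 of 1,000 sends (25%); B opens 220 of 1,000 (22%).
z = two_proportion_z(0.25, 1_000, 0.22, 1_000)
if abs(z) >= 1:  # |Z| >= 1 is the 68% confidence threshold described above
    print(f"z = {z:.2f}: the better-performing variant wins")
else:
    print(f"z = {z:.2f}: inconclusive, so the default winner sends instead")
```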
Variation slider
For automatic tests - The slider sets the percentage of the campaign segment that receives each variation during the testing period. For a one-time automatic test, we recommend 10% for each variant to increase the likelihood that the test result is statistically significant. For a small segment (below 100,000 customers), consider increasing this percentage to 20-25%. You should try to use equal percentages for each piece of content.
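As a concrete (hypothetical) illustration, a one-time automatic test on a 200,000-customer segment with the recommended 10% per variant breaks down like this:

```python
segment_size = 200_000              # hypothetical campaign segment
split = {"A": 0.10, "B": 0.10}      # recommended equal 10% per variant

test_sends = {v: int(segment_size * pct) for v, pct in split.items()}
winner_phase = segment_size - sum(test_sends.values())

print(test_sends)    # {'A': 20000, 'B': 20000} sent during the testing period
print(winner_phase)  # 160000 customers receive the winning content afterward
```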
For an automatic test on a recurring campaign, ODP tests 100% of email sends during your testing period, and you can shift what percentage of enrolled customers receives each variation.
For manual tests - ODP targets 100% of your segment and lets you change the breakdown among the variations. You should try to use equal percentages for each piece of content.
Considerations
A/B testing is not available in every situation:
- A/B testing is not available for API-triggered push campaigns.
- Only manual testing is available for event-triggered campaigns.
The campaign audience is calculated differently based on the type of A/B test. For an automatic test, the campaign segment is determined at the campaign start time (when the testing period starts) and again at the time of the winning phase. For example, if customer A is not in the campaign segment at the campaign start time but is in the segment at the time of the winning phase, they become a target in the winning phase.
Review results
- Go to Campaigns.
- Select the A/B tested campaign you wish to review.
- Select a campaign touchpoint that has been tested to access its performance metrics.
For automatic tests – If you used an automatic test, you can compare the results side by side. A label specifies whether a winner was determined during the testing period. A variant has a Winner label if its result was statistically significant.
A variant has a Default Winner label if ODP could not determine a clear winner during the testing period.
For manual tests - If you used a manual test, you can compare the results side by side. Winner labels are not used because ODP did not select a winner. Instead, the relative performance of the variants determines your winner and any takeaways for future campaigns.