Configure A/B test campaign touchpoints


You can test multiple versions of your campaign content against your target customers in Optimizely Data Platform (ODP) to determine which version performs better. This approach can help you make data-driven decisions to optimize user experience and improve outcomes. 

What to test

You can test any part of the campaign content by creating up to five variations. Using two or three variations typically yields more conclusive results. You can change the following:

  • Subject line – Change the length, tone, or level of personalization.
  • Sender's profile – Send the email from an employee email address instead of a generic company or department address.
  • Offer – Use different offer types, like a whitepaper or a video.
  • Format – Use different content formatting, like paragraphs or bullet points.
  • Image – Use different pictures or graphics to determine if they affect engagement.

Configure an A/B test

  1. Go to Activation > Overview > Create New Campaign.
  2. Select the campaign type (one-time, behavioral, or transactional).
  3. Select an existing campaign or click Create From Scratch.
  4. Expand Touchpoints and edit the touchpoint.
  5. Click Add Variation.
  6. Expand A/B Testing and select the test type (automatic or manual testing). 
  7. For automatic testing, set the test duration, win criteria, default winner, and variation slider. For manual testing, set only the variation slider.
  8. Design your campaign content with up to five variations. 
  9. Click Save.

Test type

You can use the following two testing methods for your campaigns:

  • Automatic – ODP sends different content versions to a small segment sample and determines a winner based on preselected criteria. ODP sends the winning content to the remaining customers.
    This two-phased approach means that some customers receive content before others. A time-sensitive campaign is not a good fit for this testing method.
  • Manual – ODP sends each variation to a percentage of the segment, with the percentages totaling 100%. The touchpoint runs as a single phase with no secondary winner phase, and ODP does not select a winner. Instead, you compare the variations' performance to determine the winner yourself. If you select this option, you can configure only the variation slider.

Test duration

This setting determines the length of the testing period. The percentages assigned to each variation determine the share of emails sent at the campaign start time. After the testing period, ODP determines the winner and sends the winning content to the remainder of the segment.

You must allow enough time for your customers to act on the campaign (open and click emails) so that ODP can determine a clear winner. Set the duration to at least four hours.

When testing recurring campaigns, ODP tests all emails from the start time through the duration of the test and determines the winning content to use for subsequent campaign runs.

Win criteria

ODP evaluates the test based on the following winning criteria: 

  • Opens – The count of unique recipients who opened the email, divided by total sends.
  • Clicks – The count of unique recipients who clicked the email, divided by total sends.
  • Click Rate of Opens – The count of unique recipients who clicked the email, divided by the count of unique recipients who opened it.

Depending on your content changes, one metric might be more relevant than another. For example, use Opens if you modified the subject line and preheader, and use Click Rate of Opens for changes to the email's body or offer.
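
The three criteria reduce to simple ratios of unique-recipient counts. As a rough sketch (the function name and counts are illustrative, not part of ODP):

```python
def win_criteria(sends: int, unique_opens: int, unique_clicks: int) -> dict:
    """Return the three win criteria rates described above."""
    return {
        "opens": unique_opens / sends,                  # open rate per send
        "clicks": unique_clicks / sends,                # click rate per send
        "click_rate_of_opens": unique_clicks / unique_opens,
    }

# Example: 10,000 sends, 2,500 unique opens, 500 unique clicks
rates = win_criteria(sends=10_000, unique_opens=2_500, unique_clicks=500)
# opens = 0.25, clicks = 0.05, click_rate_of_opens = 0.2
```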

Default winner 

A variation can only be the winner if the difference between the variations' win criteria values is statistically significant. When a test is inconclusive, ODP sends the variation you selected as the default winner to the remainder of the segment.

For a test between two pieces of content, ODP determines statistical significance by calculating a Z-score from the two proportions of the test groups that match the win criteria. More specifically:

  • Message A is sent to n_A recipients during the testing period, with the fraction p_A matching the win criteria.
  • Message B is sent to n_B recipients during the testing period, with the fraction p_B matching the win criteria.

The Z-score measures the confidence that the difference between p_A and p_B reflects a true difference in outcomes rather than a chance result. For a statistically significant win, the Z-score must be at least 1, which corresponds to a 68% confidence level.

If a test has more than two variations, ODP compares the results of each piece of content against the others. ODP declares a variation the winner only if it outperforms every other tested variation with statistical significance.
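
The two-proportion comparison above can be sketched with a standard pooled Z-test. This is a generic statistical illustration, not ODP's actual implementation; the threshold of 1 follows the confidence level stated above, and the example counts are made up.

```python
import math

def two_proportion_z(wins_a: int, n_a: int, wins_b: int, n_b: int) -> float:
    """Pooled two-proportion Z-score for the fractions matching the win criteria."""
    p_a, p_b = wins_a / n_a, wins_b / n_b
    p = (wins_a + wins_b) / (n_a + n_b)               # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error
    return (p_a - p_b) / se

# Message A: 300 of 1,000 recipients matched the win criteria (p_A = 0.30)
# Message B: 250 of 1,000 recipients matched the win criteria (p_B = 0.25)
z = two_proportion_z(300, 1_000, 250, 1_000)
significant = abs(z) >= 1  # Z-score of at least 1, per the threshold above
```

Here the Z-score comes out near 2.5, well above the threshold, so the difference would count as statistically significant.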

Variation Slider

  • For automatic tests – The slider indicates the percentage of the campaign segment that each variation receives during the testing period. Set each variation to 10% for a one-time automatic test to increase the likelihood of a statistically significant result. For a small segment (fewer than 100,000 customers), consider increasing this percentage to 20-25%. Use equal percentages for each variation. For automatic testing on recurring campaigns, ODP tests all email sends during your testing period (100%), and you can adjust the percentage of enrolled customers who receive each variation.
  • For manual tests – ODP targets 100% of your segment and lets you change the breakdown. Use equal percentages for each piece of content.

Consider exceptions

A/B testing is not available or partially available in the following scenarios:

  • A/B testing is not available for API-triggered push campaigns.
  • Only manual testing is available for event-triggered campaigns.

The campaign audience is calculated differently based on the type of A/B test. ODP determines the campaign segment when the testing period starts and again during the winning phase. For example, if customer A is not in the campaign segment at the campaign start time but is in the segment during the winning phase, they become a target in the winning phase.

Review results

  1. Go to Activation > Overview.
  2. Select the A/B tested campaign you want to review.
  3. Select a campaign touchpoint that ODP tested to access its performance metrics.
    • For automatic tests – You can compare the results side by side. A label specifies whether ODP determined a winner during the testing period. A variation has a Winner label if its result is statistically significant, and a Default Winner label if ODP could not determine a clear winner.
    • For manual tests – You can compare the results side by side. The touchpoint has no winner labels because ODP does not select a winner for manual tests. Instead, the relative performance of the variations determines your winner and any takeaways for future campaigns.