You can maximize engagement by testing multiple versions of content against the campaign target in Optimizely Data Platform (ODP).
What to test
You can test any aspect of the content with up to five variations. Optimizely recommends using only two or three variations, which makes conclusive results more likely. Things that you might consider changing in the variants include:
- Subject line – Change the length, tone, or level of personalization.
- Sending profile – Send the email from an employee email address instead of a generic company or department address.
- Offer – Use different offer types, such as a whitepaper or a video.
- Format – Experiment with the content formatting, such as paragraphs or bullet points.
- Image – Use different pictures or graphics to determine if they have any effect on engagement.
Configure an A/B test
- Go to Activation > Engage > Create New Campaign.
- Select One-Time, Behavioral, or Transactional.
- Select an existing campaign or click Create From Scratch.
- Expand the Touchpoints section and edit the touchpoint.
- Click Add Variation.
- Expand the A/B Testing section that displays.
Use the following sections to configure options for the test.
Test type
There are currently two testing methods available:
- Automatic – ODP sends different versions of content to a small sample of the campaign's segment and determines a winner based on preselected criteria. ODP then sends the winning content to all remaining customers. This two-phase approach means that some customers receive content before others, so a time-sensitive campaign is not a good fit for this testing method.
- Manual – ODP sends each variation to the specified percentage of the segment, with the percentages totaling 100%. The send is a single phase with no follow-up winner phase; you compare the variations' performance and determine the winner yourself. If you select this option, you can only configure the variation slider.
Test duration (automatic tests only)
This setting determines the length of the testing period. The percentages assigned to each variant determine the share of emails sent at the campaign start time. After the test duration, ODP determines the winner and sends the remainder of the segment the winning content.
You must allow enough time for your customers to act on the campaign (open and click emails) to determine a clear winner. Optimizely recommends a duration of at least 4 hours.
When testing recurring campaigns, ODP tests all emails from the start time through the duration of the test and determines the winning content to use for subsequent campaign runs.
Win criteria (automatic tests only)
ODP evaluates the test based on the selected win criteria:
- Opens – The count of unique recipients who have opened the email, divided by total sends.
- Clicks – The count of unique recipients who have clicked in the email, divided by total sends.
- Click Rate of Opens – The count of unique recipients who have clicked in the email, divided by the count of unique recipients who have opened the email.
Depending on your content changes, one metric might be more relevant than another. For example, you may want to use opens if you have modified the subject line and preheader. You may want to use the click rate of opens for changes to the email's body or offer.
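The three metrics above can be sketched as simple ratios. The counts below are hypothetical examples, not values from an ODP API:

```python
# Hypothetical counts for a single variant during the testing period.
total_sends = 1000
unique_opens = 250    # unique recipients who opened the email
unique_clicks = 50    # unique recipients who clicked in the email

open_rate = unique_opens / total_sends              # "Opens" criterion
click_rate = unique_clicks / total_sends            # "Clicks" criterion
click_rate_of_opens = unique_clicks / unique_opens  # "Click Rate of Opens"

print(open_rate, click_rate, click_rate_of_opens)
```

Note that "Click Rate of Opens" divides by opens rather than sends, which is why it suits tests of the email body, while "Opens" suits subject-line tests.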
Default winner (automatic tests only)
A variation can only be the winner if the difference in win criteria values between the variations is statistically significant. When a test is inconclusive, ODP sends the default winner to the remainder of the segment instead.
For a test between two pieces of content, statistical significance is determined by calculating a Z-score of the two proportions of the test group that match the win criteria. More specifically, Message A is sent to n_a recipients in the testing period with the fraction p_a matching the win criteria, and Message B is sent to n_b recipients with the fraction p_b matching the win criteria. The Z-score measures the confidence that the difference between p_a and p_b reflects a true difference in outcomes, within some margin of error, rather than a chance outcome. A Z-value of at least 1, corresponding to a 68% confidence level, is required for a statistically significant win.
If a test has more than two variants, ODP compares the results from each piece of content against each other. A variant wins only if its result is statistically significant against every other variant tested.
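The comparison described above can be sketched as a standard pooled two-proportion Z-test, which is consistent with the description but is not necessarily the exact formula ODP uses internally; the example numbers are hypothetical:

```python
from math import sqrt

def z_score(p_a: float, n_a: int, p_b: float, n_b: int) -> float:
    """Two-proportion Z-score with a pooled standard error.

    p_a, p_b: fraction of each test group matching the win criteria.
    n_a, n_b: recipients of each message during the testing period.
    """
    p_pool = (p_a * n_a + p_b * n_b) / (n_a + n_b)       # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Example: Message A opened by 30% of 500 recipients, Message B by 25% of 500.
z = z_score(0.30, 500, 0.25, 500)
significant = abs(z) >= 1  # the article's threshold: Z >= 1 (~68% confidence)
```

With these numbers the Z-score exceeds 1, so the difference would count as statistically significant under the stated threshold.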
Variation slider
For automatic tests – The slider indicates the percentages of the campaign segment to send each variation during the testing period. Optimizely recommends 10% for a one-time automatic test to increase the likelihood of the test result being statistically significant. For a small segment (below 100k), consider increasing this percentage to 20-25%. You should use equal percentages for each variation.
For an automatic test on a recurring campaign, ODP sends to 100% of the segment during the testing period, and you can adjust what percentage of enrolled customers receives each variation.
For manual tests – ODP targets 100% of your segment and allows you to change the breakdown. Try to use equal percentages for each piece of content.
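The slider arithmetic above can be illustrated as follows. The segment size and splits are hypothetical, chosen to match the recommended 10% automatic-test share:

```python
# Automatic test: two variants at 10% of the segment each during the
# testing period; the rest of the segment is held back for the winner.
segment_size = 50_000
test_share_per_variant = 0.10
variant_count = 2

test_sends = int(segment_size * test_share_per_variant) * variant_count
winner_sends = segment_size - test_sends  # receives the winning content

# Manual test: the whole segment is split across variants, summing to 100%.
manual_split = {"A": 0.5, "B": 0.5}
manual_sends = {v: int(segment_size * share) for v, share in manual_split.items()}
```

With a 50,000-customer segment, 10,000 customers take part in the automatic test and 40,000 receive the winning content afterward.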
Considerations
A/B testing is not available in every situation:
- A/B testing is not available for API-triggered push campaigns.
- Only manual testing is available for event-triggered campaigns.
The campaign audience is calculated differently based on the type of A/B test. ODP determines the campaign segment when the testing period starts and again at the time of the winning phase. For example, if customer A is not in the campaign segment at the campaign start time but is in the segment at the time of the winning phase, they become a target in the winning phase.
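The two-phase audience rule can be illustrated with a small set-based sketch. The customer IDs are hypothetical, and this mirrors the rule stated above rather than ODP internals:

```python
# Segment membership is evaluated twice: at test start and at the winning phase.
segment_at_test_start = {"cust_1", "cust_2", "cust_3"}
already_sent = {"cust_1"}  # sampled into the testing phase

# cust_4 joined the segment after the campaign started.
segment_at_winning_phase = {"cust_1", "cust_2", "cust_3", "cust_4"}

# The winning phase targets everyone currently in the segment
# who has not already received a test variation.
winning_phase_targets = segment_at_winning_phase - already_sent
```

Here cust_4 is targeted in the winning phase even though they were not in the segment at the campaign start time, matching the example in the paragraph above.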
Review results
- Go to Activation > Engage.
- Select the A/B tested campaign you want to review.
- Select a campaign touchpoint that has been tested to access its performance metrics.
For automatic tests – You can compare the results side by side. A label specifies whether or not a winner was determined during the testing period. The variant has a Winner label if it was statistically significant.
A variant has a Default Winner label if ODP could not determine a clear winner during the testing period.
For manual tests – You can compare the results side by side. The touchpoint has no winner labels because ODP did not select a winner. Instead, the relative performance of the variants determines your winner and any takeaways for future campaigns.