- Optimizely Web Experimentation
- Optimizely Performance Edge
Optimizely Experimentation makes it easy to A/B test your website. Use the experimentation editor to design variations, define where the experiment runs and who sees it, create events to measure success using metrics, and test your experiment. When your experiment works the way you want, publish it to the world.
Follow these steps to create, configure, and launch an experiment in Optimizely Web Experimentation or Optimizely Performance Edge. For a broader overview, see the Optimizely Web Experimentation implementation checklist.
Optimizely Performance Edge is a lightweight experimentation product that delivers significantly faster performance than other versions of Optimizely. It does this by relying on a streamlined "microsnippet" which limits the range of available features. Use the links in each section to determine what features are available for Performance Edge experiments.
Create an experiment
- Go to the Experiments or Optimizations dashboard.
- Click Create New Experiment.
- Select A/B Test from the dropdown.
- Enter a name and description (optional) for your new experiment.
- Set where your experiment runs. Choose Target By URL to run a one-off experiment on specific URLs, or choose Target By Saved Pages to run experiments on a set of URLs you target regularly. You can use a combination of page triggers and conditions to control when a page activates, which is useful for single-page applications (SPAs); a sketch of manual page activation follows this list. In Optimizely Performance Edge experiments, pages activate immediately. See set up a Page.
- Click Create Experiment.
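If a Page's trigger is set to manual activation (common for SPAs), your site needs to tell Optimizely when to activate it. A minimal sketch, assuming a Page with the hypothetical API name spa_checkout whose trigger is set to manual activation:

```js
// Minimal sketch: manually activate an Optimizely Web Page from your site.
// "spa_checkout" is a hypothetical Page API name invented for this example.
window["optimizely"] = window["optimizely"] || [];
window["optimizely"].push({
  type: "page",
  pageName: "spa_checkout"
});
```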
Before you can run an experiment, implement the one-line JavaScript snippet in your website's code and create at least one metric. If your Optimizely account uses custom snippets, you might see more than one snippet listed. In this case, choose the snippet you want for your experiment.
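For reference, the Optimizely Web snippet is a single script tag placed as high as possible in the &lt;head&gt; of every page you want to experiment on. The example below is illustrative only, with a placeholder project ID; copy the exact snippet from your project's settings, and note that Performance Edge projects provide their own microsnippet URL.

```html
<head>
  <!-- Illustrative only: 12345678 is a placeholder project ID.
       Copy your project's exact snippet from Optimizely. -->
  <script src="https://cdn.optimizely.com/js/12345678.js"></script>
  <!-- ...rest of your head... -->
</head>
```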
Set your target
Set the conditions for your targeting. Although you chose how to target when creating the experiment, you can still set triggers, conditions, and test URLs for activation. For information, see Target URLs to choose where your experiment runs.
You can also choose your audience for the experiment. Audiences let you decide who sees your experiment.
The default audience is all visitors to your site (Everyone). Click Search and add audiences to create an audience or to add existing audiences to your experiment.
You can combine multiple audiences in your experiment using AND and OR conditions. See Target audiences using the Audience Builder for advanced audience configurations.
Design your experiment
Variations
Click Design > Variations to create and edit variations for your experiment.
Selecting a variation opens the Visual Editor, which lets you make the following changes to your webpage (a hypothetical sketch of equivalent variation code follows this list):
- Change the layout, including an element's visibility or position, and rearrange elements on the page.
- Modify or replace typography, images, background styles, or borders.
- Add inline CSS.
- Fine-tune jQuery selectors.
- Change the timing of your edits from synchronous to asynchronous.
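Under the hood, edits made in the Visual Editor are saved as variation code. A hypothetical sketch of what roughly equivalent variation code can look like (the selector and copy are invented, and it assumes the utils library bundled with the Optimizely Web snippet):

```js
// Hypothetical variation code; the selector and text below are invented.
var utils = window.optimizely.get('utils');

// Wait for the element to exist (helpful on dynamic pages), then edit it.
utils.waitForElement('.hero .cta-button').then(function (button) {
  button.textContent = 'Start your free trial'; // replace the button copy
  button.style.backgroundColor = '#0037ff';     // inline CSS tweak
});
```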
Shared Code
You can apply code that is shared among all variations in the experiment. See Shared code for information.
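Shared code is a good place for helpers or styles that every variation needs. A minimal, hypothetical sketch (the class name is invented for illustration):

```js
// Hypothetical shared JavaScript: hide a promo banner in every variation.
// ".promo-banner" is an invented class name for this example.
var style = document.createElement('style');
style.textContent = '.promo-banner { display: none; }';
document.head.appendChild(style);
```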
Traffic Allocation
Set traffic allocation to specify how traffic is split between your variations. Optimizely Experimentation randomly allocates traffic into different variations, including the original.
You can change the percentage of traffic each variation receives, and you can also change the total amount of traffic that enters the experiment as a whole.
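For example (illustrative numbers only): if you send 50% of eligible traffic into the experiment and split it evenly between the original and one variation, roughly 25% of eligible visitors see each version, and the remaining 50% are held out of the experiment entirely.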
Track your experiment
Metrics
After you design your experiment, you must define how to measure success using metrics.
If your account has metrics hub enabled, you can select from predefined, reusable metrics that were created in advance. Metrics hub lets you manage metrics across projects and experiments, making it easier to maintain consistency across your organization. Alternatively, you can create a one-time metric for single use within an experiment; one-time metrics are not reusable.
If you do not have metrics hub, you can create new metrics directly in the experiment using the following instructions:
Go to Track > Metrics. Click Add Metric and choose your metric type and event.
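Metrics are built on events. Pageview and click events can be created without code, while custom events are typically tracked from your site with the Optimizely JavaScript API. A minimal sketch, assuming a custom event with the hypothetical API name purchase_completed:

```js
// Hypothetical custom event call; "purchase_completed" is an invented
// event API name, and the revenue tag (in cents) is optional.
window["optimizely"] = window["optimizely"] || [];
window["optimizely"].push({
  type: "event",
  eventName: "purchase_completed",
  tags: {
    revenue: 9999 // $99.99, expressed in cents
  }
});
```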
You can also use the Sample Size Calculator to determine how many visitors you may need for Optimizely's Stats Engine to detect a significant result. See Use minimum detectable effect when designing an experiment for information.
The first metric you add is the primary metric for the experiment. This metric determines whether your experiment "wins" or "loses" and should track an event that is directly affected by the changes you make in your experiment. You can also add secondary metrics or monitor goals to measure the downstream effects of your experiment.
To complete the experiment configuration, you must add at least one metric. You can change, remove, or add metrics to your experiment later. Click Save when you are finished adding metrics to your experiment.
Integrations
You can select integrations for your experiment, such as Google Analytics 4. Click Project Integration Settings to enable integrations for your project.
Schedule your experiment
If you want to automate your experiment's start and end, go to Plan to schedule it. Select the Start Time, End Time, and Time Zone, then click Save.
Test and publish
After you configure the components of your experiment, preview it to ensure it looks and works the way you intend.
Click the Preview icon to view visual changes for your variation.
See Preview and publish your experiment for information on how to test the functionality of your campaign.
If you do not see your changes in Preview mode, ensure your snippet is implemented on the page and configured to include Optimizely Web Experimentation or Optimizely Performance Edge.
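One quick check is the browser console on the page you are previewing. A sketch using Optimizely Web's client-side state API (availability and output can vary by product and snippet version):

```js
// Run in the browser console on the page under test.
// Confirms the snippet loaded and lists currently active experiment IDs.
if (window.optimizely && window.optimizely.get) {
  var state = window.optimizely.get('state');
  console.log('Active experiments:', state.getActiveExperimentIds());
} else {
  console.log('Optimizely snippet not detected on this page.');
}
```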
See Test Optimizely Experimentation for what you need to consider before fully testing your experiment, or see Troubleshoot experiments.
When everything looks and works as you want it to, click Start Experiment to deploy the experiment live to your webpage visitors.
View a summary of the experiment
In addition to previewing and testing your experiment, use the experiment's Summary page to view a high-level overview of its settings. You can download and share this summary with stakeholders for easier reporting on the experiments you are running.