If you are new to experimentation, you can get a lot done with a simple ON or OFF A/B test. This configuration has one flag with two variations:
- One "flag_on" variation.
- One "flag_off" variation.
Restrictions
If you intend to have multiple experiments and flag deliveries (targeted deliveries) in your flag, the experiment must always be the first rule in the flag's ruleset.
Configuration overview
To configure a basic A/B test, complete the following steps:
- (Prerequisite) Create a flag.
- (Prerequisite) Handle user IDs.
- Create an A/B Test rule in the Optimizely app.
- Configure the A/B Test rule.
- Integrate the example decide code that the Optimizely Feature Experimentation app generates with your application.
- Start your A/B rule and flag (ruleset).
- Test your experiment in a non-production environment. See QA and troubleshoot for information.
- Discard any test user events and enable your experiment in a production environment.
Create an experiment
Create an A/B Test rule
- Select your flag from the Flags list.
- Select the environment you want to target.
- Click Add Rule.
- Select A/B Test.
Configure your A/B test rule
- Enter a Name.
- The Key is automatically created based on the Name. You can optionally update it.
- (Optional) Click Add description to add a description. You should add your hypothesis for your A/B test rule as the description.
- (Optional) Search for and add audiences. To create an audience, see Target audiences. Audiences evaluate in the order in which you drag and drop them. You can choose whether to match each user on any or all of the audience conditions.
- Set the Traffic Allocation to assign a percentage of your audience to bucket into the experiment.
If you plan to change the Traffic Allocation after starting the experiment, or you select Stats Accelerator as the Distribution Mode, you must implement a user profile service before starting the experiment.
For information, see Ensure consistent user bucketing using a user profile service.
- Add Metrics based on tracked user events. See Manage events in Feature Experimentation for information on how to create events using the UI or Create events for how to use the Feature Experimentation REST API.
- For information about selecting metrics, see Choose metrics in Feature Experimentation.
- For instructions on creating metrics, see Create a metric in Optimizely using the metric builder.
- Choose how your audience is distributed using Distribution Mode. Use the drop-down list to select one of the following:
- Manual – Variations are given equal traffic distribution. If you select this option, you can adjust the distribution percentages for each variation in the following step.
- Stats Accelerator – Manipulates the traffic distribution automatically to minimize time to statistical significance. For information, see Stats Accelerator.
- Choose the flag variations to compare in the experiment. See Create flag variations in the UI or Create flag variations using the REST API for information.
- For a basic experiment, you can include one variation in which your flag is on and one in which your flag is off.
- For a more advanced A/B/n experiment, create variations with multiple flag variables. See the Test with flag variables section.
- (Optional) Click More options (...) for a variation and select Set as baseline. The baseline is the "original" or "control" variation against which Feature Experimentation compares the other variations to measure performance. If you do not select a baseline, Feature Experimentation arbitrarily selects a variation as the baseline.
After you set a baseline variation, complete the following to change it:
- Click More options (...) for the baseline variation and select Unset baseline to clear it.
- Click More options (...) for another variation and select Set as baseline to make it the new baseline.
- (Optional) Click Allowlist: Force up to 50 users into any variation(s) and enter the User ID. See Allowlisting.
- (Optional) Add the experiment to an Exclusion Group.
- Click Save.
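The Traffic Allocation and Distribution Mode settings above rely on deterministic bucketing: the SDK hashes the user ID together with the rule to decide whether a user enters the experiment. The following is a simplified, self-contained sketch of that idea; the real SDKs use MurmurHash and the traffic ranges in the datafile, so the MD5 hash and 10,000-bucket space here are illustrative only. It also shows why changing the allocation mid-experiment without a user profile service can change a user's bucketing.

```python
import hashlib

TOTAL_BUCKETS = 10_000  # illustrative bucket space

def bucket_value(user_id: str, rule_key: str) -> int:
    # Deterministic hash of user + rule; the same inputs always map to the
    # same bucket, so a user gets a stable decision for a given rule.
    digest = hashlib.md5(f"{user_id}:{rule_key}".encode()).hexdigest()
    return int(digest, 16) % TOTAL_BUCKETS

def in_experiment(user_id: str, rule_key: str, traffic_pct: float) -> bool:
    # A user is in the experiment when their bucket falls inside the
    # allocated slice. Shrinking traffic_pct later can drop users whose
    # bucket sits near the boundary -- persisting prior decisions with a
    # user profile service is what prevents that.
    return bucket_value(user_id, rule_key) < traffic_pct / 100 * TOTAL_BUCKETS
```

Because the hash is deterministic, calling `in_experiment` repeatedly for the same user and rule always returns the same answer while the allocation stays fixed.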
Run a pre-launch experiment review with Opal
Before implementing your A/B test rule, you can use Optimizely Opal to review your experiment configuration and recommend changes to maximize your odds of reaching statistical significance.
If you use Opti ID, administrators can turn off generative AI in the Opti ID Admin Center. See Turn generative AI off across Optimizely applications.
- Select your rule.
- Click Review Experiment.
Review and apply Opal's feedback before you run your A/B test rule to ensure the best results.
See Test Optimizely Experimentation for additional information on QA steps in Optimizely Experimentation.
Implement the experiment using the decide method
Flag is implemented in your code
If you have already implemented the flag using a Decide method, you do not need to take further action (Optimizely Feature Experimentation SDKs are designed so you can reuse the exact flag implementation for different flag rules).
Flag is not implemented in your code
If the flag is not implemented yet, see Run A/B tests in the developer documentation for code samples.
For more detailed examples of each SDK, see the following:
- Android SDK example usage
- Go SDK example usage
- C# SDK example usage
- Flutter SDK example usage
- Java SDK example usage
- JavaScript SDK example usage – SDK versions 6.0.0 and later.
- JavaScript (Browser) SDK example usage – SDK versions 5.3.5 and earlier.
- JavaScript (Node) SDK example usage – SDK versions 5.3.5 and earlier.
- PHP SDK example usage
- Python SDK example usage
- React SDK example usage
- React Native SDK example usage
- Ruby SDK example usage
- Swift SDK example usage
Adapt the integration code in your application. Based on the boolean value your application receives, show or hide the flag's functionality for a given user ID.
The goal of the Decide method is to separate the process of developing and releasing code from the decision to turn a flag on. The value this method returns is determined by your flag rules. For example, the method returns false if the current user is assigned to a control or "off" variation in an experiment.
Remember, each rule in a flag's ordered ruleset is evaluated for a user, in order, until the user is bucketed into a rule's variation (or falls through to the next rule). See Interactions between flag rules for information.
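The gating pattern the Decide method enables looks like the sketch below. This is a hedged illustration: the decision object here is a stand-in with the same field names the SDKs return (enabled, variation_key, rule_key), so the snippet runs without the SDK installed, and the flag and variation keys are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    # Stand-in for the SDK's decision response (illustrative fields only).
    enabled: bool
    variation_key: str
    rule_key: str

def render_feature(decision: Decision) -> str:
    # Show or hide the flag's functionality based on the boolean the
    # Decide method returns; a user in the control or "off" variation
    # gets the default experience.
    if decision.enabled:
        return f"new experience ({decision.variation_key})"
    return "default experience"
```

Because the decision, not the code, carries the on/off state, the same code path serves every rule you later attach to the flag.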
Understand the Off variation
In Feature Experimentation, the Off variation is a unique state where you explicitly disable the feature for users. Unlike other variations (such as a, b, c, d) that typically return enabled: true, the Off variation always returns enabled: false.
However, this does not exclude users from tracking. Users assigned to the Off variation are still tracked, which you can verify through the ruleKey in the decision response.
Variation behaviors
Off variation
- enabled response – false
- Included in datafile – No
- Access method – REST API only
Baseline (non-Off) variation
- enabled response – true
- Included in datafile – Yes
- Access method – Datafile and REST API
The decision.enabled flag does not indicate whether a variation is the baseline. The baseline designation is not included in the datafile but is accessible through the REST API.
If your experiment does not use the Off variation, decision.enabled always returns true. Conversely, if a user is assigned the Off variation, it returns false.
The Off variation affects tracking and decision responses. Although Off is excluded from the datafile, it remains part of the experiment logic and is accessible through API calls.
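Because a user in the Off variation still has a rule key in the decision response, you can distinguish "bucketed into Off" from "not bucketed at all". A minimal sketch, using plain objects with the same field names as the decision response (the key values are illustrative):

```python
from types import SimpleNamespace

def classify(decision) -> str:
    # Off variation: still tracked (rule_key is set) but the feature is off.
    if decision.rule_key and not decision.enabled:
        return "off-variation"
    # Enabled variation: tracked and the feature is on.
    if decision.rule_key and decision.enabled:
        return "enabled-variation"
    # No rule matched: the user is not part of any experiment or delivery.
    return "not-bucketed"

# Illustrative decision responses.
off_user = SimpleNamespace(enabled=False, rule_key="ab_test")
on_user = SimpleNamespace(enabled=True, rule_key="ab_test")
unbucketed_user = SimpleNamespace(enabled=False, rule_key=None)
```

The first two cases generate tracked events for the experiment; only the last one falls outside the experiment logic entirely.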
Start your rule and flag (ruleset)
For information about the differences between the old and new flag management UI in Optimizely Feature Experimentation, see New flag and rule lifecycle management FAQs.
After creating your A/B test and implementing the experiment using the decide method, you can start your test.
- Set your rule to Ready to run by clicking Run on your A/B test rule.
- Click Ok on the Ready to Run Status page. This alerts you that your rule remains set to Ready to run until you start your ruleset (flag).
- Click Run to start your flag and rule.
Congrats! Your A/B test is running.
Test with flag variables
After you have run a basic "on/off" A/B test, you can increase the power of your experiments by adding remote feature configurations, or flag variables.
Flag variables let you avoid hard-coding values in your application. Instead of deploying code to update them, you can edit them remotely in the Optimizely Feature Experimentation UI. See Flag variations and flag variables overview.
To configure an A/B test with multiple variations, complete the following:
- Create and configure a basic A/B test. See previous steps.
- Create flag variations containing multiple variables.
- Integrate the example code with your application.
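To illustrate the payoff of the steps above, the sketch below reads variable values from a decision instead of hard-coding them. The variable keys (button_text, discount_pct) and defaults are hypothetical, not part of any real flag configuration.

```python
def apply_variables(variables: dict) -> str:
    # Read remotely configured values with safe fallbacks, instead of
    # hard-coding them in the application. Keys here are hypothetical.
    button_text = variables.get("button_text", "Buy now")
    discount_pct = variables.get("discount_pct", 0)
    return f"{button_text} ({discount_pct}% off)"

# Each variation can carry different variable values, so one code path
# serves every variation in an A/B/n test.
variation_a_vars = {"button_text": "Get started", "discount_pct": 10}
```

Changing a variable value in the UI then changes what users see without a redeploy, because the application only reads whatever values the decision delivers.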