Creating multivariate and A/B tests

You can create a multivariate or A/B test for any campaign that targets a single channel.

Step 1: Create your campaign

Click Create Campaign and select a channel for the campaign from the section that allows multivariate and A/B testing. For detailed documentation on each messaging channel, refer to Create a Campaign.

Step 2: Compose your variants

You can create up to 8 variants of your message, differentiating between titles, content, images, and more. The number of differences between the messages determines whether this is a multivariate or A/B test. An A/B test examines the effect of changing one variable, whereas a multivariate test examines two or more.
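
A simple illustration (the dictionaries below are a hypothetical sketch, not a Braze API shape): an A/B test varies only the title, while a multivariate test varies the title and image together.

```python
# Hypothetical variant definitions to illustrate the distinction;
# these dicts are a sketch, not a Braze API shape.

# A/B test: exactly one field differs between variants (the title).
ab_variants = [
    {"title": "Sale ends tonight!", "image": "sale.png"},
    {"title": "Last chance to save", "image": "sale.png"},
]

# Multivariate test: two or more fields differ (title and image),
# so the test measures the combined effect of multiple changes.
multivariate_variants = [
    {"title": "Sale ends tonight!", "image": "sale.png"},
    {"title": "Last chance to save", "image": "countdown.png"},
]
```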

For some ideas on how to get started differentiating your variants, refer to Tips for different channels.

Step 3: Schedule your campaign

Scheduling your multivariate campaign works the same as scheduling any other Braze campaign. All standard delivery types are available.

Once a multivariate test begins, you can’t make changes to the campaign. If you change its parameters, such as the subject line or HTML body, Braze considers the experiment compromised and immediately disables it.

Step 4: Choose a segment and distribute your users across variants

Select a segment to target, then distribute its members across your selected variants and the optional control group. For best practices around choosing a segment to test with, see Choosing a segment.

For push, email, and webhook campaigns scheduled to send once, you can also use an optimization. This will reserve a portion of your target audience from the A/B test and hold them for a second optimized send based on the results from the first test.

Control group

You can reserve a percentage of your target audience for a randomized control group. Users in the control group don’t receive the test, but Braze monitors their conversion rate for the duration of the campaign.

When viewing your results, you can compare the conversion rates of your variants against the baseline conversion rate provided by your control group. This lets you measure your variants both against each other and against the conversion rate that would result if you didn’t send a message at all.
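
As a rough sketch of that comparison (the counts below are invented, and the two-proportion z-test is a standard statistical approach rather than Braze’s documented methodology):

```python
# Compare a variant's conversion rate against the control baseline.
# Counts are invented; the z-test is a standard approach, not
# necessarily the exact methodology Braze uses in its reporting.
from statsmodels.stats.proportion import proportions_ztest

control_conversions, control_recipients = 120, 5000   # baseline: no message
variant_conversions, variant_recipients = 190, 5000   # received Variant 1

control_rate = control_conversions / control_recipients
variant_rate = variant_conversions / variant_recipients
lift = (variant_rate - control_rate) / control_rate

# Two-proportion z-test: is the variant significantly above control?
stat, p_value = proportions_ztest(
    count=[variant_conversions, control_conversions],
    nobs=[variant_recipients, control_recipients],
)
print(f"control {control_rate:.2%}, variant {variant_rate:.2%}, "
      f"lift {lift:+.1%}, p={p_value:.4f}")
```

A variant that beats the control with a small p-value suggests the message itself drove the conversions, rather than behavior that would have happened without any send.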

A/B Testing panel that shows the percentage breakdown of the Control Group, Variant 1, Variant 2, and Variant 3 with 25% for each group.

Control groups with A/B testing

When using rate limiting with an A/B test, the rate limit isn’t applied to the control group in the same way as the test groups: control users receive no messages, so they aren’t throttled and enter the experiment immediately, while rate-limited sends are spread out over time. This is a potential source of time bias, so use appropriate conversion windows to account for it.

Control groups with Intelligent Selection

The size of the control group for a campaign with Intelligent Selection is based on the number of variants. If each variant is sent to more than 20% of users, then the control group is 20% and the variants are split evenly across the remaining 80%. However, if you have enough variants that each variant is sent to less than 20% of users, then the control group must become smaller. When Intelligent Selection starts analyzing the performance of your test, the control group grows or shrinks based on the results.
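
A back-of-the-envelope sketch of that initial split is below. The 20% cap and even variant split come from the description above; the equal-split fallback for larger variant counts is an assumption, and Intelligent Selection adjusts the percentages once it starts analyzing results.

```python
# Sketch of the initial split described above. The 20% control cap and
# even variant split come from the text; the equal-split fallback for
# many variants is an ASSUMPTION, and Intelligent Selection adjusts
# these percentages once it starts analyzing results.
def initial_split(num_variants: int) -> dict:
    if 80 / num_variants >= 20:       # each variant gets at least 20%
        control = 20.0
    else:                             # assumed: control shrinks to an equal share
        control = 100 / (num_variants + 1)
    per_variant = (100 - control) / num_variants
    return {"control_pct": control, "per_variant_pct": per_variant}

print(initial_split(3))  # control 20%, each variant ~26.7%
print(initial_split(6))  # control shrinks below 20%
```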

Step 5: Designate a conversion event (optional)

Setting a conversion event for a campaign allows you to see how many recipients of that campaign performed a particular action after receiving it.

This only affects the test if you chose Primary Conversion Rate in the previous steps. For more information, refer to Conversion events.
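
To inspect conversion counts outside the dashboard, a minimal sketch against Braze’s campaign analytics endpoint (/campaigns/data_series) might look like this. The REST URL, API key, and campaign ID are placeholders, and response fields vary by channel, so check the Braze REST API reference for your instance:

```python
# Minimal sketch: pull daily campaign stats, including conversions,
# from Braze's /campaigns/data_series endpoint. URL, key, and campaign
# ID are placeholders; inspect the JSON for your own campaign, as the
# exact fields vary by channel.
import requests

BRAZE_REST_URL = "https://rest.iad-01.braze.com"  # your instance's REST endpoint
API_KEY = "YOUR-REST-API-KEY"                     # needs campaigns.data_series access
CAMPAIGN_ID = "YOUR-CAMPAIGN-API-ID"

resp = requests.get(
    f"{BRAZE_REST_URL}/campaigns/data_series",
    params={"campaign_id": CAMPAIGN_ID, "length": 14},  # last 14 days
    headers={"Authorization": f"Bearer {API_KEY}"},
)
resp.raise_for_status()
for day in resp.json()["data"]:
    print(day["time"], day.get("conversions"))
```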

Step 6: Review and launch

On the confirmation page, review the details of your multivariate campaign and launch the test! Next, learn how to understand your test results.

Things to know

Tips for different channels

Depending on which channel you select, you’ll be able to test different components of your message. Try to compose variants with an idea of what you want to test and what you hope to prove.

What levers do you have to pull, and what are the desired effects? While there are millions of possibilities you can investigate using multivariate and A/B tests, simple starting points include varying the title, message copy, or image.

In addition, the ideal length of your test may also vary depending on the channel. Keep in mind the average amount of time most users may need to engage with each channel.

For instance, if you’re testing a push, you may achieve significant results faster than when testing email, since users see pushes immediately, but it may be days before they see or open an email. If you’re testing in-app messages, keep in mind that users must open the app to see the campaign, so wait longer to collect results from both your most active app openers and your more typical users.

If you’re unsure how long your test should run, the Intelligent Selection feature can be useful for finding a Winning Variant efficiently.

Choosing a segment

Since different segments of your users may respond differently to messaging, the success of a particular message says something about both the message itself and its target segment. Therefore, try to design a test with your target segment in mind.

For instance, while active users may have equal response rates to “This deal expires tomorrow!” and “This deal expires in 24 hours!”, users who haven’t opened the app for a week may be more responsive toward the latter wording since it creates a greater sense of urgency.

Additionally, when choosing which segment to run your test on, be sure to consider whether the size of that segment will be large enough for your test. In general, multivariate and A/B tests with more variants require a larger test group to achieve statistically significant results. This is because more variants will result in fewer users seeing each individual variant.
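
As a rough illustration of that trade-off, here is a standard two-proportion power calculation (generic statistics, not Braze’s internal math) showing how the required audience grows with the number of variants:

```python
# Generic power analysis: users needed per group to detect a lift from
# a 4% to a 5% conversion rate at 80% power. The total audience grows
# with every variant you add (control group excluded for simplicity).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.05, 0.04)   # 4% -> 5% conversion rate
per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
for variants in (2, 4, 8):
    print(f"{variants} variants: ~{int(per_group * variants):,} users across variants")
```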

Bias and randomization

A common question about control and test group assignment is whether it can introduce bias into your testing, and how we know the assignments are truly random.

Users are assigned to message variants, Canvas variants, or their respective control groups as follows: Braze concatenates the (randomly generated) user ID with the (randomly generated) campaign or Canvas ID, takes that value modulo 100, and orders users into slices that correspond to the percentage assignments for the variants and optional control chosen in the dashboard. Because both IDs are generated randomly, there is no practical way that users’ behaviors prior to the creation of a particular campaign or Canvas could vary systematically between variants and control. It is also not practical to be any more random (more accurately, pseudo-random) than this implementation.
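
A sketch of that slice-based assignment is below. Braze doesn’t publish the exact function that reduces the concatenated IDs to a number, so the SHA-256 hash is an illustrative stand-in; the key property is that the result modulo 100 drops each user into a stable, effectively random bucket from 0 to 99:

```python
# Illustrative sketch of slice-based assignment. The SHA-256 hash is a
# stand-in for however Braze reduces the concatenated IDs to a number;
# the key property is a stable, effectively random bucket in 0-99.
import hashlib

def assign(user_id: str, campaign_id: str, slices: list) -> str:
    """slices: ordered (name, percentage) pairs summing to 100."""
    digest = hashlib.sha256((user_id + campaign_id).encode()).hexdigest()
    bucket = int(digest, 16) % 100
    upper = 0
    for name, pct in slices:
        upper += pct
        if bucket < upper:
            return name
    raise ValueError("slice percentages must sum to 100")

# 25% control and three 25% variants, matching the panel shown earlier.
groups = [("control", 25), ("variant_1", 25), ("variant_2", 25), ("variant_3", 25)]
print(assign("user-123", "campaign-abc", groups))
```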

Mistakes to avoid

Some common mistakes can create the appearance of differences between test and control groups if audiences aren’t filtered correctly for the messaging channel.

For example, if you send a push message to a wide audience with a control group, only test-group users with a push token receive the message, but the control group includes users both with and without a push token. In this case, your initial audience for the campaign or Canvas must filter for having a push token (Push Enabled is true). Do the same for eligibility on other channels: opted in, has a push token, subscribed, and so on.
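
As a toy illustration (the user records and push_token field are hypothetical), the fix is to filter for channel eligibility before the control/variant split, so both groups contain only users who could actually receive the message:

```python
# Toy example: filter for push eligibility BEFORE assigning control and
# variants. The records and "push_token" field are hypothetical.
users = [
    {"id": "u1", "push_token": "abc123"},
    {"id": "u2", "push_token": None},     # can't receive a push
    {"id": "u3", "push_token": "def456"},
]

# Wrong: split everyone, then send only to token holders; unreachable
# users dilute the control group's conversion baseline.
# Right: filter first, then split the eligible audience.
eligible = [u for u in users if u["push_token"]]
print([u["id"] for u in eligible])  # ['u1', 'u3']
```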
