A/B/n Testing

Overview

The Zeta Marketing Platform (ZMP) enables marketers to test subject lines and/or content, either automatically or manually, with up to ten variants.

Enabling A/B/n Testing

To enable A/B/n testing for your Broadcast campaign, click + Add an A/B test option under the Content & Audience tab. Keep in mind that while A/B/n testing is enabled, it will run for every instance of the campaign, including any recurring type of campaign. This applies to both manual and automated tests.

Managing Variations

Adding a Variation

To add a variation beyond the default A and B variations, click the + icon to the right of variants A and B.

Removing a Variation

To remove a variation, hover your mouse over the variant that needs to be removed and click the red X icon that appears at the top right of the variant icon.

Editing a Variation

To edit a variation, click the variant icon that needs to be edited. From here, all content and header information can be modified as needed for the selected variation.

Disabling A/B/n Testing

To disable A/B/n testing, remove variants by hovering over each one and clicking the red 'X' icon that appears at its top right. Once only one variant remains, A/B/n testing is disabled.

Configuring A/B/n Testing

Click Settings to the right of the A/B/n variants. A menu will slide in from the right side of the window for further configuration. There are two primary types of tests available: manual and automated. Both options are available for all launch types.

Manual Test

Manual tests are typically used to run a longer-term test across multiple campaigns, instances of a recurring campaign, and/or multiple audiences. This is useful when a user wants to maintain control over which variant ultimately wins. With a manual test, the platform will not select a winner. Instead, the user chooses the winner by duplicating the campaign used for testing and removing the variant(s) deemed losers.

Select the metric to optimize for from the first dropdown menu. Clicks is selected by default; the other available option is Opens. For a manual test, this setting has no effect on the outcome.

Leave the Run test on sample audience option deselected. This is the setting that determines whether the test remains manual or automated.

Set the appropriate distribution for the test. The percentages across all variants must total 100%.

Automated Test

Automated tests are used to optimize the performance of a subject line or content for any given instance of a campaign. For file-based, API-based, or other recurring broadcasts, this means the test will run on a sample audience and choose a winner every time the campaign is deployed. In these scenarios, a manual test is typically preferred over an automated test.

For ad hoc broadcasts, automated tests can be used to let the platform decide the best subject line or content at that moment in time, so a winner can be chosen automatically for the rest of the audience in that instance of the campaign. The winner is chosen using null-hypothesis significance testing.

Select the metric to optimize for from the first dropdown menu. Clicks is selected by default; the other available option is Opens. This setting determines the criteria the platform uses to pick the winner. Opens is typically used when testing subject lines.

Set the amount of time the automated test should run on the sample audience before determining a winner and deploying it to the remainder of the audience.

Set the distribution the test should use for the sample audience; this is the percentage of the audience the test will run on before a winner is selected. The largest sample audience available is 50% of the overall audience.
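To make the sample-audience math concrete, here is a small illustrative sketch (not ZMP code; the function name, numbers, and variant split are hypothetical) showing how many recipients each variant receives during the test phase and how many are held back to receive the winner:

```python
# Illustrative only: compute per-variant sample counts for an automated A/B/n test.

def sample_counts(audience_size, sample_pct, variant_split):
    """audience_size: total recipients.
    sample_pct: percentage of the audience used for the test (capped at 50).
    variant_split: dict mapping variant name to its share of the sample in %."""
    assert 0 < sample_pct <= 50, "sample audience is capped at 50%"
    assert sum(variant_split.values()) == 100, "variant split must total 100%"
    sample_size = audience_size * sample_pct // 100
    counts = {v: sample_size * pct // 100 for v, pct in variant_split.items()}
    holdout = audience_size - sum(counts.values())  # recipients who get the winner
    return counts, holdout

# A 100,000-person audience, testing on 20% with an even A/B split:
counts, holdout = sample_counts(100_000, 20, {"A": 50, "B": 50})
# counts -> {"A": 10000, "B": 10000}; holdout -> 80000 receive the winning variant
```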

Validating Automated A/B/n Tests for Statistical Significance

This is a deeply complicated topic, but validating your results doesn't have to be.

In the ZMP, we start with a null hypothesis. This means we start with the assumption that the results from A and B are not different and that the observed differences are due to randomness. This is what we hope to reject. If we can't reject it, ZMP will default to A as the winner.

ZMP has a fixed significance level of 0.05. This is an industry-standard level of risk we're willing to take for detecting a false positive. In other words, 5% of the time you'll detect a winner that is actually due to randomness and/or natural variation.

From here, there are many calculators online that can help you determine whether or not your test results are statistically significant.
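If you prefer to check the math yourself, a two-proportion z-test is a common way such calculators work (this is a sketch, not ZMP's internal implementation). The inputs mirror the spreadsheet below: delivered counts and unique opens for variants A and B, tested at the 0.05 significance level:

```python
# Two-proportion z-test sketch: are A's and B's open rates significantly different?
from math import sqrt, erf

def is_significant(delivered_a, opens_a, delivered_b, opens_b, alpha=0.05):
    p_a = opens_a / delivered_a
    p_b = opens_b / delivered_b
    # Pooled open rate under the null hypothesis (A and B are not different)
    pooled = (opens_a + opens_b) / (delivered_a + delivered_b)
    se = sqrt(pooled * (1 - pooled) * (1 / delivered_a + 1 / delivered_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_value < alpha

# Example: A opens 520 of 10,000 delivered (5.2%), B opens 610 of 10,000 (6.1%)
print(is_significant(10_000, 520, 10_000, 610))  # True: reject the null hypothesis
```

If the function returns False, the null hypothesis stands and, as described above, ZMP would default to A as the winner.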

Here is a basic Excel spreadsheet with the formulas already in place. Simply input your delivered count and unique open rate for A and B to see a simple Yes or No output.

ab_test_validator.xlsx