A/B/n Testing

Besides helping identify and remove errors, testing also lets you compare actual results against expected results in order to improve quality. To make sure your expectations hold up, the Zeta Marketing Platform gives marketers the ability to test subject lines and/or content, either manually or automatically, with up to ten variants.

Enabling A/B/n Testing

To enable A/B/n testing for your Broadcast campaign, click on + Add an A/B test option under the Content & Audience tab.

While A/B/n testing is enabled, it will run for every instance of the campaign, including recurring campaigns. This applies to both manual and automated tests.

The A/B Testing option cannot be enabled if Prime Time is enabled.

Managing Variations

Adding a Variation

To add a variation beyond the default A and B variations, click the + icon to the right of them.

Removing a Variation

To remove a variation, hover your mouse over the variant that needs to be removed and click the X icon that appears at the top right corner of the variant icon.

Editing a Variation

To edit a variation, click the variant icon that needs to be edited. All content and header information can be modified as needed for the selected variation.

Disabling A/B/n Testing

To disable A/B/n testing, hover your mouse over the variant that needs to be removed and click the X icon that appears at the top right corner of the variant icon. Once all but one variant are removed, A/B/n testing will be disabled.

Supported Channels

Manual Tests may be executed on any type of channel, while Automated Tests may only be executed on native channels:

  • Email - By default, optimization may be based on Clicks or Opens. If the account has been set up to track conversions and revenue, then Conversions and Revenue may also be used as optimization criteria in A/B/n testing.

  • SMS/MMS - Optimization may be based on Clicks only.

  • Push Notifications - Optimization may be based on Clicks or Opens.

Configuring A/B/n Testing

Click on Settings to the right of the A/B/n variants. A menu will slide in from the right side of the window for further configuration.

There are two primary types of tests available: manual and automated. These options are available for all launch types. Keep in mind that while A/B/n testing is enabled, it will run for every instance of the campaign, including recurring campaigns. This applies to both manual and automated tests.

Manual Test

Manual tests are typically used to run a longer-term test across multiple campaigns, instances of a recurring campaign, and/or multiple audiences. This is useful when a user wants to maintain control over which variant is ultimately chosen as the winner.

The platform will not select a winner for a manual test. Instead, the user chooses the winner by duplicating the campaign used for testing and removing the variant(s) deemed to be non-winners.

  • Select the metric to optimize from the first dropdown menu. Clicks is selected by default; in a manual test, the choice between Clicks and Opens has no effect on the test.

  • Leave the Run test on sample audience option deselected. This is the setting that determines whether the test remains manual or automated.

  • Set the appropriate distribution for the test. The distribution across all variants should add up to 100% (see the sketch below).
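
As a quick illustration only (the variant names, percentages, and helper function below are hypothetical examples, not part of the ZMP), checking that a distribution is valid is simple arithmetic:

# Hypothetical illustration: the variant splits below are example values, not ZMP defaults.
def validate_distribution(distribution):
    """Return True when the variant percentages add up to exactly 100."""
    return sum(distribution.values()) == 100

manual_test_split = {
    "A": 50,  # percent of the audience that receives variant A
    "B": 25,  # percent of the audience that receives variant B
    "C": 25,  # percent of the audience that receives variant C
}

print(validate_distribution(manual_test_split))  # True, because 50 + 25 + 25 = 100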

Automated Test

Automated tests are used to optimize the performance of a subject line or content for any given instance of a campaign. For file-based, API-based, or other recurring broadcasts, this means the test will run on a sample audience and choose a winner every time the campaign is deployed. In these scenarios, a manual test is typically preferred over an automated test.

For ad-hoc broadcast campaigns, automated tests can help decide the best subject line or content for a particular instance, with a winner automatically chosen and deployed to the remainder of that audience. The winner is chosen using null-hypothesis significance testing.

  • Select the metric to optimize from the first dropdown menu. Clicks is selected by default, and Opens is typically used when testing subject lines. This selection also determines the criteria the platform uses to choose the winner.

  • Set the amount of time the automated test should run on the sample audience before determining a winner and deploying it to the remainder of the audience.

  • Set the distribution according to how the test should use the sample audience. This is the percentage of the audience on which the test will be performed prior to selecting a winner. The largest available sample is 50% of the overall audience (a worked example follows this list).
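
As an illustration of the arithmetic (the audience size, sample percentage, and variant count below are made-up example values, not ZMP defaults), here is how a sample audience splits across variants and how many recipients are left to receive the winner:

# Hypothetical example values; the ZMP performs this split for you.
audience_size = 100_000          # total recipients of the campaign
sample_pct = 20                  # percent of the audience used for the test (50% maximum)
variant_count = 4                # A, B, C, and D

sample_size = audience_size * sample_pct // 100   # 20,000 recipients take part in the test
per_variant = sample_size // variant_count        # 5,000 recipients per variant
remainder = audience_size - sample_size           # 80,000 recipients receive the winner

print(per_variant, remainder)  # 5000 80000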

The default Optimization time is set at the account level and cannot be changed, since it is standard across all accounts.

Validating Automated A/B/n Tests for Statistical Significance

This is a deeply complicated topic, but validating your results doesn't have to be.

In the ZMP, we start with a null hypothesis. This means we start with the assumption that the results from A and B are not different and that the observed differences are due to randomness. This is what we hope to reject. If we can't reject it, ZMP will default to A as the winner.

ZMP has a fixed significance level of 0.05. This is an industry-standard level of risk we're willing to take for detecting a false positive. In other words, 5% of the time, you'll detect a winner that is actually due to randomness and/or natural variation.

From here, there are many calculators online that can help you determine whether or not your test results are statistically significant.

Here is a basic Excel spreadsheet with the formulas already in place. Simply input your delivered counts and unique open rates for A and B to see a simple Yes or No output:

ab_test_validator.xlsx
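
If you prefer a script over a spreadsheet, here is a minimal sketch of the same kind of check. It assumes a standard two-sided, two-proportion z-test on unique opens at the 0.05 significance level; the spreadsheet's exact formulas may differ slightly, and the example inputs are made up:

# Minimal sketch of a two-proportion significance check, assuming a two-sided
# z-test at the 0.05 level. Inputs mirror the spreadsheet: delivered counts and
# unique open rates for variants A and B. Example numbers only.
from math import sqrt, erf

def is_significant(delivered_a, open_rate_a, delivered_b, open_rate_b, alpha=0.05):
    opens_a = delivered_a * open_rate_a
    opens_b = delivered_b * open_rate_b
    # Pooled open rate under the null hypothesis that A and B perform the same.
    pooled = (opens_a + opens_b) / (delivered_a + delivered_b)
    standard_error = sqrt(pooled * (1 - pooled) * (1 / delivered_a + 1 / delivered_b))
    z = (open_rate_a - open_rate_b) / standard_error
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return "Yes" if p_value < alpha else "No"

# Example: 10,000 delivered per variant, with 20% vs. 22% unique open rates.
print(is_significant(10_000, 0.20, 10_000, 0.22))  # Yes - the difference is significant

If the output is No, the observed difference could plausibly be due to randomness, and, consistent with the null-hypothesis approach described above, A remains the default winner.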
