Node-Level Insights

Node-Level Insights in Experience Builder uses historical data and predictive intelligence to give you more context about each node. It provides historical and forecasted performance data while you're building your experience, as well as forecasted and comparative performance once your experience is active.

  • As you build your experience, this feature helps you select more commonly fired events and gives you more confidence in what to expect.

  • Once the experience is active, you can monitor performance against previous forecasts and past actual performance, while continuing to get forecasted data for the next set of days.

Draft State

While building your experience, the Insights module shows additional data on your nodes to indicate how they have performed and how they are expected to perform.

Historical Performance

For trigger and event nodes, you will see counts to tell you how often these events have fired in your account recently. Trigger nodes are measured by the total number of times the event has been raised; event nodes show the number of unique people who have performed the event.
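
To make the distinction between the two measures concrete, here is a minimal sketch that assumes a simple in-memory event log; the event names, sample data, and functions are hypothetical and are not how the platform computes its counts.

    # Hypothetical event log: each entry is (person_id, event_name).
    # Both the data and the function names below are illustrative only;
    # the real counts are computed by the platform from your account's event data.
    event_log = [
        ("person_1", "checkout_started"),
        ("person_1", "checkout_started"),
        ("person_2", "checkout_started"),
        ("person_3", "newsletter_signup"),
    ]

    def total_times_raised(log, event_name):
        # Trigger-node style count: every occurrence of the event counts.
        return sum(1 for _, name in log if name == event_name)

    def unique_people(log, event_name):
        # Event-node style count: each person counts once, however often they fired the event.
        return len({person for person, name in log if name == event_name})

    print(total_times_raised(event_log, "checkout_started"))  # 3
    print(unique_people(event_log, "checkout_started"))       # 2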

By default, you will see counts for the trigger or event node without filters applied. If you apply a property filter, the counts won’t be visible until you save the node for the first time.

Exact Count Notifications

To return data faster, trigger nodes with property filters applied and all event nodes provide estimated counts for historical performance by default. If you want to see the exact count, click on the refresh button to start the query.

Click Get notified when completed to receive an email when your exact count calculation is done. If you've already requested a notification for yourself, you can click View/Add notification recipients below the counts to enter additional email addresses, separated by commas. Once saved, all listed users will get a notification when the calculation is finished.

Forecasted Performance

For trigger, event, action, and audience split nodes in the draft state, you will see counts to tell you how often these nodes are expected to fire in the near future (1, 7, 30, or 60 days). Forecasts in the draft state provide counts of unique people. Email and SMS/MMS campaigns provide forecasts using metrics specific to the selected channel.

Campaign forecasts can be found in the side panel and in the Campaign Builder. In the draft state, these metrics are based on general campaign performance and don't change based on content inputs. Forecasting is supported only for the email and SMS/MMS channels.

The table below lists what to expect by node type, along with exceptions, for both historical and forecasted performance in the draft state.

Trigger Node

  • Sample Nodes: Account Events, Campaign Events, Segment Entry/Exit Events, Behaviors

  • Data in Draft State: Historical Performance, Forecasted Performance

  • Definitions:

    • Historical Performance: how many times an event with the specified criteria was raised in the previous 1, 7, 14, 30, 60, or 90 days

    • Forecasted Performance: how many people are expected to trigger this event in the next 1, 7, 30, or 60 days

  • Exception: does not apply to Use an Audience or High Priority Messaging

Event Node

  • Sample Nodes: Account Events, Campaign Events, Segment Entry/Exit Events, Behaviors

  • Data in Draft State: Historical Performance, Forecasted Performance

  • Definitions:

    • Historical Performance: how many times an event with the specified criteria in this sequence was raised in the previous 1, 7, 14, 30, 60, or 90 days

    • Forecasted Performance: how many people are expected to trigger this event in this sequence in the next 1, 7, 30, or 60 days

Action Node (Campaigns)

  • Sample Nodes: Email, SMS/MMS

  • Data in Draft State: Forecasted Performance

  • Definition: predicted channel-specific performance for this campaign in this sequence in the next 1, 7, 30, or 60 days

  • Exception: does not apply to push, webhook, and third-party channel campaigns

Action Node (Non-Campaigns)

  • Sample Nodes: Add/Update data on profile, Sync to List, Sync to Programmatic, Sync to Facebook, Sync to Google Ads, Sync to Yahoo DSP

  • Data in Draft State: Forecasted Performance

  • Definition: how many people are expected to trigger this action in this sequence in the next 1, 7, 30, or 60 days

Split Node

  • Sample Nodes: Split by Audience

  • Data in Draft State: Forecasted Performance

  • Definition: how many people are expected to trigger this action in this sequence in the next 1, 7, 30, or 60 days

  • Exception: does not apply to non-audience splits

Active State

Once you activate your experience, you’ll see three possible data points in the Insights module to help you measure your performance contextually and continue forecasting performance over the next set of days.

Comparative Performance

For all nodes in experiences activated for at least 2 days, you will see percentages to tell you how the node is performing against itself compared to the previous set of days. For example, if a node has been active for 90 days, it will compare the last 30 days to the previous 30 days and show the percentage change between them. This will help you identify trends and determine if you need to make changes to the node criteria.
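
As a rough illustration of how such a comparison works (the exact formula and rounding the platform uses aren't documented here), this sketch applies the standard percent-change calculation to two equal windows:

    def comparative_performance(current_window, previous_window):
        # Percentage change of the most recent window vs. the window before it.
        # Assumed to be a standard percent-change calculation, for illustration only.
        if previous_window == 0:
            return None  # no baseline to compare against
        return round(100 * (current_window - previous_window) / previous_window, 1)

    # A node fired 1,150 times in the last 30 days and 1,000 times in the
    # 30 days before that -> shown as a +15.0% change.
    print(comparative_performance(1150, 1000))  # 15.0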

Forecasted Performance

For trigger, event, action, and audience split nodes in experiences activated for at least 5 days, you will see counts to tell you how the node is expected to perform in the next 1, 7, 30, or 60 days. There are two key differences from the draft state:

  • In the active state, the forecast looks back at the historical data for this specific node and continues to learn and adjust based on actual performance while the experience is active

  • The forecasted counts in the active state are for the total times that node is triggered, not unique people

Because some nodes won't fire within the first 5 days of an experience being activated (for example, a node after a 7-day delay) or don't fire often enough, they will show the label There is not enough data. This switches to a count as soon as there is enough data to provide a meaningful forecast.

For more info on Confidence Scores, visit Zeta Forecasting.

Performance vs Forecast

For nodes with forecasting in experiences activated for at least 5 days, you will also see a percentage to compare the forecast to the actual performance of the node. Like forecasting, you may see some nodes with the label There is not enough data; this will update as soon as the forecasted performance is provided.
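
As a rough sketch (the exact formula isn't documented here), this comparison can be thought of as the percentage difference between the actual and forecasted counts for the same window; the function and values below are hypothetical.

    def performance_vs_forecast(actual_count, forecasted_count):
        # Percentage difference between what the node actually did and what was
        # forecast for the same window; the platform's real formula is assumed here.
        if forecasted_count == 0:
            return None  # no forecast baseline to compare against
        return round(100 * (actual_count - forecasted_count) / forecasted_count, 1)

    # Forecast of 800 triggers vs. 720 actual -> shown as -10.0% vs. forecast.
    print(performance_vs_forecast(720, 800))  # -10.0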

The table below lists what to expect by node type, along with exceptions, for comparative and forecasted performance in the active state.

Trigger Node

  • Sample Nodes: Account Events, Campaign Events, Segment Entry/Exit Events, Behaviors, Use an Audience, High Priority Messaging

  • Data in Active State: Comparative Performance, Forecasted Performance, Performance vs Forecast

  • Definitions:

    • Comparative Performance: how is this node performing against itself compared to a previous time period in the last 1, 7, 14, 30, or 60 days? If the experience was last activated less than 60 days ago, the maximum time period is [active days] divided by 2.

    • Forecasted Performance: how many times this node with the specified criteria is expected to trigger in the next 1, 7, 30, or 60 days

    • Performance vs Forecast: how did this node perform against its forecast in the last 1, 7, 30, or 60 days?

Event Node

  • Sample Nodes: Account Events, Campaign Events, Segment Entry/Exit Events, Behaviors

  • Data in Active State: Comparative Performance, Forecasted Performance, Performance vs Forecast

  • Definitions:

    • Comparative Performance: how is this node performing against itself compared to a previous time period in the last 1, 7, 14, 30, or 60 days? If the experience was last activated less than 60 days ago, the maximum time period is [active days] divided by 2.

    • Forecasted Performance: how many times this node with the specified criteria is expected to trigger in this sequence in the next 1, 7, 30, or 60 days

    • Performance vs Forecast: how did this node perform against its forecast in the last 1, 7, 30, or 60 days?

Action Node (Campaigns)

  • Sample Nodes: Email, SMS/MMS

  • Data in Active State: Comparative Performance, Forecasted Performance, Performance vs Forecast

  • Definitions:

    • Comparative Performance: how is this campaign performing against itself compared to a previous time period in the last 1, 7, 14, 30, or 60 days? If the experience was last activated less than 60 days ago, the maximum time period is [active days] divided by 2. For email and SMS/MMS, this uses channel-specific metrics only; for other channels, it uses the number of times triggered.

    • Forecasted Performance: predicted channel-specific performance for this campaign in this sequence in the next 1, 7, 30, or 60 days

    • Performance vs Forecast: how did this node perform against its forecast in the last 1, 7, 30, or 60 days? For email and SMS/MMS, this uses channel-specific metrics only.

  • Exception: channel-specific metrics do not apply to push, webhook, and third-party channel campaigns

Action Node (Non-Campaigns)

  • Sample Nodes: Add/Update data on profile, Sync to List, Sync to Programmatic, Sync to Facebook, Sync to Google Ads, Sync to Yahoo DSP

  • Data in Active State: Comparative Performance, Forecasted Performance, Performance vs Forecast

  • Definitions:

    • Comparative Performance: how is this node performing against itself compared to a previous time period in the last 1, 7, 14, 30, or 60 days? If the experience was last activated less than 60 days ago, the maximum time period is [active days] divided by 2.

    • Forecasted Performance: how many times this node with the specified criteria is expected to trigger in this sequence in the next 1, 7, 30, or 60 days

    • Performance vs Forecast: how did this node perform against its forecast in the last 1, 7, 30, or 60 days?

Split Node

  • Sample Nodes: Split by Audience

  • Data in Active State: Comparative Performance, Forecasted Performance, Performance vs Forecast

  • Definitions:

    • Comparative Performance: how is this node performing against itself compared to a previous time period in the last 1, 7, 14, 30, or 60 days? If the experience was last activated less than 60 days ago, the maximum time period is [active days] divided by 2.

    • Forecasted Performance: how many times this node with the specified criteria is expected to trigger in this sequence in the next 1, 7, 30, or 60 days

    • Performance vs Forecast: how did this node perform against its forecast in the last 1, 7, 30, or 60 days?

  • Exception: forecasting does not apply to non-audience splits

Delay Node

  • Sample Nodes: Delay for a set amount of time, Delay based on a previous event in experience, Delay until a specific time, Delay via Throttle

  • Data in Active State: Comparative Performance

  • Definition:

    • Comparative Performance: how is this node performing against itself compared to a previous time period in the last 1, 7, 14, 30, or 60 days?
