FAQs (Data)

“It’s not the answer that enlightens, but the question.” - Eugène Ionesco
To respond to your needs more quickly and appropriately, we have compiled the most frequently asked questions about Zeta Data below.
Question | Answer |
---|---|
Why is my Data Flow not generating files anymore? | This can happen if your selected query does not return JSON, which causes the data flow to fail. To resolve this, fix the Snowflake query so that it returns the correct JSON format (see the example query after this table). |
We have had the following data flow alerts:
CODE
and
CODE
Can you please advise whether this means data has not been exported by the data flow? At the moment we’re not sure what to do in response to this error. |
The error could have been caused by anything that disrupted the data flow run, such as a transient network failure. However, the retry logic built into data flows allowed the run to retry and load the data successfully. |
My data flow was used to carry out a one-time historical load of events, handling an estimated 65 million records. Unfortunately, it failed during execution and didn't produce the expected file name, as shown in the provided error log. What might have led to this problem? | The maximum file size for a single-file unload (5368709120 bytes, i.e. 5 GiB) was exceeded (see the unload sketch after this table for one way to avoid this limit). |
Why does my file indicate a 'success' status yet fail to appear in the designated destination folder? | If the file isn't showing up in the destination, we can check on our end whether the sink is active or inactive for you. |
Why is my file in a report being skipped by data flows? | If a file is skipped, it could be one of the following:
There may be additional scenarios not covered in the list above. To determine why your file was skipped, review these scenarios.
|
Why did my data flow run a day before the scheduled time? | By default, a data flow will automatically perform a "catch-up" execution in certain situations if it has missed its last scheduled run. We recently added a feature that lets you disable this catch-up execution. It can be enabled when setting the data flow's schedule by configuring an account feature flag called "Strict Schedule Node", which is useful when the flow needs to fetch a file only at a specific time (instead of as soon as the file is dropped). |
Can we re-map email MD5? | Yes, but it requires some pre-work to enable the field. |
Do we support single and double quotes as text qualifiers? | |
Are all properties represented? | Yes, implicit and existing people properties will be represented in the drop-down list |
Can we change a data type from this mapping? | No, but the standard rules apply for imports; i.e., we will infer the data type of a new field or abide by existing data types where possible |
Can we map into contacts, e.g. “email” to “contact value”? | Not at this time, but this has been added to the feedback list above |
Can we map to objects? | Yes, from a column to an individual object property, but not from object to object or from object to property. Keep in mind that we would update the whole object even if only one element is included, so there is potential to nullify data (see the illustration after this table). |
How should we handle cases where both | Currently, we only support mapping into |
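
For the question about a Data Flow that stops generating files, the usual fix is to make the Snowflake query return its output as JSON. The sketch below is only an illustration under that assumption; the table and column names (analytics.events, event_id, email_md5, event_time) are hypothetical, and the exact shape your data flow expects may differ.

```sql
-- Hypothetical example: wrap the selected columns into a single JSON
-- object per row so the data flow receives valid JSON.
-- Table and column names are placeholders.
SELECT
    TO_JSON(
        OBJECT_CONSTRUCT(
            'event_id',   event_id,
            'email_md5',  email_md5,
            'event_time', event_time
        )
    ) AS payload
FROM analytics.events
WHERE event_time >= DATEADD(day, -1, CURRENT_TIMESTAMP());
```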
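
For the historical load that exceeded the 5368709120-byte (5 GiB) single-file limit, one way around the cap, assuming the data flow unloads via a Snowflake COPY INTO command, is to let Snowflake split the output into multiple files. The stage name, table name, and file size below are hypothetical.

```sql
-- Hypothetical sketch: SINGLE = FALSE lets Snowflake split the unload
-- into multiple files, avoiding the 5 GiB cap on single-file unloads.
COPY INTO @my_export_stage/historical_events/
FROM (SELECT OBJECT_CONSTRUCT(*) FROM analytics.events)
FILE_FORMAT = (TYPE = JSON)
SINGLE = FALSE                -- allow multiple output files
MAX_FILE_SIZE = 1073741824;   -- ~1 GiB per file, well under the cap
```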
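
The note about mapping to objects warns that updating a whole object with only one mapped element can nullify the rest of it. The toy query below, with made-up field values, simply illustrates that effect; it is not how the mapping itself is implemented.

```sql
-- Hypothetical illustration: if only 'city' is mapped, the update
-- replaces the entire object, dropping 'street' and 'zip'.
SELECT
    OBJECT_CONSTRUCT('street', '1 Main St', 'city', 'Boston', 'zip', '02110') AS address_before,
    OBJECT_CONSTRUCT('city', 'New York')                                      AS address_after;
```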