Hi everyone,
I’m creating a workflow in Alteryx that will call a Macro on a scheduled basis. The Macro receives the following inputs via a Text Input tool:
Source Oracle table name
Filter / WHERE clause (if applicable)
Destination database/schema name in Snowflake
Normally, the Macro processing is straightforward: I use Input Data (Oracle) and Output Data (Snowflake) to establish connections, and a few Control Parameters allow the Macro to sequentially move a set of tables from Oracle to Snowflake.
However, this time I’m encountering some CLOB and BLOB fields that require additional processing, and I need some guidance.
I have two main questions:
1. Dynamic routing based on data type
a) How can I dynamically inspect an incoming table to identify the data type of each column? (There's a rough query sketch after this list showing the kind of information I mean.)
b) Once the data types are identified, how can I route them to different pipelines for processing? For example:
Text, Date, Number → Default Snowflake datatypes
BLOB → Use the BLOB Convert tool
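For context, the per-column information I'm after is what Oracle exposes in its data dictionary. Outside of Alteryx I'd get it with a query along these lines (the owner and table name are placeholders that would come from the Text Input parameters); I'm just not sure of the best way to run this and use the result to drive the routing inside the macro:

```sql
-- Rough sketch: list each column of one source table with its Oracle data type.
-- MY_SCHEMA and MY_TABLE are placeholders for the values passed into the macro.
SELECT column_name,
       data_type,        -- e.g. VARCHAR2, NUMBER, DATE, CLOB, BLOB
       data_length,
       data_precision,
       data_scale
FROM   all_tab_columns
WHERE  owner      = 'MY_SCHEMA'
AND    table_name = 'MY_TABLE'
ORDER  BY column_id;
```

Running something like this through an Input Data tool would give me one row per column with its data type, which I could then filter on, but I don't know whether that's the recommended pattern in a macro.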
2. Handling CLOBs
The sample table I tested on doesn't have large CLOB values, but the workflow still times out at this step. I assume some additional processing is needed to move CLOB data. How should I handle this efficiently in Alteryx?
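One idea I've been experimenting with (not sure if it's the right approach) is to avoid pulling the column as a LOB at all and instead convert it to a regular string in the source query, since my values are small. Something like the following in the Input Data tool's SQL, where the table and column names are placeholders and the 4000-byte limit is an assumption that my values fit:

```sql
-- Rough sketch: return the CLOB as plain VARCHAR2 so the driver doesn't need a
-- separate LOB fetch per row. Names are placeholders; assumes each CLOB value
-- fits within 4000 bytes.
SELECT id,
       DBMS_LOB.SUBSTR(notes_clob, 4000, 1) AS notes_text
FROM   my_schema.my_table;
```

If a value can exceed 4000 bytes this would truncate it, so I'd still like to know the proper Alteryx-side way to handle larger CLOBs.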
I’m relatively new to Alteryx and have only used a limited set of tools, so any tips, best practices, or example workflows for handling CLOBs and BLOBs in a scheduled Macro would be greatly appreciated.
Thank you in advance for your guidance!
@SSaib Interesting question. Lots going on here. It's been a while since I had to deal with CLOBs, so I'm not going to be of much help there, but regarding your field types and dynamic handling, I would recommend you take a look at the [Field Info] Tool. It will tell you the data types of your fields in table form. Then you can use the [Filter] Tool along with [Control Container]s, for example, to drive logic based on the field types.
See if that gets you started, and if you have more specific questions from there, post a reply back here. -Jay
Hi Jay,
Thanks for your feedback and suggestions.
As recommended, I tried using the Field Info tool followed by a Filter to separate fields by datatype. Since this workflow is driven by a macro that processes different source tables dynamically, this approach seemed like a logical starting point. However, I’m still not entirely clear on how a Control Container would help in this case, as the Filter already segregates the data types. Here's what the current flow looks like:
Areas where I’m stuck
Example source tables
The Oracle source tables passed into the macro could look like any combination of the following:
Each table is extracted via the macro and loaded into its corresponding table in Snowflake.
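To make the target side concrete, here is roughly the kind of Snowflake table I expect each source to land in. The names are made up, and the CLOB → VARCHAR / BLOB → BINARY mapping is my assumption about the appropriate Snowflake types rather than something I've confirmed:

```sql
-- Hypothetical target table in Snowflake (placeholder names).
-- Assumes Oracle CLOB maps to VARCHAR (up to 16 MB) and BLOB maps to BINARY (up to 8 MB).
CREATE TABLE my_db.my_schema.my_table (
    id          NUMBER,
    load_date   DATE,
    notes_text  VARCHAR,   -- from Oracle CLOB
    attachment  BINARY     -- from Oracle BLOB
);
```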
This feels like a common use case, so I may be overthinking or missing a simpler design pattern. I’d really appreciate any guidance or best practices for handling mixed datatypes in a macro-driven workflow.
Thanks again for your help.