Alteryx Server Discussions

Find answers, ask questions, and share expertise about Alteryx Server.

Titlecase caching in Transpose on MongoDB scheduled runs

salbol1
8 - Asteroid

Hello -

Looking for some direction on this. We have many scheduled workflows feeding a Synapse db, where the outputs from the modules create csv files used to load the metadata and data associated with a table, with dynamic changes accounted for. I know there is a caching problem with the Transpose tool where it hangs onto titlecase with a death-grip, so if the field 'Month' has been coming through and our customer starts feeding us 'month', the dynamic handling of that change is nixed by Transpose on our outputs. That Transpose caching then throws off subsequent Joins and other case-dependent alignment functions/tools in the modules.
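To show what we're seeing in plainer terms, here is a minimal sketch (plain Python, made-up field names, nothing Alteryx-specific) of the failure mode: anything keyed on the exact string 'Month' stops matching the moment the source flips to 'month'.

```python
# Minimal illustration (not Alteryx) of the case-sensitivity problem described above:
# downstream logic keyed on the exact field name 'Month' stops matching once the
# source starts sending 'month'.
expected_fields = {"Month", "Region", "Sales"}   # hypothetical field names

def misaligned(incoming_header):
    """Return incoming field names that no longer match, case-sensitively."""
    return [f for f in incoming_header if f not in expected_fields]

print(misaligned(["Month", "Region", "Sales"]))  # [] -> joins line up
print(misaligned(["month", "Region", "Sales"]))  # ['month'] -> joins/alignment break
```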

 

So my question is: in a server environment with MongoDB in play, is there something we can wipe run over run in the Persistence or Engine path that will short-circuit Transpose from reading in metadata from the workflow that is about to change (our 'Month' to 'month' example)?

 

As a best practice remedy on go-forward builds, we have told our teams that the first step will be to put in a Dynamic Rename tool after the data input function so that all fields will be TitleCased from their inception.
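In Dynamic Rename this is the formula rename mode with an expression along the lines of TitleCase([_CurrentField_]), if I have the syntax right. For anyone doing the same normalization outside of Alteryx, a rough Python equivalent (file names here are hypothetical) would be:

```python
import csv

def titlecase_headers(in_path, out_path):
    """Rewrite a csv so every field name is TitleCased on the way in --
    the same idea as dropping a Dynamic Rename right after the Input tool."""
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        header = next(reader)
        writer.writerow([name.strip().title() for name in header])  # 'month' -> 'Month'
        writer.writerows(reader)

# titlecase_headers("customer_feed.csv", "customer_feed_titlecased.csv")  # hypothetical paths
```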

 

Transpose.PNG

My request here is to deal with the legacy workflows we have in place, which number in the hundreds.
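As a first pass at scoping that cleanup, a quick scan of the saved workflow XML can flag which modules still reference the old casing; my understanding is that tools like Transpose keep the selected field names in the .yxmd configuration itself rather than re-reading them from the data each run. A rough sketch (the share path below is made up):

```python
from pathlib import Path

def workflows_referencing(field_name, workflow_dir):
    """Yield .yxmd workflow files whose saved XML still contains the old-cased field name."""
    for path in Path(workflow_dir).rglob("*.yxmd"):
        if field_name in path.read_text(encoding="utf-8", errors="ignore"):
            yield path

# for wf in workflows_referencing("Month", r"\\server\workflow_share"):  # hypothetical share
#     print(wf)
```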

I just don't know if this is simply the nature of the bug associated with Transpose, or whether running workflows 'retain' some type of metadata from run to run as an efficiency measure baked into the server scheduling functionality for workflows.

2 REPLIES
PanPP
Alteryx Alumni (Retired)

Hi @salbol1 

 

If I understand correctly, you can wipe out the results of a workflow, the schedule of a workflow, or uploaded files after a set number of days via the Persistence options in the Alteryx System Settings, if that is what you are looking for.

 

Persistence.png

Hope this helps and answers your question regarding the persistence layer. If it does, please like this post, and if it resolves your problem, mark it as a solution. If you have any other questions, please let us know.

salbol1
8 - Asteroid

Well, it certainly can't hurt to create a test version of one of the modules and push those persistence settings from our current 14 days down to 0. If a load that runs with today's casing comes through with the modified case tomorrow, when we introduce lowercased fields as the 'new' datasource, then we've got our answer. I guess the root of it is understanding, for a scheduled module, how the XML captures those characteristics in the persistence layer upon creation, and how the engine goes back to those workflow elements from the controller on the next run.

Our norm on scheduled workflows is the "Run From Disk" selection rather than the "Run From Scheduler" option, so that changes to the module only require a save rather than an update in the Scheduler. Any opinions on how choosing between those options may impact what the engine reads in for the module that is about to run?

salbol1_0-1670017954575.png