Best Practices - How to maintain your workflows in production

Emil_Kos
17 - Castor

Hi,

 

I have a question regarding how you would structure your workload.

 

I work for one client, and we have workflows for different countries. For each country, we have two or three workflows. Each country uses data from different tables; the data is the same for all of a country's workflows, but each workflow applies different transformations.

 

We don't want to combine each country's workflows into one, as we often need to rerun just one of them.

 

In the current setup we run the workflows one after another, but a big chunk of the processing time goes into extracting the data.

 

Current setup:

 

1st workflow
2nd workflow
3rd workflow

 

During development I have a workflow 0 that extracts the data to YXDB files, and I would like to know whether I should use that approach in production as well.

 

Development setup:

 

0 - Extracts the data from the database and saves it as YXDB files (a rough sketch of this pattern follows the list)

1st workflow
2nd workflow
3rd workflow
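For illustration only, here is a minimal Python sketch of that "workflow 0" pattern. The connection string, table names, and the use of Parquet files in place of YXDB are assumptions for the example, not part of the actual setup.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection; the real setup would point at the client's database.
engine = create_engine("postgresql://user:password@host/db")

# "Workflow 0": extract each shared table once and persist it locally,
# so the downstream workflows never hit the database themselves.
for table in ["orders", "customers", "products"]:       # assumed table names
    df = pd.read_sql(f"SELECT * FROM {table}", engine)
    df.to_parquet(f"{table}.parquet", index=False)      # stands in for a .yxdb extract

# Workflows 1-3 then read the local extracts instead of re-querying the database:
orders = pd.read_parquet("orders.parquet")
```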

 

I would love to hear others' opinions on this topic.

 

Felipe_Ribeir0
16 - Nebula

Hi @Emil_Kos 

 

I like the approach below because it does not need your workflow 0:

 

1) On the first execution, I read the data from the DB and save it to a YXDB file.

2) On subsequent executions, by enabling/disabling the containers I can choose whether to read the data from the DB again or load the data saved during the first execution.

 

[screenshot of the workflow showing the enable/disable containers]
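For anyone reading along, a rough Python sketch of that enable/disable-container logic, assuming a boolean flag in place of the two containers and a Parquet file in place of the YXDB cache (both illustrative assumptions):

```python
from pathlib import Path

import pandas as pd
from sqlalchemy import create_engine

USE_CACHE = True                           # "disable" the DB container, "enable" the cache container
CACHE_FILE = Path("extract_cache.parquet")

def load_data() -> pd.DataFrame:
    if USE_CACHE and CACHE_FILE.exists():
        # Second and later executions: reuse the data saved on the first run
        return pd.read_parquet(CACHE_FILE)

    # First execution: read from the database and persist the extract
    engine = create_engine("postgresql://user:password@host/db")   # placeholder DSN
    df = pd.read_sql("SELECT * FROM source_table", engine)         # placeholder query
    df.to_parquet(CACHE_FILE, index=False)
    return df

data = load_data()
```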

 

 

 

MarqueeCrew
20 - Arcturus

@Emil_Kos,

 

Reading/writing the data is often the biggest time cost in a workflow. If you read the data only once, then the time is what the time is. I have seen many workflows that were split up take even longer to run than a single workflow, because each piece starts with reading and ends with writing. If the processes are sequential, you might not want to split the jobs. An added benefit of a single workflow is that you don't fill up your disk with caches of intermediate data.

 

If you want to engineer a cache read, you could use a Directory tool to check whether the data is present as a cache (YXDB) and send that path to a Dynamic Input tool for reading. If the YXDB version is not available, use the flat version instead. This is a slight modification of the suggestion made by @Felipe_Ribeir0.
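Roughly, the logic that Directory-tool check would implement, sketched in Python. The cache/flat file names and the 24-hour freshness window are illustrative assumptions, not part of the original suggestion:

```python
import time
from pathlib import Path

import pandas as pd

CACHE = Path("country_extract.parquet")   # stands in for the .yxdb cache the Directory tool would find
FLAT = Path("country_extract_full.csv")   # stands in for the flat version of the data
MAX_AGE_SECONDS = 24 * 60 * 60            # assumed freshness window: one refresh per day

def read_extract() -> pd.DataFrame:
    cache_is_fresh = CACHE.exists() and (time.time() - CACHE.stat().st_mtime) < MAX_AGE_SECONDS
    if cache_is_fresh:
        return pd.read_parquet(CACHE)     # cache hit: cheap read
    df = pd.read_csv(FLAT)                # cache miss: fall back to the flat version
    df.to_parquet(CACHE, index=False)     # refresh the cache for the next run
    return df
```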

 

Cheers,

 

Mark

Alteryx ACE & Top Community Contributor

Chaos reigns within. Repent, reflect and restart. Order shall return.
Please subscribe to my YouTube channel.
Emil_Kos
17 - Castor

Hi @Felipe_Ribeir0.

 

Thank you for sharing your knowledge. 

 

I want to schedule everything on the server, and I would like to create a process that is efficient and easy to maintain.

 

I rerun the workflows very often, and I don't want to have to open them or change anything each time.

I didn't mention that I also have configuration files that can change during the day, so we have two types of data sources:

 

1. Database - one refresh per day is enough

2. Small Excel configuration files

 

Sometimes, if we see that the data doesn't make sense, we need to open the Excel files and change some mapping tables.

After that change, I can run the workflow from the server without making any changes, which wouldn't be possible in the proposed scenario.

 

 

SeanAdams
17 - Castor

Hey Emil - happy Friday!

Copying @atcodedog05 (Kiran) for a view here - this is similar to a challenge that @mceleavey was working through.

My approach, wherever you need to manage throughput time or manage change, is to break things down into smaller chunks, which you can then run as delta flows.

For example - for each of your country files:
- Create a small Alteryx canvas that just gets the latest data that has changed since your last call and brings it into a DB. If you want to reload just one country, you can have a table called "ETLRefreshRows" which contains the keys you want to reload. This way your loaders are taken care of; they run well ahead of time, and by the time you want to do the next step you already have the data. Load this data into raw tables with a date-updated stamp, and preferably a history (a sketch of this delta query follows the list).
- Then your enrichment layer: since you have a temporal stamp on your raw tables, you only need to enrich the rows that have changed since you last looked. And because you're running your data loaders in small bites, you can run your enrichment layer in small bites too.
- Final analytics: your final analytics is now almost fully in-database, because you've been loading the data in small bites every few minutes and enriching it in small bites every few minutes, so when it comes time to do the analytics you're only limited by the speed of the DB for aggregations and filters.
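To make the delta-load idea concrete, here is a minimal Python/SQL sketch. The table names (source_orders, etl_refresh_rows, raw_orders), column names, and connection string are assumptions for the example; the point is the WHERE clause that limits each run to changed rows plus any keys flagged for a forced reload.

```python
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:password@host/db")    # placeholder DSN

def delta_load(country: str, last_run_ts: str) -> pd.DataFrame:
    # Pull only rows changed since the previous run, plus keys queued for a forced reload
    query = text("""
        SELECT *
        FROM source_orders
        WHERE country = :country
          AND ( last_updated > :last_run_ts
                OR order_id IN (SELECT order_id
                                FROM etl_refresh_rows
                                WHERE country = :country) )
    """)
    changed = pd.read_sql(query, engine,
                          params={"country": country, "last_run_ts": last_run_ts})

    # Land the delta in a raw table stamped with the load time, so the
    # enrichment layer can also work only on rows it has not seen yet.
    changed["loaded_at"] = pd.Timestamp.now(tz="UTC")
    changed.to_sql("raw_orders", engine, if_exists="append", index=False)
    return changed
```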

 

So my rule is: if you need to run a complex piece of analysis from many different sources, and it needs to run in a particular time window every day, break it up into chunks and schedule those chunks as micro-batches.

 

 

BTW - as a fun note - this is how Spark achieves near-real-time data flows - just make your batches very small and very frequent.    

BTW2 - since you're now living in micro-batches, it's simple to recover, simple to reload just a subset, and you get the benefit of much easier canvases to maintain, because each canvas is pretty lightweight (get the data from here, do some simple cleanup and transformation, dump it there into a raw table).

Emil_Kos
17 - Castor

Hi @SeanAdams,

Thank you for sharing. I appreciate your insights. I am unsure whether I can identify what was added or changed in the data source, as I am using the Salesforce connector and I don't think some of the tables carry information about the last update...

 

I think I will go with the super simple setup that I proposed in my first post, but I might tweak it a bit based on @MarqueeCrew's response:

 

0 - Extracts the data from the database and saves it as YXDB files

1st workflow
2nd workflow
3rd workflow

 

@MarqueeCrew, I wasn't aware I could reuse the cache on the server. Will I be able to access it using the Engine temp file path, or is there a different method?


 
