Hi all,
I am looping a workflow via an R block (with a for loop). The R block iteratively reads inputs from my PC (one file per iteration), the workflow then processes the data, and finally a .csv output is produced. Ideally, each iteration should read its input from my PC (via the R block) independently, but after each iteration the inputs accumulate. How can I erase, say, the inputs from the first iteration once the second iteration begins?
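Roughly, the R block looks something like the sketch below (simplified; the file names and the processing step are placeholders, not my actual code):

```r
# Simplified sketch of the R block - file names and the processing step are placeholders
files <- c("Input_1.csv", "Input_2.csv", "Input_3.csv")

all_data <- data.frame()                     # this grows across iterations
for (f in files) {
  new_data <- read.csv(f)                    # read one file per iteration
  all_data <- rbind(all_data, new_data)      # earlier inputs are still carried along here
  # ... the rest of the workflow then sees the accumulated data ...
}
write.csv(all_data, "Output.csv", row.names = FALSE)
```

What I would like instead is for each pass of the loop to work only with the file read in that pass.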
I hope I have explained myself properly.
The workflow is similar to the one below
Cheers
Hi @omar_velor ,
Are you saying you wish to read in a file, process it and output it before running the next one, rather than read them all in and output them all at once?
Are you writing to different files?
M.
Hello @mceleavey,
I have created/attached a very simple workflow that illustrates what I am trying to achieve.
My goal is to run the workflow for each input and obtain/save the corresponding output. Right now all inputs merge together, go through the workflow, and create a single output file. How can I load the first input, obtain the first output, and so on, for as many inputs as I have?
Hi @omar_velor ,
I don't understand why you're using R when the functionality is standard in Alteryx.
Regardless, I've built the functionality I think you're after.
The primary workflow uses the Directory tool to load the full paths of all files that meet the requirements, in this case all "Input" files.
This FullPath field is then fed into the Control Parameter of the batch macro.
This overwrites the path in the Input tool, so the macro will load all the files in sequence.
The Formula tool then changes "input" to "output" in the path, and the Output tool takes the output path from this field.
This should create an output for every file.
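In R terms, the macro is doing roughly the equivalent of the loop below. This is only a sketch to show the pattern (the paths are examples); the native tools handle all of it for you:

```r
# Rough R equivalent of the Directory tool + batch macro pattern (paths are examples)
paths <- list.files("C:/data", pattern = "Input.*\\.csv$", full.names = TRUE)

for (p in paths) {
  dat <- read.csv(p)                                     # Control Parameter: overwrite the Input tool path
  # ... processing ...
  out <- sub("input", "output", p, ignore.case = TRUE)   # Formula tool: change "input" to "output"
  write.csv(dat, out, row.names = FALSE)                 # Output tool: write to the derived path
}
```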
I've attached the workflow and the macro (which you will need to save in your macro folder).
I hope this helps,
M.
Hi @mceleavey,
I am reading right now about the Dynamic Input and Directory tools - is this the standard solution you are referring to?
I can see why this is way more efficient, but can you help me to understand how to achieve my actual task:
The workflow I am actually running reads 7 different .csv files (each one with different headers, lengths, etc.), processes them, and creates 1 .csv output.
I have 84 folders, each containing those 7 files, so ideally I will end up with 84 outputs.
What is the best approach you can suggest for this?
If your files are in different formats, you'll first need to build a process that standardises them into a single format.
To read in multiple files from multiple folders, make sure you have recognisable naming conventions, then point your Directory tool to the parent folder of all the files and check the "Include Sub-Folders" option.
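Whether you build that standardising step as a batch macro or keep it in the R tool, the per-folder pattern is roughly the one below (a sketch only - the folder names and the standardising step are placeholders):

```r
# Sketch of the per-folder pattern - folder names and the standardising step are placeholders
folders <- list.dirs("C:/data/parent", recursive = FALSE)    # e.g. the 84 sub-folders

for (folder in folders) {
  files <- list.files(folder, pattern = "\\.csv$", full.names = TRUE)   # the 7 csv files
  standardised <- lapply(files, function(f) {
    dat <- read.csv(f)
    # ... map this file's headers onto the common format here ...
    dat
  })
  combined <- do.call(rbind, standardised)                    # only valid once the formats match
  write.csv(combined, file.path(folder, "output.csv"), row.names = FALSE)
}
```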
I can jump on a call for ten minutes if that would help?
DM me if that would be beneficial.
M.