I'm creating an iterative macro and have 2 options for the input data:
1) I could make 6 macro inputs, or
2) I could make 1 macro input (iterative), and have the other 5 inputs be regular .yxdb input files inside the macro itself.
Which would be faster? I could answer my own question by building both, I suppose, but I'm mostly curious whether, with #2, those files are essentially "reloaded" within the macro on every iteration. Any idea if that is true?
I assume making them macro inputs means the data in those files only has to be pulled in once, but I'm not 100% sure that's correct.
Curious if anyone out there can clarify.
@cwkoops you are correct: the data files would be functionally reloaded on each pass. That said, there is likely an opportunity upstream in the workflow to tighten up some data blending steps before iterating through records. This helps ensure the iterative macro runs a finite number of passes, aligned with the record count from the original inputs. In cases where you have decrementing logic (e.g. inventory allocations, expenditures to cash flow, etc.), your macro risks significant performance issues if dependent formula calculations attempt to deplete values in a specific field but can't due to some underlying condition, so the loop keeps spinning longer than expected.
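To make the cost concrete, here's a minimal sketch of the difference in pseudo-Python (this is an analogy, not Alteryx code; `read_lookup_file` is a hypothetical stand-in for reading a .yxdb file from disk). Option 2 pays the read cost once per iteration; option 1 pays it once total:

```python
load_count = 0

def read_lookup_file():
    """Hypothetical stand-in for reading a .yxdb input from disk."""
    global load_count
    load_count += 1
    return {"A": 1, "B": 2}

# Option 2 behavior: a file inside the macro is re-read on every pass.
def run_iterative(records, iterations):
    for _ in range(iterations):
        lookup = read_lookup_file()  # reloaded each iteration
        records = [r for r in records if r in lookup]
    return records

result = run_iterative(["A", "B", "C"], iterations=5)
reloads_option2 = load_count  # 5 reads: one per iteration

# Option 1 behavior: a macro input is materialized once, upstream,
# and every iteration reuses the in-memory data.
load_count = 0
lookup = read_lookup_file()  # loaded a single time
for _ in range(5):
    filtered = [r for r in result if r in lookup]
reloads_option1 = load_count  # 1 read total
```

So with 5 inputs and many iterations, the reload overhead in option 2 multiplies with the iteration count, which is why pushing static data in through macro inputs (or joining it upstream before the macro) tends to be faster.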
Hope that helps you out!