What are some best practices for incrementally processing a very large dataset? By that, I mean a workflow that queries a subset of a very large dataset, manipulates it, and then either deletes from/appends to the original source or saves the results to a new one.
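For concreteness, here is roughly the pattern I have in mind, sketched in plain Python (the database, table, and column names are just placeholders, not my actual data):

```python
import sqlite3

BATCH_SIZE = 100_000  # rows per batch; tune to available memory

src = sqlite3.connect("source.db")   # hypothetical source
dst = sqlite3.connect("target.db")   # hypothetical destination
dst.execute("CREATE TABLE IF NOT EXISTS results (id INTEGER, value REAL)")

offset = 0
while True:
    # Pull one slice of the large table
    rows = src.execute(
        "SELECT id, value FROM big_table ORDER BY id LIMIT ? OFFSET ?",
        (BATCH_SIZE, offset),
    ).fetchall()
    if not rows:
        break

    # Manipulate the slice (placeholder transformation)
    transformed = [(rid, value * 2) for rid, value in rows]

    # Append the processed batch to the destination before moving on
    dst.executemany("INSERT INTO results (id, value) VALUES (?, ?)", transformed)
    dst.commit()

    offset += BATCH_SIZE
```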
I have been experimenting with batch macros that have control parameters and no input/output. They run fine on a single batch, but when I feed in a large number of batches they seem to run indefinitely without doing anything. Any ideas why this wouldn't work? My only theory is that the macro is trying to run the batches in parallel and that is causing problems. Is there a better approach?