Good afternoon. I would like to develop a workflow that processes records in batches: the fuzzy match would run on 35,000 records at a time, store each batch's results in a temporary file, and, once all records have been processed, unify the results.
The goal is to run the analysis on more than 1 million records, but partitioned this way to reduce the load on the server.
Any ideas on how to carry out this analysis?
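To make the idea concrete, here is a minimal sketch of the batched approach, assuming Python with the stdlib `difflib` for the fuzzy scoring. The function names, file layout, and matching logic are illustrative assumptions, not a specific tool's API; the point is the pattern of chunking, writing per-batch temp files, and unifying at the end:

```python
import csv
import difflib
from pathlib import Path

CHUNK_SIZE = 35_000  # records per batch, as described above


def fuzzy_score(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] via the stdlib SequenceMatcher."""
    return difflib.SequenceMatcher(None, a, b).ratio()


def process_in_chunks(records, reference, out_dir, chunk_size=CHUNK_SIZE):
    """Run the fuzzy match chunk_size records at a time,
    writing each batch's results to its own temporary CSV."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    part_files = []
    for i in range(0, len(records), chunk_size):
        chunk = records[i:i + chunk_size]
        part = out_dir / f"part_{i // chunk_size:05d}.csv"
        with part.open("w", newline="") as f:
            writer = csv.writer(f)
            for rec in chunk:
                # Hypothetical match step: best reference by similarity.
                best = max(reference, key=lambda ref: fuzzy_score(rec, ref))
                writer.writerow([rec, best, f"{fuzzy_score(rec, best):.3f}"])
        part_files.append(part)
    return part_files


def unify(part_files, final_path):
    """Concatenate the per-batch temp files into one result file."""
    with open(final_path, "w", newline="") as out:
        for part in part_files:
            out.write(part.read_text())
```

In practice you would replace `fuzzy_score` with whatever matching engine you are using; the chunk boundaries and the per-batch temp files are what keep memory and server load bounded while still letting you merge everything at the end.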