I want to query a number of records large enough to exceed our 50 GB working-memory capacity, process them (no aggregation), and upload the results to a SQL Server table.
I'm trying to mimic pandas' chunksize functionality, where a chunk of records is read from a source, processed, and then dropped from memory as the iteration moves on to the next chunk.
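For reference, the pandas pattern I'm describing looks roughly like this. This is just a self-contained sketch using an in-memory SQLite database to stand in for the real source and destination (in practice the destination would be SQL Server via a SQLAlchemy connection), and `process` is a hypothetical placeholder for whatever per-chunk transformation is needed:

```python
import sqlite3
import pandas as pd

def process(chunk):
    # hypothetical per-chunk transformation: uppercase a text column
    chunk["name"] = chunk["name"].str.upper()
    return chunk

# small demo source table (stands in for the real 50 GB+ query)
src = sqlite3.connect(":memory:")
pd.DataFrame({"id": range(10), "name": list("abcdefghij")}).to_sql(
    "source_table", src, index=False
)

dest = sqlite3.connect(":memory:")

# read in chunks: each chunk is read, processed, appended to the
# destination, and then freed before the next chunk is fetched
for chunk in pd.read_sql("SELECT * FROM source_table", src, chunksize=3):
    process(chunk).to_sql("dest_table", dest, if_exists="append", index=False)
```

Memory usage stays bounded by the chunk size rather than the full result set, which is the behavior I'm after.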
The throttle tool does not accomplish this.
I don't see how batch macros accomplish this either, since they union all records back together on the other side, which I want to avoid.
Any suggestions on this?