... although I'm not sure it will help: I looked at the code on GitHub, and it appears that it reads the input in chunks but still does all the reads in one loop, appending to a single large data frame (which would probably run out of memory).
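For illustration, here's a minimal sketch of the chunk-and-discard pattern that avoids the problem: each chunk is processed and released instead of being appended to one growing DataFrame. The column name `value` and the per-chunk aggregation are hypothetical stand-ins for whatever your Python tool actually does.

```python
import io
import pandas as pd

def process_in_chunks(source, chunksize=100_000):
    """Read a CSV in fixed-size chunks and process each one
    independently, so memory use stays bounded by the chunk size
    rather than the full dataset."""
    totals = []
    for chunk in pd.read_csv(source, chunksize=chunksize):
        # hypothetical per-chunk work: aggregate, then let the chunk go
        totals.append(chunk["value"].sum())
    return sum(totals)

# small in-memory CSV standing in for a large input file
csv_data = "value\n" + "\n".join(str(i) for i in range(10))
print(process_in_chunks(io.StringIO(csv_data), chunksize=3))  # → 45
```

The key point is that nothing here calls `pd.concat` on the accumulated chunks; only the small per-chunk results are kept.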
So an alternative would be to put your Python code into an iterative macro that processes some number of rows at a time: send the first N rows into your Python tool, whose output goes to the iterative macro's final output, and route the remaining rows (skipping the first N) directly to the macro's Loop output.
I agree that I'll probably still run out of memory, but I'll try it for fun. The iterative macro is a good idea that I hadn't thought of. It's a bit of a workaround rather than a solution, though, given that I don't feel my dataset is larger than what an Alteryx tool should be able to handle. In fact, the dataset itself was generated in Alteryx, and represents about half the data I had there.