Hi all - the file I'm trying to input is a fixed-width .txt file. I have tried loading it as a pipe-delimited CSV, as a flat file, and as a CSV without delimiters. Each time, the workflow stalls once the input node reaches around 45 million records, even though a previous row count shows the file has 216 million - the workflow reports it is only 21% loaded. Eventually, the workflow times out and says it is complete without producing any errors or warnings. I know it's not the workflow itself, because it has worked on the same type of file, with the same schema, containing even more rows. Does anyone know why this may be happening?
Is it possible that the file itself is corrupted? Can you request a fresh export of that file and see whether the same thing happens?
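One way to check for corruption before re-exporting: since the file stalls at a consistent point (~45M records, ~21%), a malformed record around there is a likely culprit. Here's a minimal Python sketch that scans a fixed-width file for lines whose length doesn't match the expected record width - note that the path and `EXPECTED_WIDTH` are placeholders you'd need to adjust for your file:

```python
# Scan a fixed-width text file for malformed records.
# PATH and EXPECTED_WIDTH are assumptions - set them for your file.
PATH = "data/export.txt"   # placeholder path to your .txt file
EXPECTED_WIDTH = 120       # placeholder: expected characters per record

bad = []
with open(PATH, "rb") as f:  # binary mode so decode errors can't mask the problem
    for lineno, line in enumerate(f, start=1):
        # Strip the line terminator and compare against the fixed width
        if len(line.rstrip(b"\r\n")) != EXPECTED_WIDTH:
            bad.append((lineno, len(line)))
            if len(bad) >= 10:  # stop after the first few anomalies
                break

print(f"found {len(bad)} malformed line(s)")
for lineno, length in bad:
    print(f"line {lineno}: length {length} (expected {EXPECTED_WIDTH})")
```

If this flags a line somewhere around record 45 million (an embedded newline, a truncated record, or a stray control character would all show up as a wrong-length line), that would point to a bad export rather than a problem with the workflow.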