I have a .csv file that is getting the "Too Many Fields in Record 2" error and won't advance past the Input tool. This particular file comes to me with the first row completely empty, and the data starts on row 2. My work-around is to open the file and delete the first row; this allows my workflow to run fine and return the expected data.
I don't want to open or touch the file, just bring it into the workflow, but I can't find any other solution to get past this error.
I've read some of the other posts on this topic and tried:
starting data import on line 2
changing delimiters to \0
adding to the field length
treating errors as warnings to at least read it and troubleshoot
Still no luck getting to my data. Does anyone have any experience with this? The work-around is so easy, yet I can't convert that manual intervention into an Alteryx process that does the same thing and resolves the problem.
The standard way to deal with this is to read the file as CSV with the line-feed delimiter \n and then parse it in the workflow itself. If you can provide a sample, I'm sure someone will come up with an answer fairly quickly.
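To show what that technique does outside of Alteryx, here's a minimal Python sketch of the same idea: read the file as plain lines first (so no column schema is enforced), drop the blank row, and only then parse the remainder as CSV. The sample data here is hypothetical, made up to mimic a file with a blank first row:

```python
import csv

# Hypothetical sample mimicking the problem file: a blank first line,
# then the real header on line 2, then data rows.
raw = "\n".join([
    "",                   # blank first line that trips a strict CSV parser
    "Location,Count",     # real header on line 2
    "Warehouse A,10",
    "Warehouse B,25",
])

# Step 1: read as plain lines (the \n-delimiter trick) -- every line is
# just a single text field at this point, so nothing can fail.
lines = raw.splitlines()

# Step 2: drop empty lines, then parse what's left as CSV.
rows = list(csv.reader(ln for ln in lines if ln.strip()))
header, data = rows[0], rows[1:]
print(header)  # ['Location', 'Count']
print(data)    # [['Warehouse A', '10'], ['Warehouse B', '25']]
```

In Alteryx the two steps map to the Input tool (delimiter \n) followed by downstream parsing tools; the Python above is only an illustration of the logic.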
OK, here are the files for someone to work their magic on. I've tried many of the suggestions here and nothing has worked.
"CSV file top row blank" is the untouched version that I'm having the problems with ("too many fields in record #1")
"CSV file top row deleted" works fine with the workflow I've been using, which uses a Cleanse tool, a Sample tool to eliminate the first few rows, and then a Dynamic Rename to take the field names from the first row of data.
I should also note, during testing:
1. I opened and closed the top-row-blank file; no change, same error message
2. I renamed the top-row-blank file; no change, same error message
3. I went into the top-row-blank file and changed some of the data (to make it more anonymous) and I DIDN'T get the error message; it worked fine. ???
Screenshot below of the configuration parameters that are not working.
Importing as CSV with standard delimiters (comma, etc.) only works correctly if the file is an actual valid CSV file: a single set of consistent columns, with all the rows conforming to the schema of the first one. Your input file is actually a text file with a .csv extension. It has at least two different column schemas, three if you include the blank row.
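You can see why a strict CSV reader rejects such a file by counting fields per line. The sample below is hypothetical, made up to mimic a "text file with a csv extension" that mixes a blank row, a one-column banner row, and a multi-column data section:

```python
import csv
import io

# Hypothetical file with inconsistent rows, like the one described above.
raw = "\n".join([
    "",                        # blank row (zero fields)
    "Report Title",            # one-column banner row
    "Location,Count,Status",   # three-column data header
    "Warehouse A,10,OK",
])

# Count fields per line. A strict CSV import expects every row to match
# the width of the first one, which is why "Too Many Fields" is raised
# as soon as a wider row appears.
widths = [len(row) for row in csv.reader(io.StringIO(raw))]
print(widths)  # [0, 1, 3, 3]
```

Three distinct widths means three schemas in one file, which no single CSV import configuration can satisfy.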
Here's a workflow that implements the technique I described in my previous post.
The Input tool is configured to use newline (\n) as the delimiter, with "Field Names in First Row" unchecked and a long field length.
Once the file is in Alteryx, the Multi-Row Formula tool marks the various report sections, which correspond to the different schemas. Each section is then treated separately, with the data section looking like this:
Note that I removed the "No Records Processed" message from the input file and added some dummy data to illustrate the process. The workflow will work the same for your original file, with "No Records Processed" in the Location column and all the other columns blank.
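For anyone reading along without Alteryx, the Multi-Row-style section marking can be sketched in plain Python: assign a running section ID that increments whenever a line's field count differs from the previous line's. This is only an illustrative equivalent of the Multi-Row Formula step, using hypothetical sample lines:

```python
# Hypothetical lines mimicking the file: a blank row, then the data section.
lines = [
    "",                   # blank row (its own section)
    "Location,Count",     # header of the data section
    "Warehouse A,10",
    "Warehouse B,25",
]

# Increment the section ID each time the field count changes from the
# previous line -- the same idea as a Multi-Row Formula comparing a row
# to the row above it.
sections = []
section_id = 0
prev_width = None
for ln in lines:
    width = len(ln.split(",")) if ln.strip() else 0
    if width != prev_width:
        section_id += 1
    sections.append((section_id, ln))
    prev_width = width
print(sections)
```

Once every row carries a section ID, each section can be filtered out and parsed against its own schema, exactly as the workflow does with its separate downstream branches.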