Too Many Fields in Record 2 with .csv as input data
I have a .csv file that is throwing the "Too Many Fields in Record 2" error and won't advance past the Input Data tool. This particular file comes to me with the first row completely empty; the data starts on row 2. My workaround is to open the file and delete the first row, which lets the workflow run fine and return the expected data.
I don't want to open or touch the file, just bring it into the workflow, but I can't find any other solution to get past this error.
I've read some of the other posts on this topic and tried:
- starting the data import on line 2
- changing the delimiter to \0
- increasing the field length
- treating errors as warnings, to at least read it and troubleshoot
Still no luck getting to my data. Does anyone have experience with this? The workaround is so easy, yet I can't convert that manual intervention into an Alteryx process that does the same thing and resolves the problem.
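For context, the manual fix could in principle be scripted before the workflow runs. A minimal Python sketch of that is below, with placeholder paths and assuming only the leading row is blank, but it still means touching the file, which is what I'm trying to avoid:

```python
# Sketch only: drop leading blank lines so the .csv starts with its header.
# Both paths are placeholders for wherever the real file lives.
src = r"C:\data\report_with_blank_first_row.csv"
dst = r"C:\data\report_cleaned.csv"

with open(src, "r", encoding="utf-8") as f:
    lines = f.readlines()

# Remove leading lines that are empty or contain only commas/whitespace.
while lines and not lines[0].strip().strip(","):
    lines.pop(0)

with open(dst, "w", encoding="utf-8", newline="") as f:
    f.writelines(lines)
```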
hi @Rob48
The standard way to deal with this is to read the file as CSV with the line-feed delimiter \n and then parse it in the workflow itself. If you can provide a sample, I'm sure someone will come up with an answer fairly quickly.
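For anyone who wants to prototype that parsing logic outside Alteryx first, the same idea looks roughly like this in Python (the file name is a placeholder, and comma-delimited data is assumed once the blank rows are dropped):

```python
import csv

path = "input_with_blank_first_row.csv"  # placeholder file name

rows = []
with open(path, "r", encoding="utf-8") as f:
    for line in f:                       # each physical line is one record ("\n as the delimiter")
        line = line.rstrip("\r\n")
        if not line.strip():             # skip blank rows rather than erroring on them
            continue
        rows.append(next(csv.reader([line])))  # split the single line on commas

header, data = rows[0], rows[1:]
print(header)
print(data[:5])
```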
Dan
OK, here are the files for someone to work their magic. I've tried many of the suggestions here and nothing has worked.
"CSV file top row blank" is the untouched version that I'm having the problems with ("too many fields in record #1").
"CSV file top row deleted" works fine with the workflow I've been using, which uses a Cleanse tool, a Sample tool to eliminate the first few rows, and then a Dynamic Rename to take the field names from the first row of data (a rough Python equivalent of those steps is sketched at the end of this post).
I should also note, during testing:
1. I opened and closed the top-row-blank file; no change, same error message.
2. I renamed the top-row-blank file; no change, same error message.
3. I went into the top-row-blank file and changed some of the data (to make it more anonymous) and I DIDN'T get the error message; it worked fine. ???
Screenshot below of the configuration parameters that are not working.
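As mentioned above, here is a rough Python equivalent of that Sample + Dynamic Rename combination, just to make the logic explicit; the file name and the number of leading rows to skip are placeholders:

```python
import csv

path = "CSV file top row deleted.csv"   # placeholder name for the working file
LEADING_ROWS_TO_SKIP = 3                # guess; mirrors the Sample tool's "skip first N rows" setting

# Cleanse-style step: read the rows and drop the ones that are completely empty.
with open(path, newline="", encoding="utf-8") as f:
    rows = [r for r in csv.reader(f) if any(cell.strip() for cell in r)]

rows = rows[LEADING_ROWS_TO_SKIP:]      # Sample tool equivalent: discard the leading rows

# Dynamic Rename equivalent: promote the first remaining row to field names.
header, data = rows[0], rows[1:]
records = [dict(zip(header, row)) for row in data]
print(header)
print(records[:3])
```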
Hi @Rob50
Importing as CSV with standard delimiters (comma, etc.) only works correctly if the file is an actual, valid CSV: a single set of consistent columns, with every row conforming to the schema of the first one. Your input file is actually a text file with a .csv extension. It has at least two different column schemas, three if you include the blank row.
Here's a workflow that implements the technique I described in my previous post.
The Input Data tool is configured to use newline (\n) as the delimiter, with the first-row-contains-field-names option unchecked and a long field length.
Once the file is in Alteryx, a Multi-Row Formula tool marks the various report sections, which correspond to the different schemas. Each section is then treated separately, with the data section looking like this:
Note that I removed the "No Records Processed" message from the input file and added some dummy data to illustrate the process. The workflow will work the same for your original file, with "No Records Processed" in the Location column and all the others blank.
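In plain code, the section-marking step amounts to tagging each line with the report section (schema) it belongs to and then handling each group on its own. A toy Python sketch of the idea is below; it starts a new section whenever the field count changes, which is only a stand-in for whatever condition the Multi-Row Formula actually tests:

```python
import csv

path = "report_as_text.csv"   # placeholder file name

# Read each physical line as one record, as the \n-delimited Input Data tool does.
with open(path, "r", encoding="utf-8") as f:
    lines = [line.rstrip("\r\n") for line in f if line.strip()]

parsed = [next(csv.reader([line])) for line in lines]   # split each line on commas

# Start a new section whenever the number of fields changes (stand-in rule).
sections, current = [], []
for row in parsed:
    if current and len(row) != len(current[-1]):
        sections.append(current)
        current = []
    current.append(row)
if current:
    sections.append(current)

for i, section in enumerate(sections):
    print(f"Section {i}: {len(section)} rows x {len(section[0])} columns")
```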
Dan
Dan -
I can't download your workflow with my version of Alteryx. Is it possible you could share it in an earlier version or display the tool configurations in a post?
My Alteryx version is 2018.3.7.57595.
Hello,
I have the same issue; the result is the error "Too many fields in record 2", although with a different file, attached. The delimiter is a semicolon.
I need that first record, and the 2nd one as the header, so all the data in the file should be given in the output and I can work it out from there.
I have tried all the suggestions and the workflow above, but the issue still persists.
Thank you very much.
