Too many fields in record #1
I am trying to read a pipe-delimited file and am receiving the error above.
The header row is 1,000 characters long and contains 60 fields.
The file has 24,173 rows in total.
The error message doesn't make much sense. Is there an upper limit on the number of fields in a delimited data file?
FWIW, Excel has no trouble reading and correctly parsing the data.
It seems that if I append a pipe (delimiter) character to the end of the header row, the file is parsed correctly; however, an empty field is appended to the result set.
This observation doesn't help much, though. I still need to be able to read the files as they stand.
Hi @nhunter!
If an explanation helps, I can explain the behavior, and I'll put in my two cents on Excel as well.
If your data looks like:
Name|Address|Phone
Mark|Michigan|
Mark|Michigan
Mark|Michigan|555-1212|888-123-4567
Excel might parse this out with a blank field name and show NULL values appropriately. Excel would also parse out this row incorrectly:
Mark||Michigan|555-1212
It would move Michigan into the phone field because of the extra |.
When data is expected to conform to a consistent format, Alteryx prevents you from loading the data when the wrong count of fields is encountered.
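The check Mark describes can be sketched in Python. This is illustrative only, not Alteryx internals: a strict parser that refuses any row with more fields than the header declares, while padding short rows with nulls (the "not enough fields" warning case).

```python
# Illustrative sketch only -- not what Alteryx actually runs.
def parse_strict(lines, delimiter="|"):
    header = lines[0].split(delimiter)
    rows = []
    for record_no, line in enumerate(lines[1:], start=1):
        fields = line.split(delimiter)
        if len(fields) > len(header):
            # Mirrors the hard error from the thread title.
            raise ValueError(f"Too many fields in record #{record_no}")
        # Short rows are padded with nulls, mirroring the softer
        # "not enough fields in record" warning.
        fields += [None] * (len(header) - len(fields))
        rows.append(fields)
    return header, rows

sample = [
    "Name|Address|Phone",
    "Mark|Michigan|555-1212",
    "Mark||Michigan|555-1212",  # the stray pipe makes 4 fields, not 3
]
```

Running `parse_strict(sample)` fails on the third line, exactly because the extra pipe pushes the field count past the header's.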
If you post a file with the first 2 or 3 records only, I'd be happy (as would many others helping here) to take a look and see what we'd recommend specifically for you. My first priority is to help you with the issue; once that's done, an idea for future enhancements can be filed to make delimited-file reading more forgiving and allow the data to be read with caution.
Cheers,
Mark
Chaos reigns within. Repent, reflect and restart. Order shall return.
Please subscribe to my YouTube channel.
Thanks, Mark.
Here's a small sample.
I had to rename the file to attach it here. The original file name is internalPSHoldings.20161230.
Hi @nhunter,
Generally, this error occurs because the field length on input is too short, causing truncation of the data. If you increase the Field Length in the Input Data tool and read the file with no delimiters (delimiter = \0), you should be able to read the entire file in. Then you can use a Text to Columns tool to parse on the pipe.
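A rough Python analogue of this two-step workaround (the steps mirror the Alteryx tools named above, but the code itself is just a sketch):

```python
import io

def load_then_split(stream, delimiter="|"):
    # Step 1: read each record as one wide field (Alteryx: delimiter = \0,
    # with Field Length raised above the longest line to avoid truncation).
    whole_lines = [line.rstrip("\n") for line in stream]
    # Step 2: split on the pipe afterward (Alteryx: Text to Columns tool),
    # so a surprise field count can no longer abort the load.
    return [line.split(delimiter) for line in whole_lines]

sample = io.StringIO("Name|Address|Phone\nMark|Michigan\n")
records = load_then_split(sample)
```

Because splitting happens after the load, a short or long row simply yields fewer or more columns instead of an error.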
Senior Solutions Architect
Alteryx, Inc.

For additional color, here's a great knowledge base article: Error reading “FILEPATH\MYFILE.txt”: Too many fields in record #

Hi, Sophia,
I appreciate your response, but the KB article is not that helpful. Since I have more header fields than data fields, according to the article I should just get a warning like this:
'Warning: Input Data (1): Record #6: Not enough fields in record'
and the missing fields in that record should simply be null.
I would be fine with that, but it's not what's happening: I get an error message and no data is loaded at all.
I tried loading the file without delimiters and then using the Text to Columns tool but then all the fields are named Field1, Field2, ... FieldN.
How would I recover the field names from the original header row?
For your last question: use a Dynamic Rename tool with the "Take Field Names from First Row" option.
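In Python terms, promoting the first row to field names looks roughly like this (a sketch of the idea, not the tool itself):

```python
def promote_header(rows):
    # Mimics Dynamic Rename's "Take Field Names from First Row": the first
    # record becomes the column names; short rows are padded with None,
    # and zip() silently drops any surplus trailing values.
    header, body = rows[0], rows[1:]
    return [dict(zip(header, row + [None] * (len(header) - len(row))))
            for row in body]

rows = [["Name", "Address", "Phone"], ["Mark", "Michigan"]]
named = promote_header(rows)
```

This replaces the placeholder Field1, Field2, ... FieldN names that Text to Columns assigns with the names from the original header row.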
Hi @nhunter,
Helpful as my friends @RodL & @SophiaF were, I have a full answer for you.
YUCK!
In order to bring in your data cleanly, I did the following:
- Counted the pipes to determine 60 fields of data.
- Imported the file as a single field
- Set Length of field to 1024 bytes
- Used \0 as a delimiter
- Trimmed the spaces off of the data (I think that this is the root issue)
- Parsed the data on the pipe (|) delimiter
- Removed extra fields
And wrote this essay. Excel is simply more forgiving of stray spaces.
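The steps above can be sketched in Python, under the thread's assumptions: the header defines the field count (60 in your file), stray trailing spaces are the root cause, short rows get nulls, and surplus columns are safe to drop.

```python
def clean_load(lines, delimiter="|"):
    lines = [line.strip() for line in lines]     # trim the stray spaces
    n_fields = lines[0].count(delimiter) + 1     # count pipes -> field count
    parsed = [line.split(delimiter) for line in lines]
    # Pad short rows with None and drop any extra trailing fields.
    return [(row + [None] * n_fields)[:n_fields] for row in parsed]

sample = [
    "Name|Address|Phone ",           # trailing space on the header
    "Mark|Michigan",                 # short row
    "Mark|Michigan|555-1212|extra",  # long row
]
table = clean_load(sample)
```

Every record comes out exactly `n_fields` wide, which is the consistency Alteryx was insisting on in the first place.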
I am sending you the workflow in version 11 (which I recommend upgrading to if you haven't already), and in version 10 as well.
Cheers,
Mark
Thanks, Mark, much appreciated!
