on 10-13-2015 01:57 PM - edited on 07-27-2021 11:46 PM by APIUserOpsDM
When importing a delimited file, have you ever run across an error saying ‘Too many fields in record…’ for something like record number 1478254, and thought to yourself, “How the heck am I going to find that record and correct it?”
Well, let us help you with that.
This message is essentially telling us that some particular row of data, when parsed, has more fields than the first row of data (usually caused by an extra delimiter in that row).
The first step in identifying the row is to change the delimiter in the Input Data configuration window from the current delimiter to \0 (that is a backslash zero, which stands for no delimiter). This will still read in your file, but instead of parsing it, each line comes in as-is, as a single field.
Now just add a RecordID tool and a Filter tool so we can filter on RecordID 2 (or 1478254) to find the row that is holding us up.
Now that you’ve identified the row that is causing issues, you could go back to your original data, correct it and try importing it again.
If you were looking closely at the example data, you may have noticed that record 4 also had a different number of fields than row 1. After correcting row 2 and importing again, we would get an error for row 4. Just follow the same procedure to correct additional errors. This could probably be done through an automated/iterative process, but that will have to wait for another time.
(attached is an example workflow created in Alteryx 10.0)