Hello,
I am trying to split the attached file at the following fixed-width positions:
0,1,9,17,18,26,27,35,36,44,45,53,64,72,85,87,88,98,99,112,113,123,125,128,178,203,238,240,241,244,264,289,292,293,294,295,296,301,304,317,318,322,325,326,327,329,334,335,338,339,347,357,365,371,377,385,387,393,395,406,420,428,439,453,461,472,486,494,497,499,501,503,514,528,536,547,561,569,580,594,602,604,615,629,637,648,662,670,681,695,703,705,716,730,738,749,763,771,782,796,804,806,817,831,839,850,864,872,883,897,905
The output file should have 110 columns after splitting.
Can you please help me with a workflow for this?
Thanks in advance
Vinod
I want to split the attached file at the positions shared above (0,1,9,17,18,26,27,35,36,44,45,53,64,72,85,87,88,98,99,112,113,123,125,128,178,203,238,240,241,244,264,289,292,293,294,295,296,301,304,317,318,322,325,326,327,329,334,335,338,339,347,357,365,371,377,385,387,393,395,406,420,428,439,453,461,472,486,494,497,499,501,503,514,528,536,547,561,569,580,594,602,604,615,629,637,648,662,670,681,695,703,705,716,730,738,749,763,771,782,796,804,806,817,831,839,850,864,872,883,897,905)
The first split will be at position 1.
The second split will be at position 9.
The same applies for the rest of the positions.
Regards,
Vinod
When inputting the file as a text file, you can select the option to read it as a fixed-width file. From there it will let you define each field's length, or import the settings, which might be preferable for you given the volume of fields.
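The fixed-width import dialog expects per-field widths rather than cut positions, and each width is just the difference between consecutive positions. A minimal sketch of that conversion in Python (the short `positions` list here is hypothetical, truncated from the 111 positions in the post):

```python
# Derive per-field widths from a list of cut positions.
# Only the first few positions from the post are used here for illustration.
positions = [0, 1, 9, 17, 18, 26]

# Each field runs from one cut position to the next,
# so its width is the difference between adjacent positions.
widths = [end - start for start, end in zip(positions, positions[1:])]

print(widths)  # → [1, 8, 8, 1, 8]
```

With the full 111-position list this yields 110 widths, matching the 110 columns expected in the output.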
Is this close? I think there's an extra trailing or leading zero in the list, which accounts for the extra column. Basically, you take the splitter list as a text input and turn it into rows. Then you use a Multi-Row Formula to take each substring based on the starting point (from the split) and the length to the next position.
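The split-to-rows approach above can be sketched outside Alteryx as well: pair each start position with the next one (which is what the Multi-Row Formula does by looking at the previous row) and take the substring between them. The `splitter` string and the sample `line` below are hypothetical stand-ins for the real data:

```python
# Hypothetical shortened position string; the real post lists 111 positions.
splitter = "0,1,9,17,18,26"
positions = [int(p) for p in splitter.split(",")]

# Hypothetical 26-character fixed-width record.
line = "A12345678B1234567CD1234567"

# Pair each start with the next position to get (start, end) per column,
# mirroring a Multi-Row Formula that references the previous row's value.
fields = [line[start:end] for start, end in zip(positions, positions[1:])]

print(fields)  # → ['A', '12345678', 'B1234567', 'C', 'D1234567']
```

Each pair of adjacent positions produces one output column, so N positions give N-1 columns.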
Hi,
Just looking at your error, I can tell you that your data starts in the first row, but you don't have "first row contains data" checked on the data input. That means Alteryx is reading your first line's huge character string as a column header, so make sure "first row contains data" is checked.
I did have a minor issue with adjusting for column headers, so I fixed that in the attached version.