I am new to Alteryx and am trying to find a tool to perform the following action. What I am trying to achieve is given below:
I am reading a text file. In that file, the 1st row has just two items: the first indicates the variable identifier and the second indicates its value. The headers and data start appearing from the next row onwards.
[Vehicle Type]: [Camry]
VIN Ext Color Int Color Model Year
123 Blue Black 2018
In the above example, I am able to read data from line #2 onwards as I expect; however, I also want to read the 1st row as a field and include it as an additional column in my output. So the output I would like to get is as below:
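Outside Alteryx, the transformation being asked for could be sketched in plain Python like this (a minimal sketch: the tab delimiter and the exact bracket layout of the first line are assumptions based on the sample above):

```python
# Sketch: parse one file whose first line holds "[Vehicle Type]: [Camry]"
# metadata, then append that value as an extra column on every data row.
import csv
import io
import re

sample = """[Vehicle Type]: [Camry]
VIN\tExt Color\tInt Color\tModel Year
123\tBlue\tBlack\t2018
"""

lines = sample.splitlines()

# First line: pull the value out of the second pair of brackets.
match = re.search(r"\[([^\]]+)\]:\s*\[([^\]]+)\]", lines[0])
vehicle_type = match.group(2)  # "Camry"

# Remaining lines: header plus data rows (tab-delimited in this sketch).
reader = csv.reader(io.StringIO("\n".join(lines[1:])), delimiter="\t")
header = next(reader) + ["Vehicle Type"]
rows = [row + [vehicle_type] for row in reader]

print(header)  # ['VIN', 'Ext Color', 'Int Color', 'Model Year', 'Vehicle Type']
for row in rows:
    print(row)  # ['123', 'Blue', 'Black', '2018', 'Camry']
```

In Alteryx terms this is the same split the replies below describe: handle row 1 separately from the rest, then append its value to every record.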
Welcome to Alteryx! I hope this example showcases capabilities that will solve your question. In the future, I recommend you provide a sample input file whenever possible so that we can make sure that any suggestion provided really works for your situation.
That being said, I made an example input file and used a few tools that will be useful in this scenario. I started with two Sample tools to isolate the "first row" and "not the first row" to handle those data layouts differently. Take a look at my solution and let me know if you have any questions.
Thank you for your response. I checked the workflow you shared in your post, and it was helpful. I learned about the usage of the Select tool; however, it may not be the best fit for my solution. This is because I did not give the complete requirements.
The solution you proposed worked for me for a single file. What I am actually trying to do is digest multiple files from a single folder, where there would be more than one file (one file per vehicle model, like Camry, Corolla, X1, X2, Mustang, CRV, etc.), and generate a report.
I am using the Input Data tool to get all the filenames from the folder, followed by a Dynamic Input tool to read data from the files one by one. While configuring the Dynamic Input tool, I set "Start Data Import on Line" = 2 because the 1st line is reserved for the model type. But I guess that, due to this setting, my Dynamic Input tool is not able to read the 1st line.
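The either/or limitation described here can be sketched outside Alteryx: read each file in full, keep line 1 as per-file metadata instead of skipping it, and stack the remaining rows with that metadata as an extra column (the folder layout, file names, and tab delimiter below are assumptions for illustration):

```python
# Sketch: per-file read that keeps the first line as metadata instead of
# skipping it, so nothing is lost when many files are stacked together.
import pathlib
import re

def read_vehicle_file(path: pathlib.Path):
    lines = path.read_text().splitlines()
    # Line 1 holds the model, e.g. "[Vehicle Type]: [Camry]".
    model = re.search(r":\s*\[([^\]]+)\]", lines[0]).group(1)
    # Line 2 is the header; everything after it is data.
    header = lines[1].split("\t") + ["Vehicle Type"]
    rows = [line.split("\t") + [model] for line in lines[2:] if line]
    return header, rows

# Usage (assumed folder name):
# for path in pathlib.Path("vehicle_files").glob("*.txt"):
#     header, rows = read_vehicle_file(path)
```

This mirrors what the workflow needs: one pass per file that produces both the metadata value and the data rows, rather than one or the other.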
I'm not positive I understand fully (@CharlieS is right, some sample data and an example of how the result should look helps) but here's a go:
This assumes your data all gets stacked from the different files in the same format you provided in your first post. The key is the colon in your vehicle type rows, so if that varies, this will likely break. Take a look and see if it gives you your expected results.
Thank you for your response. Yes, your understanding is correct. I am getting input via text files, and they are stacked in a folder. I would like to read these files and aggregate the data into one output table in Alteryx. Also, in the output table, I expect an additional column extracted from the 1st row of each file.
I am using the Dynamic Input tool; however, it does not let me read everything with one tool. I get either/or results: either I get the 1st row from each file and miss the remaining data, or I get everything but the 1st row from each file.
Is there any tool that allows me to read the 1st row from each file as one result and the remaining records as a second result set? I tried using the Sample and Select Records tools; however, they each produce only one output result set.
See the sample input file for reference. I would have about 10 similar files in a directory, and my workflow would read them all and produce output in a single table, aggregating records from all files with an additional column showing the value from the 1st row of each file.
A good idea might be to not do anything with the data while it comes in. Read it as un-delimited with no field names, and then parse it once it's in your workflow. See attached for an example, using some copies of the text file you attached.
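The parse-after-reading idea can be sketched as follows: all files are stacked into one column of un-delimited lines, a metadata line sets the "current" model, and that value is carried down onto the data rows that follow it, much like a Multi-Row Formula with a fill-down. The sample lines and regex below are illustrative assumptions:

```python
# Sketch: stacked un-delimited lines from several files; fill the model
# value down from each metadata line onto its following data rows.
import re

stacked = [
    "[Vehicle Type]: [Camry]",
    "VIN\tExt Color\tInt Color\tModel Year",
    "123\tBlue\tBlack\t2018",
    "[Vehicle Type]: [Corolla]",
    "VIN\tExt Color\tInt Color\tModel Year",
    "456\tRed\tGray\t2019",
]

current_model = None
output = []
for line in stacked:
    meta = re.match(r"\[Vehicle Type\]:\s*\[([^\]]+)\]", line)
    if meta:
        current_model = meta.group(1)  # remember until the next metadata line
    elif line.startswith("VIN"):
        continue                       # drop the repeated header rows
    else:
        output.append(line.split("\t") + [current_model])

print(output)
# [['123', 'Blue', 'Black', '2018', 'Camry'],
#  ['456', 'Red', 'Gray', '2019', 'Corolla']]
```

The advantage is that the files are read only once; all of the row-type logic happens downstream on the stacked data.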
I implemented both approaches, and both of them worked; however, I am not sure which approach is better in terms of performance and efficiency.
I guess reading the input file just once is better from a performance and efficiency point of view; however, the extra processing further down the workflow with multiple RegEx tools may hurt performance later. With the other approach, reading the input file twice hurts performance and efficiency up front; however, the downstream flow becomes simpler and more straightforward. One may select the desired approach based on the specific requirements.
Thank you, everyone, for responding to my post and helping me progress. I will go ahead and accept the solution.