Hi,
The position of the data in the data source is not static; the data starts at a varying row number.
For example, it may start at row 5 or row 10, or sometimes at row 4.
Is there a way to make Alteryx always start reading the data from the correct row?
The good part here is that the column names are always the same.
Hi @sanketkatoch05 ,
One approach is to load all data starting from row 1 and then remove the leading header or empty rows. Of course, you need a criterion to identify the start of the data (e.g., the first row with a numeric value in a specific column). If you provide a sample, the help can be more specific.
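For illustration, here is a minimal sketch of that idea in Python/pandas (the file name "data.csv" and the column index are placeholder assumptions, not details from this thread):

```python
import pandas as pd

# Read everything with no header, so the preamble rows come in as data.
raw = pd.read_csv("data.csv", header=None, dtype=str)

# Criterion: the first row whose first column parses as a number marks
# the start of the real data. Adjust the column index for your file.
is_numeric = pd.to_numeric(raw[0], errors="coerce").notna()
if not is_numeric.any():
    raise ValueError("no row matched the start-of-data criterion")
start = is_numeric.idxmax()  # position of the first matching row

data = raw.iloc[start:].reset_index(drop=True)
```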
Best,
Roland
Hi, due to confidentiality I cannot share the exact data, so I have created a dummy version of it (attached document).
As you can see, the main data starts at row 8; the rows above it are just summary insights into the data (and these vary).
Depending on the data, the number of these insight rows increases or decreases, which is why the starting row of the main data is not static.
Hi @sanketkatoch05 ,
you can read all the data, add Record IDs, find the first row with data in column 1 (or, e.g., the first not-null row in column 3 or 4), and select all rows starting with that one.
You can use the last row before the first data row as the column header (as I do in the sample workflow); a sketch of the same logic follows.
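In pandas terms (a sketch only; "data.csv", the chosen column, and the Record ID handling are assumptions for illustration, not the attached workflow itself):

```python
import pandas as pd

# Read everything headerless and mimic the Record ID tool with a counter.
raw = pd.read_csv("data.csv", header=None, dtype=str)
raw["RecordID"] = range(1, len(raw) + 1)

# One possible start-of-data criterion: the first row where column 3
# (0-based index 2) parses as a number; the insight rows and the header
# line contain text, so they do not match.
mask = pd.to_numeric(raw[2], errors="coerce").notna()
if not mask.any():
    raise ValueError("no row matched the start-of-data criterion")
first_data = int(raw.loc[mask, "RecordID"].min())

# The last row before the first data row supplies the column names.
header = raw.loc[raw["RecordID"] == first_data - 1].iloc[0, :-1].tolist()

# Keep only the data rows, drop the RecordID column, apply the header.
data = raw[raw["RecordID"] >= first_data].iloc[:, :-1]
data.columns = header
data = data.reset_index(drop=True)
```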
Let me know if it works for you.
Best,
Roland
I came up with something similar to Roland's, but it uses a few fewer tools. Instead of finding the first row of data, just find your header row, since you said the headers will always be the same. Append that header row back to your main stream of data and filter out any rows with Record IDs less than that number. Finally, use the Dynamic Rename tool to make the first row of your data the header row (see the sketch below). Good luck!
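As a rough pandas equivalent of that workflow (the file name and the KNOWN_HEADERS values below are placeholders; substitute your real, unchanging column names):

```python
import pandas as pd

# Placeholder header names; the real ones never change, per the thread.
KNOWN_HEADERS = ["ColA", "ColB"]

raw = pd.read_csv("data.csv", header=None, dtype=str)

# Find the row whose leading cells match the known header names.
matches = (raw.iloc[:, : len(KNOWN_HEADERS)] == KNOWN_HEADERS).all(axis=1)
if not matches.any():
    raise ValueError("header row not found")
header_row = matches.idxmax()  # position of the first matching row

# Drop everything above the header row, then promote it to column names.
data = raw.iloc[header_row + 1 :].reset_index(drop=True)
data.columns = raw.iloc[header_row].tolist()
```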
Hi @trevorwightman, can you share the workflow so that I can test it with the actual data?
@sanketkatoch05 Absolutely, here is the workflow I used. You will just need to point the Input Data tool at your file. Let me know if it works!
Thank you so much, @RolandSchubert and @trevorwightman, for the prompt solutions.