Hello,
I need assistance on how to best structure a workflow that needs to capture value stored in a matrix.
Essentially, based on two different performance metrics, I need a "score" returned. An example of the matrix is below.
| Metric 2 \ Metric 1 | 10-15% | 15-20% | 20-25% | 25-30% |
|---|---|---|---|---|
| 5-10% | 1 | 1.5 | 2 | 3 |
| 10-15% | 1.5 | 2 | 3 | 4 |
| 15-20% | 2 | 3 | 4 | 5 |
| 20-25% | 3 | 4 | 5 | 6 |
So someone who scored 10.3% on Metric 1 and 15.7% on Metric 2 would get a score of 2.
I know I could write a complicated if/then statement, but I have 6 unique matrices depending on which department a person works in, so it would be pretty exhausting to write out.
Also, I'm not sure if I could use a Generate Rows tool since the data is continuous, but maybe that's an option?
I can't share the actual data, but each record is a single person and the two scores are two columns.
Input:

| Name | Metric 1 | Metric 2 |
|---|---|---|
| Jane Doe | 10.7% | 15.3% |

Desired output:

| Name | Metric 1 | Metric 2 | Score |
|---|---|---|---|
| Jane Doe | 10.7% | 15.3% | 2 |
Thank you!
Hi @atperry30
Assuming you don't have hundreds of thousands of records, this workflow should work well for you:
I create the combinations of the matrix with upper and lower bands at the top of the workflow.
Append all the people to the matrix, then filter for the correct result.
As mentioned, though, the Append Fields tool won't be the best choice if you do have large data volumes. That said, Alteryx will churn through the data to make it work if you have the PC resources.
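If it helps to see the logic outside Alteryx, here's a rough pandas sketch of the same append-then-filter idea. I'm treating each band's lower bound as inclusive and the upper as exclusive (an assumption on my part), and all the frame and column names here are made up:

```python
import pandas as pd

# The example matrix, flattened to one row per cell with its band bounds.
m1_bands = [(10, 15), (15, 20), (20, 25), (25, 30)]  # Metric 1 bands (columns), in %
m2_bands = [(5, 10), (10, 15), (15, 20), (20, 25)]   # Metric 2 bands (rows), in %
scores = [[1, 1.5, 2, 3], [1.5, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]

matrix = pd.DataFrame(
    [(m1_lo, m1_hi, m2_lo, m2_hi, scores[r][c])
     for r, (m2_lo, m2_hi) in enumerate(m2_bands)
     for c, (m1_lo, m1_hi) in enumerate(m1_bands)],
    columns=["m1_lo", "m1_hi", "m2_lo", "m2_hi", "Score"],
)

people = pd.DataFrame({"Name": ["Jane Doe"], "Metric 1": [10.7], "Metric 2": [15.3]})

# The Append Fields step: every person gets every matrix cell (people x 16 rows),
# then the Filter keeps the one cell whose bands contain both metrics.
joined = people.merge(matrix, how="cross")
result = joined[
    (joined["Metric 1"] >= joined["m1_lo"]) & (joined["Metric 1"] < joined["m1_hi"]) &
    (joined["Metric 2"] >= joined["m2_lo"]) & (joined["Metric 2"] < joined["m2_hi"])
]
print(result[["Name", "Metric 1", "Metric 2", "Score"]])
```

For your six department matrices, you could stack all six into the one flattened file with a Department column and include that in the join/filter, rather than writing six sets of logic.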
Let me know if you need a more efficient workflow and I'll see if I can come up with one.
Joe,
This is fantastic, thank you!
Unfortunately yes: about 1.1 million records go through the whole workflow (records are unique per person, per month). However, we only need to apply the score to the previous 3 months' worth of data, so that's approximately 150,000 records. So perhaps we could branch that data off and only apply the methodology to those records.
But if you do have a more efficient way, that could be helpful! It's already a very long workflow and we're always trying to shave minutes off the run time.
Best,
Alex
Hi Alex,
Same logic, but applied in a slightly different order, which may mean it runs faster: we transpose on the second matrix value only after already finding which band the first one falls into (if that makes sense).
The small sample data runs in 0.3 seconds, compared to 0.4 for the version above.
The intermediate row counts also shouldn't get as big, since each append is only ever the number of people per month times the number of bands for one metric, not people x Metric 1 bands x Metric 2 bands.
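In the same made-up pandas terms as the earlier sketch, the reordered version looks roughly like this:

```python
import pandas as pd

m1_bands = [(10, 15), (15, 20), (20, 25), (25, 30)]  # Metric 1 bands, in %
m2_bands = [(5, 10), (10, 15), (15, 20), (20, 25)]   # Metric 2 bands, in %
scores = [[1, 1.5, 2, 3], [1.5, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]

people = pd.DataFrame({"Name": ["Jane Doe"], "Metric 1": [10.7], "Metric 2": [15.3]})

# Stage 1: append only the Metric 1 bands (people x 4 rows) and keep each match.
m1 = pd.DataFrame(m1_bands, columns=["m1_lo", "m1_hi"]).assign(m1_col=range(4))
s1 = people.merge(m1, how="cross")
s1 = s1[(s1["Metric 1"] >= s1["m1_lo"]) & (s1["Metric 1"] < s1["m1_hi"])]

# Stage 2: append the Metric 2 bands to the already-reduced set (people x 4 again).
m2 = pd.DataFrame(m2_bands, columns=["m2_lo", "m2_hi"]).assign(m2_row=range(4))
s2 = s1.merge(m2, how="cross")
s2 = s2[(s2["Metric 2"] >= s2["m2_lo"]) & (s2["Metric 2"] < s2["m2_hi"])].copy()

# Look the score up by (row, column); the peak row count stays at people x 4,
# never people x 16 as in the single big append.
s2["Score"] = [scores[r][c] for r, c in zip(s2["m2_row"], s2["m1_col"])]
print(s2[["Name", "Metric 1", "Metric 2", "Score"]])
```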
The other way would be to round the metric values and then prepare the matrix file a lot more, so that every discrete rounded pair gets its own row. See the sketch below.
That's a fair bit more work on the matrix file, but I assume that's much less volume. And then you only have a join between 150k records and however many rows the matrix gets to.
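As a sketch of what I mean, assuming you round both metrics to the nearest 0.1 of a percent (my guess at a sensible precision):

```python
import pandas as pd

m1_bands = [(10, 15), (15, 20), (20, 25), (25, 30)]  # Metric 1 bands, in %
m2_bands = [(5, 10), (10, 15), (15, 20), (20, 25)]   # Metric 2 bands, in %
scores = [[1, 1.5, 2, 3], [1.5, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]

# Expand every matrix cell to one row per discrete key pair. Keys are integer
# tenths of a percent, so the later join is an exact match (no float equality).
rows = [
    (k1, k2, scores[r][c])
    for r, (m2_lo, m2_hi) in enumerate(m2_bands)
    for c, (m1_lo, m1_hi) in enumerate(m1_bands)
    for k1 in range(m1_lo * 10, m1_hi * 10)
    for k2 in range(m2_lo * 10, m2_hi * 10)
]
lookup = pd.DataFrame(rows, columns=["m1_key", "m2_key", "Score"])  # 40,000 rows

# Round each person's metrics onto the same keys, then it's one plain inner join.
people = pd.DataFrame({"Name": ["Jane Doe"], "Metric 1": [10.7], "Metric 2": [15.3]})
people["m1_key"] = (people["Metric 1"] * 10).round().astype(int)
people["m2_key"] = (people["Metric 2"] * 10).round().astype(int)

result = people.merge(lookup, on=["m1_key", "m2_key"])
print(result[["Name", "Metric 1", "Metric 2", "Score"]])
```

At that precision the expanded matrix is 40k rows per department, so around 240k across all six, which should still join quickly against 150k people.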
I may have to bow out with the above and see if anyone else can think of a more efficient flow.
Joe,
This also works well. We were able to narrow the data that needs to be processed enough that I think your first solution will be perfect, but if not, I'll try the second. I'm so grateful for such a quick reply!
Great stuff, always good to have multiple choices.
I think option 3 may be the most efficient on a large data set.