Hi,
I have a lot of "true" (approximate-match) VLOOKUP use cases in Excel that I need to recreate in Alteryx, and so far I haven't found a good solution that doesn't involve writing a long nested IF statement. Here's a sample case for calculating a bonus:
Bonus % Table:

| Sales | Incentive % |
| --- | --- |
| $ - | 0% |
| $ 20,001 | 5% |
| $ 40,001 | 8% |
| $ 50,001 | 10% |
And I need to look up the bonus rate for each employee's sales number:
| Emp | Sales | Bonus Rate |
| --- | --- | --- |
| A | $ 35,961 | 5% |
| B | $ 14,473 | 0% |
| C | $ 42,656 | 8% |
| D | $ 79,200 | 10% |
In Excel this is very easy with either VLOOKUP or XLOOKUP -- XLOOKUP even lets you pick a match mode of -1 or 1, so it's even more versatile for a lot of use cases (I do have one use case that needs to match to the next "larger" number/range).
Since the lookup value is numeric rather than a string, I haven't been able to find a good way to make it work with the tools I've tried (Find Replace, Join, Union...). My real application is also much more complicated, and both the lookup values and return values change quite often, so a long IF statement like "If sales < 20000 then 0%, elseif sales < 40000 then..." really isn't a good solution for me.
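(To make the "approximate match" behavior I'm after concrete, here is a minimal sketch in Python, outside both Excel and Alteryx, with made-up names: return the rate attached to the largest tier threshold that is less than or equal to the sales value.)

```python
from bisect import bisect_right

# Bonus tiers from the table above: thresholds sorted ascending, rates aligned by index.
thresholds = [0, 20001, 40001, 50001]
rates = [0.00, 0.05, 0.08, 0.10]

def bonus_rate(sales):
    """Approximate match: rate of the largest threshold <= sales (assumes sales >= 0)."""
    i = bisect_right(thresholds, sales) - 1
    return rates[i]

print(bonus_rate(35961))  # 0.05
print(bonus_rate(14473))  # 0.0
print(bonus_rate(42656))  # 0.08
print(bonus_rate(79200))  # 0.1
```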
Can anyone give me some pointers?
Thanks a million!
Hi @goutdelete
Instead of using formula logic, you can first append the bonus table to the list of employee sales. Then filter down to the records where the bonus sales tier is less than or equal to the employee's sales, and finally use a Sample tool to grab the last record for each employee.
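(For readers who want to see the logic outside Alteryx, here is a rough Python/pandas equivalent of those three steps -- the column names are assumptions, and the attached workflow below is the actual solution. The filter uses <= so that a sales value exactly equal to a tier threshold still picks up that tier, mirroring VLOOKUP's approximate match.)

```python
import pandas as pd

# Sample employee sales and bonus tiers (column names assumed for illustration).
sales = pd.DataFrame({"Emp": ["A", "B", "C", "D"],
                      "Sales": [35961, 14473, 42656, 79200]})
tiers = pd.DataFrame({"Threshold": [0, 20001, 40001, 50001],
                      "BonusRate": [0.00, 0.05, 0.08, 0.10]})

# 1. Append: pair every tier with every employee row (cross join).
appended = sales.merge(tiers, how="cross")

# 2. Filter: keep only tiers at or below the employee's sales.
qualifying = appended[appended["Threshold"] <= appended["Sales"]]

# 3. Sample: take the last (highest) qualifying tier per employee.
result = (qualifying.sort_values(["Emp", "Threshold"])
                    .groupby("Emp", as_index=False)
                    .last())
print(result[["Emp", "Sales", "BonusRate"]])
```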
Attached is a workflow I mocked up based on your screenshots. Let me know if this works for you.
Cheers!
Phil
Hi @goutdelete ,
I have tried my hand at it; see if this is what you are looking for.
cheers!
BR/Jonny
Thank you very much, @Maskell_Rascal and @Jonny. I tried both solutions and they both work for me!
I only have one other question, about performance. Since we are appending here, the data set will basically grow by a factor of N (where N is the number of buckets) before the filter step -- for example, 500,000 rows against 4 tiers becomes 2,000,000 rows. Would this become slow when I'm working with a large data set? (I have another use case for a similar "approximate VLOOKUP" in another model that I need to build, and it will probably have somewhere between 250K and 500K rows each time I run it.)
Thanks!
Hey @goutdelete
There will be some slowdown when you scale up to a larger dataset. To test this, I generated 500,000 rows and assigned each a random sales value between 0 and 60,000. With that input, the workflow runs in about 5 seconds. I also turned on performance profiling to see where the bulk of the processing time was spent; the append tool accounts for only about 11% of the total.
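(If you want to sanity-check the scaling outside Alteryx, a similar test can be sketched in Python/pandas with the same append/filter/sample steps on 500,000 synthetic rows. The data below is randomly generated, not the real file, and the timing will not match the Alteryx run.)

```python
import time
import numpy as np
import pandas as pd

# Synthetic test set: 500,000 rows with random sales between 0 and 60,000.
rng = np.random.default_rng(0)
sales = pd.DataFrame({"Emp": np.arange(500_000),
                      "Sales": rng.integers(0, 60_001, size=500_000)})
tiers = pd.DataFrame({"Threshold": [0, 20001, 40001, 50001],
                      "BonusRate": [0.00, 0.05, 0.08, 0.10]})

start = time.perf_counter()
appended = sales.merge(tiers, how="cross")                        # 500,000 x 4 = 2,000,000 rows
qualifying = appended[appended["Threshold"] <= appended["Sales"]]
result = (qualifying.sort_values(["Emp", "Threshold"])
                    .groupby("Emp", as_index=False).last())
print(f"{len(result):,} employees scored in {time.perf_counter() - start:.2f}s")
```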
Cheers!
Phil