Sorry, I should have been clearer. All other things being equal, does the AMP engine improve workflow runtime on fuzzy matches? Also, which of these will best improve runtime: CPU clock speed, CPU core count, or RAM?
All other things being equal, fuzzy matching with AMP might perform faster. Fuzzy Match has been converted to AMP, so it has multi-threading capability and will likely outperform E1 as data size and hardware specs increase. AMP is fundamentally about scaling: the larger your data and the more capable your hardware, the more likely AMP is to pull ahead of E1.
All three hardware changes might improve runtime; it really depends on the process you're trying to execute. For example, adding more memory doesn't matter much if you typically work with smaller data sets. On the other hand, your workflows will generally run faster if you can keep all of the data in memory. As a rule of thumb: add more cores if possible, but extra cores won't matter much if the data doesn't fit in memory. Consider how you typically use Alteryx.
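To make the cores-vs-memory point concrete, here is a minimal Python sketch (not Alteryx internals; the names, data, and scoring function are illustrative assumptions using stdlib `difflib` rather than Fuzzy Match's actual algorithm). Fuzzy matching is CPU-bound pairwise comparison work, so it divides naturally into per-core chunks — which is exactly the kind of work more cores speed up, provided the records being compared are all held in memory:

```python
# Illustrative sketch only: fuzzy matching is CPU-bound pairwise scoring,
# so the candidate list can be split into chunks, one per core. An
# AMP-style engine would run each chunk on its own worker thread; here
# the chunks run sequentially to keep the example self-contained.
from difflib import SequenceMatcher

def score(a: str, b: str) -> float:
    """Similarity in [0, 1], loosely comparable to a fuzzy-match score."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

names = ["Jon Smith", "John Smith", "Jane Doe", "J. Smith"]
target = "John Smyth"

n_workers = 2  # hypothetical core count
chunks = [names[i::n_workers] for i in range(n_workers)]

matches = []
for chunk in chunks:  # each chunk is independent -> parallelizable
    matches.extend((name, score(name, target)) for name in chunk)

best = max(matches, key=lambda t: t[1])
print(best[0])  # closest candidate to the target string
```

The key property is that the chunks share no state, so the speedup from extra cores is close to linear — until the data no longer fits in memory, at which point spilling to disk dominates and clock speed or core count stops helping.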
I do a lot of fuzzy matching and have seen some good improvements. I run on 64 GB of RAM and a 10-core i9, so there's lots of room for threading.
One thing that did catch me out was mixed IDs in both columns after a merge, which required me to apply an order to the outputs in the Union tool prior to the Fuzzy Match tool.
After that the process definitely ran faster. I was encouraged to shorten my keys to increase the scope of my match.
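The effect of shortening keys can be sketched in a few lines of Python (a hypothetical illustration: the company names and the 18-character cutoff are made up, and stdlib `difflib` stands in for Fuzzy Match's real scoring). With a long key, small trailing differences drag the similarity score below the match threshold; truncating the key keeps near-duplicates above it, widening the match scope:

```python
# Hypothetical illustration of why shorter match keys widen fuzzy-match
# scope: trailing variation (suffixes, city abbreviations) penalizes the
# full-key score, while a truncated key compares only the stable prefix.
from difflib import SequenceMatcher

def ratio(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

rec_a = "ACME MANUFACTURING LTD BIRMINGHAM"
rec_b = "ACME MANUFACTURING LIMITED BHAM"

threshold = 0.9
full = ratio(rec_a, rec_b)            # full key: misses the match
short = ratio(rec_a[:18], rec_b[:18])  # shortened key: catches it

print(full >= threshold, short >= threshold)
```

The trade-off, of course, is that a shorter key also raises the risk of false positives, so the cutoff needs tuning against your own data.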
I'm interested to hear how others are getting on. Some of my pipelines are running in a tenth of the time now that they're AMP'd.
There are a lot of gotchas, though, if you run complex and lengthy apps and flows. Lots of little things that the regular engine forgives or just skips over will stop an AMP'd flow in its tracks. Worth the debug effort, though.