Alteryx Designer Discussions

Find answers, ask questions, and share expertise about Alteryx Designer.

GPU usage in Alteryx algorithms?

Alteryx Partner


Do the Alteryx ML tools benefit from the computer's GPU in terms of parallel algorithm execution?


I have seen some R examples, and the Revolution Analytics blog mentions such a capability, but I couldn't be sure...


GPU analytics through R:
Revolution Analytics:

Why this capability is expected is detailed here:


Best Regards




@Atabarezz, there are no plans to provide this functionality in Alteryx. CUDA applications tend to be very specialized. We do have plans for parallel scale-out, but in the context of the In-DB tools. The new Oracle In-DB tools will do parallel scale-out within an Oracle cluster if there are over 200K records. Spark will also be a parallel scale-out possibility in the near future.

Alteryx Partner

Although the naming is a little bit awkward :) there is a development in the Spark environment called

HeteroSpark: A Heterogeneous CPU/GPU Spark Platform 


So I believe setting up a Spark cluster with Alteryx partner Hortonworks' tools and enabling the GPU will do just fine...

I'm looking forward to developments in this area...


I sincerely believe this will enable:

  • Fast, large-scale random forest calculations, which will increase accuracy in many use cases, and
  • Neural network/deep learning computations for any level of analytics user...
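As a rough illustration of why random forests are such a natural fit for parallel hardware: each tree trains independently on its own bootstrap sample, so trees can be farmed out to however many workers are available (CPU cores, GPU blocks, or Spark executors). A minimal stdlib-only Python sketch, with a trivial majority-vote "stump" standing in for a real decision tree (all names here are hypothetical illustrations, not any Alteryx or Spark API):

```python
# Sketch only: random forests parallelize because each tree is trained
# independently on a bootstrap sample; scale-out just adds more workers.
import random
from concurrent.futures import ThreadPoolExecutor

def train_stump(data, seed):
    """Stand-in for one tree: majority label of a bootstrap sample."""
    rng = random.Random(seed)
    sample = [rng.choice(data) for _ in data]      # bootstrap resample
    ones = sum(label for _, label in sample)
    return 1 if ones * 2 >= len(sample) else 0     # majority vote

def forest_predict(data, n_trees=50):
    # Every "tree" is independent, so all of them can run in parallel.
    with ThreadPoolExecutor() as pool:
        votes = list(pool.map(lambda s: train_stump(data, s), range(n_trees)))
    return 1 if sum(votes) * 2 >= len(votes) else 0

# Toy dataset of (feature, label) pairs, mostly labeled 1.
data = [(x, 1 if x > 1 else 0) for x in range(10)]
print(forest_predict(data))
```

The point of the sketch is only the structure: nothing about one tree depends on another, which is exactly the property GPU and cluster scale-out exploit.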


Thanks for the answer, and I'm looking forward to the Spark developments then...


Alteryx Partner


There is a story of GPU usage in data mining below;

it has been only three years since then. Wouldn't it be awesome if Alteryx started doing this...


"One difficulty with deep learning is that deep neural networks with millions of neurons take a long time to train. This was helped by a significant breakthrough in 2012 by Alex Krizhevsky, a PhD student at the time at the University of Toronto. Krizhevsky famously used the parallel computational capabilities of graphics cards (GPU’s) on a computer in his dorm room to drastically reduce training time for his convolutional neural network models. This meant he was able to train much larger models (with more layers and parameters and therefore higher representation capacity) than other researchers at the time, because he was able to obtain results in days rather than weeks.


That paper had a 10.85 p.p. lower absolute error rate than the next best result on LSVRC 2012 (winning with an error rate of 15.3% compared to the second best’s 26.2%). It was rightly hailed as a major breakthrough, and GPU training has become the standard method for training deep neural nets. I mention this story because it’s a quintessential example of a major breakthrough in a field: a new technique or idea that is broadly applicable and improves the entire field."

-Daniel Walter

ACE Emeritus
GPUs may eventually come into play via general-purpose GPU (GPGPU) computing. Research in this area is under way. If it bears fruit, perhaps Revolution Analytics could take advantage, which could in turn benefit the R tools in Alteryx. Just a thought.
Alteryx Partner

I would definitely love to see this in Alteryx... :)


Alteryx Partner
ACE Emeritus

I just stumbled across this... R is getting closer to GPGPU...




Thanks for bringing this up. Definitely closer, but if you look at the gpuR R package's documentation, you will see that binaries are not currently available for either Windows or OS X, and my guess is that for a while still this will be a Linux-only capability. Once there is a Windows port (assuming that a port is possible), then it is something we can look to bring directly into Alteryx. Some sort of In-DB approach may be possible sooner, but even that is likely still in the future.




A quick follow-up. They are trying to port it to Windows, but it is currently failing in the R package build process on Windows. I looked at the build log, and while there are issues, it is further along than I expected.