@Atabarezz, there are no plans to provide this functionality in Alteryx. CUDA applications tend to be very specialized. We do have plans for parallel scale out, but in the context of In-DB tools. The new Oracle In-DB tools will do parallel scale out within an Oracle cluster if there are over 200K records. Spark will also be a parallel scale out possibility in the near future.
Below is a story about GPU usage in data mining.
It has only been three years since then; wouldn't it be awesome if Alteryx started doing this...
"One difficulty with deep learning is that deep neural networks with millions of neurons take a long time to train. This was helped by a significant breakthrough in 2012 by Alex Krizhevsky, a PhD student at the time at the University of Toronto. Krizhevsky famously used the parallel computational capabilities of graphics cards (GPU’s) on a computer in his dorm room to drastically reduce training time for his convolutional neural network models. This meant he was able to train much larger models (with more layers and parameters and therefore higher representation capacity) than other researchers at the time, because he was able to obtain results in days rather than weeks.
That paper had a 10.85 p.p. lower absolute error rate than the next best result on LSVRC 2012 (winning with an error rate of 15.3% compared to the second best’s 26.2%). It was rightly hailed as a major breakthrough, and GPU training has become the standard method for training deep neural nets. I mention this story because it’s a quintessential example of a major breakthrough in a field: a new technique or idea that is broadly applicable and improves the entire field."
GPUs may eventually come into play via general-purpose GPU (GPGPU) computing. Research in this area is under way. If it bears fruit, perhaps Revolution Analytics could take advantage, which could in turn benefit the R tools in Alteryx. Just a thought.
Thanks for bringing this up. Definitely closer, but if you look at the gpuR R package's documentation (https://cran.r-project.org/web/packages/gpuR/index.html), you will see that binaries are not currently available for either Windows or OS X, and my guess is that this will remain a Linux-only capability for a while yet. Once there is a Windows port (assuming that a port is possible), then it is something we can look to bring directly into Alteryx. Some sort of In-DB approach may be possible sooner, but even that is likely still in the future.
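For anyone curious what GPU computation from R actually looks like with that package, here is a minimal sketch (assuming a Linux box with a working OpenCL driver, since that is where gpuR currently builds). gpuMatrix and the overloaded %*% operator come from the package's documented interface; treat this as illustrative rather than anything Alteryx supports today:

    library(gpuR)

    n <- 1000
    A <- matrix(rnorm(n * n), nrow = n)
    B <- matrix(rnorm(n * n), nrow = n)

    # Copy the data into GPU-backed matrix objects
    # (single precision is the common case on consumer cards)
    gpuA <- gpuMatrix(A, type = "float")
    gpuB <- gpuMatrix(B, type = "float")

    # %*% is overloaded for gpuMatrix objects, so the
    # multiplication runs on the GPU via OpenCL
    gpuC <- gpuA %*% gpuB

    # Pull the result back into an ordinary R matrix
    C <- as.matrix(gpuC)

The nice part is that the heavy lifting hides behind ordinary R operators, which is exactly the kind of abstraction the Alteryx R tools could ride on if a Windows build ever lands.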
A quick follow-up: they are trying to port it to Windows, but it is currently failing in the R package build process. I looked at the build log, and while there are issues, it is further along than I expected.