My users love having the ability to pick objects from a reference file in the Map tool in the Interface palette. However, they usually need to pick objects that are interspersed amongst others. Control + Left Click works great until they pick an incorrect object; at that point, their only option is to clear the selection and start over.
Please add something as simple as: Control + Left Click on a selected object deselects it.
I like the new cache option in 2018.3, but I would like it to function a little differently. Let's say you cache at a certain point and then continue to build after that. If I reach another checkpoint and want to cache, it currently re-runs the entire workflow (i.e., it ignores my upstream cache and goes back to the beginning of the workflow); instead, I would rather have it use the upstream cache. For me, caching is usually an iterative effort during development, where I keep caching along the way. The current behavior of the cache is not conducive to this. Thanks!
We now have the ability to output to an ESRI File Geodatabase, which is great, but it only allows output to the WGS84 coordinate system. I would like the ability to export to other projections or coordinate systems, as the ESRI Shapefile and ESRI Personal Geodatabase output tools already allow (we specifically need NAD83, but I'm sure others would welcome other options as well).
Currently, when one uses the Google BigQuery Output tool, the only options are to create a table or append data to an existing table. It would be more useful if there were an option to replace all data in the table rather than appending. Having the option to overwrite an existing table in Google BigQuery would be optimal.
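For reference, BigQuery's own load API already distinguishes these modes via a write disposition, so a minimal sketch of the requested behavior with the google-cloud-bigquery Python client looks like the following; the project, dataset, table, and file names are placeholders, and this is not meant as the Alteryx tool's internals:

```python
# Sketch of the requested "overwrite" behavior, expressed with the
# google-cloud-bigquery client. All names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    # WRITE_TRUNCATE replaces all rows in the target table;
    # WRITE_APPEND (the tool's current behavior) adds to them.
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
)

with open("extract.csv", "rb") as f:  # hypothetical local extract
    job = client.load_table_from_file(
        f, "my_project.my_dataset.my_table", job_config=job_config
    )
job.result()  # wait for the load to complete
```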
Hi, I have searched through the community and wasn't able to find a duplicate of this idea. If in fact there is one, I apologize; please point me to that post. I think it would be a good idea to have date options in the Summarize tool that allow grouping at higher levels of a date. I often have a date field that is specific to the day (e.g., 2018-01-01), and I just want to group by the year or month. Currently, in order to do this, I have to create a formula before the Summarize tool that formats the date according to how I want to group it, and then I am able to group by that field in the Summarize tool. It would be nice if, in the Summarize tool, I could select the date field and then have the option to group it at year, month, week, etc.
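For what it's worth, the workaround described above amounts to deriving a truncated date key and then grouping on it. A minimal sketch in pandas (e.g., as one might do in the Python tool) is below; the column names are hypothetical:

```python
# Sketch of the Formula-then-Summarize workaround: derive a truncated
# date key, then group on it. Column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "Date": pd.to_datetime(["2018-01-01", "2018-01-15", "2018-02-03"]),
    "Sales": [100, 250, 75],
})

# The step the Formula tool performs today: truncate the date to the month.
df["Month"] = df["Date"].dt.to_period("M")

# The step the Summarize tool performs: group by the derived field.
monthly = df.groupby("Month", as_index=False)["Sales"].sum()
print(monthly)
```

The idea would fold that first step into the Summarize tool itself, much like the Formula expression DateTimeFormat([Date], "%Y-%m") does today.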
Probably more of a bug. Not sure if this annoys anyone else, but when a running workflow in a different tab completes, the current window's focus is lost, even if you have the pop-up notification disabled. Check the video and see what happens at 0:11, when tab 105 finishes running while I'm typing some super complicated code.
Hopefully this is the right place to post this and it hasn't been suggested already, but I think it would be useful to add a numeric indicator to the Formula tool showing how many expressions a single tool contains. When going back into or sharing workflows, a user would then know that more than one function is being carried out at that point. Currently I change the annotation to show the count, but I think it would be better if the icon changed dynamically. Below is a mockup of what I think it should look like.
I'm not sure if this will ever be possible, but I know that it would greatly benefit me and, I'm sure, thousands of other users. In my workplace I am constantly working in a conference room and at my desk. At my desk I am wired into an Ethernet connection, while in the conference room I am wireless. When I start my workflows after working with my team in the conference room, I can't go back to my desk until the workflow has finished running, because changing internet connections makes me lose my connection to the databases. With a pause button, it would become possible to run a workflow, change my internet connection, and resume without losing the database connections.
Another use for this would be testing a workflow with a new tool. There are times I run a workflow that can take a few hours, but then realize there is a mistake somewhere in my workflow at a point the data hasn't reached yet. I think it would be very helpful to be able to pause the workflow and add the new tool in, while keeping the results from the tools the data has already passed through.
Then again, this is just an idea based on my own use; I wonder what the rest of the community thinks.
I like the new cache option in 2018.3, but I would like a user setting added that would allow me to 1) write the cache files to a local drive and 2) have them persist when I re-open Alteryx. Currently, the files are written to the user's default temp space and don't persist when Alteryx is closed. Thanks!
A common problem with the R tool is that it outputs "false errors" like the following: "The R.exe exit code (4294967295) indicated an error."
I call this a false error because data passes out of the R script the same as if there were no error. As such, this error can generally be ignored. In my use case, however, my R tool is embedded within an iterative macro, and the error causes the iterator to stop running.
I was able to create a workaround by moving the R tool to a separate workflow and calling it from the CReW runner macro within my iterator, effectively suppressing the error message, but this solution is a bit clumsy, requires unnecessary read/writes, and uses nonstandard macros.
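For anyone hitting the same thing, the general pattern behind that workaround is to run the step out-of-process and treat the known-spurious exit code as success, so the iterator keeps going. A rough sketch in Python, where the script name and the suppressed code are assumptions for illustration:

```python
# Sketch of the suppression pattern: run the step out-of-process and
# treat the known-spurious exit code as success. The script path and
# the specific code are illustrative assumptions.
import subprocess

SPURIOUS_EXIT_CODE = 4294967295  # the "false error" code quoted above

result = subprocess.run(["Rscript", "my_step.R"])  # hypothetical R step
if result.returncode not in (0, SPURIOUS_EXIT_CODE):
    raise RuntimeError(f"R step failed with exit code {result.returncode}")
# Otherwise continue the iteration as if the step succeeded.
```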
When a tool container is disabled, I'd like the lines that are going into it to be different from "enabled" lines.
They could be grey or dotted for example.
When working on a workflow and disabling containers, I find that the lines entering disabled containers become confusing and clutter the canvas. It would be much easier to focus my attention if the lines that remain enabled could be distinguished at a glance.
I would like Alteryx to create an internship support program that provides a license similar to a trial, but for an extended period, say 6 to 8 weeks, and tied to Core certification. You could repackage much of the existing training into a curriculum aimed at teaching new users the elements necessary to pass the Core certification within a short time frame.
Our organization just launched an internship program and had our first group of interns start 5 weeks ago. I had to come up with a plan that gave the intern a valuable experience, so I decided to make Alteryx Core certification a key objective, put him on a spare license we had for the duration, and worked with him to earn his Core certification.
I think this could be a great marketing tool for Alteryx. It would get more people entering the workforce educated about your product, so that no matter where they end up, they might already be fans who suggest the tool as a solution at a new job that doesn't currently know about you. Conversely, it gives interns a certification that shows they know more than other applicants for a job where Alteryx is already in use. I am sure there are tax benefits to Alteryx as well for each license used.
This is kind of how we discovered Alteryx: we had issues with data volume and technology limitations (Excel), and someone who had used Alteryx at a prior company suggested we try it out. We purchased a couple of licenses; within a couple of years we had 16. You can't sell to someone who doesn't know you exist. An internship-type license is a good way to expand the list of people in the workplace who know you exist. Even better, they will have reached a level of knowledge, Core certification, sufficient to have a basic appreciation of your value.
It would be really helpful if Alteryx Server could connect directly to files on cloud file storage such as Dropbox, Box, and OneDrive. For example, a workflow could access specific source files, or a folder with multiple files, stored on Dropbox, run against those files, and then write the output to another folder on Dropbox. We are making less and less use of internal file servers, so accessing files directly from the cloud would allow for additional deployment scenarios and flexibility.
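To illustrate the round trip described above, here is a minimal sketch using the official Dropbox Python SDK (pip install dropbox); the access token and folder paths are placeholders:

```python
# Sketch of the desired read-process-write round trip against Dropbox.
# The access token and paths are placeholders.
import dropbox

dbx = dropbox.Dropbox("ACCESS_TOKEN")  # hypothetical token

# Read a source file from a Dropbox folder.
_, response = dbx.files_download("/sources/input.csv")
data = response.content

# ... the workflow's transformations would happen here ...

# Write the result to another Dropbox folder, overwriting if present.
dbx.files_upload(
    data,
    "/outputs/result.csv",
    mode=dropbox.files.WriteMode.overwrite,
)
```

Native support on the server would remove the need for tokens and API calls like these in every workflow.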