
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

Hi there Alteryx team,

 

When we load data from raw files into a SQL table, we use this pattern in almost every single loader, because the built-in "Update, insert if new" functionality has several problems: it is very slow; it cannot take advantage of SSVB; it does not do deletes; and it doesn't check for changes in the data, so your history tables get polluted with updates that are not real updates.

 

This pattern below addresses these concerns as follows:

- You explicitly separate out the inserts by comparing to the current table, and use SSVB on the connection, thereby maximizing speed.

- Rows that no longer exist in the source are deleted from the target, allowing the history table to keep the history.

- Finally, rows that exist in both source and target are checked for data changes and only updated if one or more fields have actually changed.

 

Given how commonly we have to do this (on almost EVERY data pipe from files into our database), could we look at adding an Incremental Update tool to Alteryx to make this easier? This is common functionality in other ETL platforms, and it would be a great addition to Alteryx.
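To make the ask concrete, here is a minimal sketch (plain Python, with made-up key/value rows standing in for the real source file and target table) of the three-way split the pattern performs:

source = {1: "alice", 2: "bob-updated", 4: "dana"}   # incoming file rows
target = {1: "alice", 2: "bob", 3: "carol"}          # current SQL table rows

inserts = {k: v for k, v in source.items() if k not in target}  # bulk load these (SSVB)
deletes = [k for k in target if k not in source]                # delete; history table keeps history
updates = {k: v for k, v in source.items()
           if k in target and target[k] != v}                   # only genuine changes

print(inserts)  # {4: 'dana'}
print(deletes)  # [3]
print(updates)  # {2: 'bob-updated'}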

 

 

[Screenshot: example workflow implementing the incremental update pattern]

 

My team uses a shared macro repository (say F:\AlteryxMacros), and we recently ran into an issue with the default save location for macros. While we save most macros to the repository, there are times when folks save their macros elsewhere (say C:\MyAwesomeWorkflow). The issue we've encountered is that File >> Save As for a macro ALWAYS defaults to the macro repository, even when the macro is currently saved elsewhere (C:\MyAwesomeWorkflow). Speaking for a friend, people have accidentally saved things to the macro repository, or wasted time navigating from the repository back to their current folder.

 

If a macro is already saved somewhere, please change File >> Save As to default to its current folder. Thanks!
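A minimal sketch of the proposed default (plain Python; the repository path is the one from the example above):

from pathlib import Path
from typing import Optional

MACRO_REPOSITORY = Path(r"F:\AlteryxMacros")

def default_save_dir(current_path: Optional[str]) -> Path:
    # If the macro already lives somewhere, default Save As to that folder;
    # only a never-saved macro should fall back to the repository.
    if current_path:
        return Path(current_path).parent
    return MACRO_REPOSITORY

print(default_save_dir(r"C:\MyAwesomeWorkflow\my_macro.yxmc"))  # C:\MyAwesomeWorkflow
print(default_save_dir(None))                                   # F:\AlteryxMacros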

1.  I would like the tab color/contrast of the active tab to be more prominent / discernible.  It does not really stand out. 

2.  I would like the ability to set default colors for different open workflow types -- standard workflow, macro, analytical app, so I can use color to quickly distinguish between tab dependencies or simply what is what.

3.  I would like the ability to change the color of any tab at any time (similar to Tableau Desktop, but with greater color choice).

 

 

 

AD/LDAP authentication should be an option for the Mongo tool, and the ability to use Gallery connections would also be great. Most enterprises no longer allow local SQL authentication, in order to simplify security configuration control.

Right now, in order to pass a parameter to Pre/Post SQL, we need to build a macro as a workaround.

It would be really great if this were a native capability of the Output tool, so we don't have to replicate all the Output tool fields as macro inputs.
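For illustration, this is roughly what the macro workaround has to accomplish today, sketched in Python with pyodbc (the DSN, table, and parameter value are made up):

import pyodbc

batch_id = 20240131  # the value we would like to pass into Post SQL

conn = pyodbc.connect("DSN=MyWarehouse")  # assumed DSN
cur = conn.cursor()
# A natively parameterized Post SQL could look like this, instead of
# rebuilding the whole Output tool configuration as macro inputs:
cur.execute("UPDATE load_audit SET status = 'COMPLETE' WHERE batch_id = ?", batch_id)
conn.commit()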

The option to open Hyper files in 2019.4 is great! For some of our use cases it would be even better if we were able to directly open Hyper files that have been published to Tableau Server.

 

It should be possible to achieve this with the Tableau REST API method Download Data Source, which returns a Tableau Packaged Data Source (.tdsx); since a .tdsx is essentially a zip file, it can then be opened as one to navigate to the contained Hyper file.
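A rough sketch of that chain in Python (the server URL, API version, site/datasource IDs, and auth token are placeholders; the endpoint is Tableau's Download Data Source method):

import zipfile
import requests

server = "https://tableau.example.com"
site_id, datasource_id = "SITE_LUID", "DATASOURCE_LUID"
headers = {"X-Tableau-Auth": "TOKEN"}  # from a prior REST sign-in call

resp = requests.get(
    f"{server}/api/3.9/sites/{site_id}/datasources/{datasource_id}/content",
    headers=headers,
)
resp.raise_for_status()
with open("source.tdsx", "wb") as f:
    f.write(resp.content)

# A .tdsx is a zip archive; extract the embedded Hyper file from it.
with zipfile.ZipFile("source.tdsx") as tdsx:
    for name in tdsx.namelist():
        if name.endswith(".hyper"):
            tdsx.extract(name, "extracted")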

I find the Run Command tool to be counter-intuitive: rather than supplying a required I/O parameter (at least one of "Write Source" and "Read Results"), I would rather just use a "Block Until Done" approach: 1. write file, 2. issue custom system command, 3. read file. An even simpler example is the case where I don't need any I/O to/from the system command; in that case, I just want to issue the command, nothing more. But the current tool requires me to specify a dummy file, which is counter-intuitive and also leaves that unnecessary file lying around.

 

To fix this up without breaking existing user implementations, the "idea" is:

  • Do not require either "Write Source" or "Read Result" ... allow both to be blank.
  • Allow (but don't require) any of "Command," "Command Arguments," and "Working Directory" to be dynamically populated from fields in the data streamed into the tool.

So... any existing user implementation should be unaffected, but these changes would allow users to implement system commands in a more intuitive manner, and even allow for very dynamic system commands driven by the workflow.
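In Python terms, the requested behavior amounts to something like this per-record loop (the commands and paths are made up; note there is no mandatory dummy input or output file):

import subprocess

rows = [
    {"Command": "7z", "CommandArguments": "a archive1.zip folder1", "WorkingDirectory": r"C:\data"},
    {"Command": "7z", "CommandArguments": "a archive2.zip folder2", "WorkingDirectory": r"C:\data"},
]

for row in rows:
    # Just run the command built from this record's fields, nothing more.
    subprocess.run(
        [row["Command"], *row["CommandArguments"].split()],
        cwd=row["WorkingDirectory"],
        check=True,
    )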

 

Thanks!

First off, let me say that I really love that the Render tool adds commas to your numbers when you output them to Excel. You can even control the number of decimals!

 

However, there are those times that I wish I could turn the commas off. For example, I have a column that represents years. In this case, I want it to be a number, but I don't want commas. I can see this xml coming out of my table tool:

 

.de41ddeb2857c4579b858debce63bfbec tbody .column0 { numeric:true; decimal-places:0; } 

 

I would love an additional item, like separator:false, that could be set in the Table tool to shut off the separator. I've mocked up the Table tool here:

[Screenshot: mock-up of a separator option in the Table tool]

In my limited knowledge, I'm guessing Alteryx would need to change/enhance the way their pcxml is structured.
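As an analogy only: Python's format specs make exactly the on/off distinction this idea asks pcxml to support:

year, revenue = 2023, 1234567.89
print(f"{revenue:,.2f}")  # '1,234,567.89'  -> separator on, 2 decimal places
print(f"{year:d}")        # '2023'          -> separator off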

 

Hi 

I'm really missing a search in the metadata pane.

If I am on the data pane:

[Screenshot: the data pane]


If I'm browsing through metadata:

[Screenshot: the metadata pane]



I like the new cache option in 2018.3, but I would like a user setting added that would allow me to 1) write the cache files to a local drive, and 2) have them persist when I re-open Alteryx. Currently, the files are written to the default user temp space and don't persist when Alteryx is closed. Thanks!

It would be great to increase the size of the content displayed in the Results window. I use it primarily to explore data, and with my insufficiently good eyesight this is a challenge. Some non-Alteryx solutions were proposed before, but I feel they are not sustainable in the long run.

 

Best 

Teba

When moving a tool container, all of the tools within it become misaligned with the canvas grid. Moving any single tool immediately re-aligns it to the grid, which puts it out of alignment with the rest of the tools in the container.

 

Example:  Put 3 tools in a row in a tool container, all aligned horizontally.  Next, move the container.  Now, move the middle tool, then try to place it back in alignment with the other two.  You won't be able to, because they are out of alignment with the canvas grid.

 

Please fix this.  

It would be great if we could set the default size of the window presented to the user upon running an Analytic App. Better yet, the option to also have it be dynamically sized (auto-size to the number of input fields required).

I've been spending some time looking at low-code app development platforms, and one of the features that these have which it would be great to see added into Alteryx Analytic Apps is the ability to display results directly in the app interface pane.

 

At the moment when an app successfully runs the results can be shown in a pop-out window, as shown below:

 

[Screenshot: Analytic App results shown in a pop-out window]

 

An example from a low code built app is this:

 

[Screenshot: results displayed inline in a low-code app]

 

Therefore, the new feature it would be great to add is a browse result window within the interface tool, or a way to render the results and display them in that window.

 

[Screenshot: mock-up of a browse result window in the app interface pane]

 

Looking forward to hearing from others: what else have you seen in web apps that would be great for improving Alteryx Analytic Apps?

 

 

Please add a configuration to the Redshift bulk load to EITHER use access keys OR an IAM EC2 role for access.

 

We should not have to specify access keys when we are in an IAM enabled environment.
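For reference, here are the two auth styles as they appear in Redshift's COPY command, sketched with placeholder names (in an IAM environment, the key-based style forces you to pull temporary credentials, e.g. via boto3's default chain, and pass a session token too):

import boto3

# Preferred in an IAM-enabled environment: no keys at all.
copy_with_role = (
    "COPY my_schema.my_table FROM 's3://my-bucket/staging/' "
    "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftLoad' CSV;"
)

# What the bulk loader effectively forces today.
creds = boto3.Session().get_credentials().get_frozen_credentials()
copy_with_keys = (
    "COPY my_schema.my_table FROM 's3://my-bucket/staging/' "
    f"CREDENTIALS 'aws_access_key_id={creds.access_key};"
    f"aws_secret_access_key={creds.secret_key};token={creds.token}' CSV;"
)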

 

Thanks

It would be really helpful to have a bulk load 'output' tool for Snowflake. This would be functionality similar to what is available with the Redshift bulk loader.

Currently it takes a really long time to insert via ODBC, or you would have to write a custom solution to get this to work.

 

This article explains the general steps but some of the manual steps outlined would have to be automated to arrive at a solution that is entirely encapsulated within a workflow.

http://insightsthroughdata.com/how-to-load-data-in-bulk-to-snowflake-with-alteryx/
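The core of what such a tool would automate is the stage-then-copy pattern, sketched here with the Snowflake Python connector (connection values and file paths are placeholders):

import snowflake.connector

conn = snowflake.connector.connect(
    user="LOADER", password="...", account="myaccount",
    warehouse="LOAD_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()
cur.execute("PUT file://C:/staging/orders_*.csv @%ORDERS")  # stage local files to the table stage
cur.execute("COPY INTO ORDERS FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")  # bulk load from the stage
cur.close()
conn.close()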

I understand the difficulties of making Alteryx Designer and Alteryx Server available for Linux but there are so many organizations and possibilities for development and scaling on Linux environments. It would be incredibly helpful if Alteryx was available on Linux. Please strongly consider.

DELETE FROM Source_Data WHERE ID IN
(SELECT ID FROM My_Temp_Table WHERE FLAG = 'Y')

 

.... 

 

Essentially, I want to update a DB table with either an update or the deletion of rows. I can't delete all of the data. My workaround will be to create/insert the keys I want to delete into a table, and then use an Input/Output tool with SQL that performs the delete. Any other suggestions are welcome, but a tool would be best.
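A sketch of that workaround in Python with pyodbc (the DSN is assumed; the table names are the ones from the SQL above; keys are made up):

import pyodbc

conn = pyodbc.connect("DSN=MyWarehouse")  # assumed DSN
cur = conn.cursor()

keys_to_delete = [(101,), (102,), (205,)]
cur.executemany("INSERT INTO My_Temp_Table (ID, FLAG) VALUES (?, 'Y')", keys_to_delete)
cur.execute(
    "DELETE FROM Source_Data "
    "WHERE ID IN (SELECT ID FROM My_Temp_Table WHERE FLAG = 'Y')"
)
conn.commit()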

 

Thanks,

Mark

Add the capability to hard-rename columns in all modes.

Would like to be able to reference the UserID of the person running the workflow within the workflow itself, usually for authentication purposes.

 

For example, we use the Publish to Tableau Server tool. The main developer embeds their password in the tool and then publishes the workflow to Gallery. We want to verify that the person running the workflow on Gallery can actually publish to Tableau Server, not just that the person who originally published the workflow could.

 

Another example: we need to upload data to our data lake through APIs, passing in the user information of whoever is publishing that package through Alteryx, and checking that they can indeed publish there.

 

Basically, we need logic within the workflow that references who is running the workflow.
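For instance, with the Tableau use case, that logic could look roughly like this (everything here is an assumption: the %RunningUserID% engine constant does not exist today and is exactly what is being requested, the IDs and token are placeholders, and site role names vary by Tableau version):

import requests

server, site_id = "https://tableau.example.com", "SITE_LUID"
headers = {"X-Tableau-Auth": "TOKEN", "Accept": "application/json"}
running_user_id = "%RunningUserID%"  # hypothetical constant, mapped to the matching Tableau user LUID

resp = requests.get(f"{server}/api/3.9/sites/{site_id}/users/{running_user_id}",
                    headers=headers)
resp.raise_for_status()
site_role = resp.json()["user"]["siteRole"]
can_publish = site_role in {"Creator", "ExplorerCanPublish", "SiteAdministratorCreator"}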

 

We understand that this would most likely only be supported when workflows are run on Gallery, as there isn't a UserID tied to someone when running on a local machine. 
