The Product Idea boards have gotten an update to better integrate them into our Product team's idea cycle! However, this update does have a few unique behaviors; if you have any questions about them, check out our FAQ.

Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!
Submitting an Idea?

Be sure to review our Idea Submission Guidelines for more information!


Featured Ideas


Please add a tool to edit arbitrary cells in a table and update the source after editing, similar to the "Edit Top 200 Rows" feature in SQL Server Management Studio. That would be very helpful.


Please add the "Don't Output Input Objects" option. 

 

It would be nice to eliminate the input objects when the processed output object is all that is desired.  When processing spatial data, keeping the input objects can lead to massive amounts of unnecessary data in the output data stream.

On the Reporting palette, the Map Legend Builder tool has an extra "the" in the tooltip. I have enclosed an image below. Full disclosure: it isn't a bug, it doesn't affect functionality, and it's trivial. This is version 2018.3.4.51585.

 

[Attached image: alteryxBug.PNG]

Hi,

 

Would love the option to pass a field to the AWS S3 Connector for the Access Key & Secret Access Key. We are building an extensive datamart using AWS S3, and instead of manually changing keys each quarter (per our security policy) in each workflow and tool, we'd rather change them in one spot and have the change filter into all affected workflows.

 

Thanks.


The silent install for the census data is not completing successfully. It appears that it can't create folders. When attempting to run the command: DataInstallcmd.exe /s /install all /log "C:\temp\logs\alteryx.txt"

I received an error in the log:  Install failed: Directory: C:\Program Files (x86)\Alteryx\DataProducts\ does not exist

I received this error regardless of how I attempted to run the command: as a user with admin permissions, as that user running the command as an administrator, from an elevated command prompt, etc. It was not until I manually created the directory via the following command that the silent install ran successfully.

 

mkdir "C:\Program Files (x86)\Alteryx\DataProducts"
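For what it's worth, the whole workaround can be scripted. The sketch below (Python, purely illustrative; it assumes the same paths as above and that DataInstallcmd.exe is on the PATH) pre-creates the missing directory and then runs the same silent-install command:

# Sketch: pre-create the directory the installer cannot create, then run the silent install.
# Paths mirror the ones in this post; adjust for your environment.
import os
import subprocess

DATA_DIR = r"C:\Program Files (x86)\Alteryx\DataProducts"
LOG_FILE = r"C:\temp\logs\alteryx.txt"

os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(os.path.dirname(LOG_FILE), exist_ok=True)

subprocess.run(
    ["DataInstallcmd.exe", "/s", "/install", "all", "/log", LOG_FILE],
    check=True,
)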

Do we have the capability to read numbers, or text for that matter, from a PDF file as an input to Designer? We have a system that produces a PDF report, and we then manually have to segregate the issues according to their categories. Can someone please suggest whether a solution already exists or is in the works?
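As far as I know there is no native PDF input, but one possible interim approach is to extract the text with the Python tool (2018.3+) and a PDF library. The sketch below assumes the pdfplumber package is installed in the tool's environment and uses an illustrative file path:

# Sketch: pull raw text out of a PDF so it can be parsed downstream in the workflow.
# Assumes pdfplumber is installed (pip install pdfplumber); the path is illustrative.
import pdfplumber
import pandas as pd

rows = []
with pdfplumber.open(r"C:\reports\issues_report.pdf") as pdf:
    for page_number, page in enumerate(pdf.pages, start=1):
        text = page.extract_text() or ""
        for line in text.splitlines():
            rows.append({"page": page_number, "line": line})

df = pd.DataFrame(rows)
# From here the lines could be categorized (e.g. with regex matches per issue category)
# and handed back to the workflow, e.g. via Alteryx.write(df, 1) inside the Python tool.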

I love how the new (as of 2018.3) Python tool has a Jupyter notebook in the configuration panel. Jupyter is great, has a lot of built-in help, and is robust enough that there is really no need for an external Python IDE anywhere else. I would love to see that with the R tool as well. For now (as of 2018.3), it's much easier to develop R outside of Alteryx, e.g. in RStudio, and then copy the code into the R tool.

 

Therefore, this request is to implement R just like Python, using Jupyter. This would allow us to script it and see our visualizations (etc.) right in our Jupyter window, and it would eliminate the need to have RStudio off to the side. Here are a couple of links that may hint at how to make it happen:

 

Hope you can make it happen -- thanks!

I have created a workflow that runs on Alteryx Server for a group of users. If two people were to run the workflow consecutively, chances are one could overwrite the other's file. It would be great if, when the user adds a description to the Optional Job Name field before hitting Run, that text could be passed into the workflow and used as a prefix for the output file name.

One way to get around this could be to create a manual input that replaces a dummy string hard-coded into the flow. A lot more moving parts, though.

Hi All,

 

Data security is very important nowadays, yet there is no encryption option for output files from Alteryx Designer.

Imagine: anyone who has Alteryx Designer can open any .yxdb file, even one containing sensitive data.

I suggest adding an encryption option to the Output Data tool.

 

Best Regards,

Samuel

Love the functionality to create filters on the Calgary database, but it would be nice to be able to select the columns you want returned. There are times when you only want a couple of columns, but the input tool returns all columns, creating a larger dataset than required. You can add a Select tool right after the input, but that happens after the entire dataset has been loaded into memory. Combining the two would make the Calgary Input tool behave more like a database than a standard "dumb" input source.

Was thinking with my peers at work that it might be good to have the Join tools expanded, both for desktop and in-database joins.

As for the desktop join: the Left and Right outputs show only the records that are exclusive to that side of the operation. Would it be possible to also have the option of adding the records the two sides have in common?

As for the in-DB join: the DB join acts like a classic join (left with matching, right with matching data). Would it be possible to also get only-left and only-right join options?
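To illustrate the outputs being asked for, here is a small pandas sketch (pandas is used purely for illustration; the column names are made up). A single outer merge with an indicator column yields the left-only, right-only, matched, and left-plus-matched sets:

# Illustrative pandas sketch of the join variants described above.
import pandas as pd

left = pd.DataFrame({"key": [1, 2, 3], "l_val": ["a", "b", "c"]})
right = pd.DataFrame({"key": [2, 3, 4], "r_val": ["x", "y", "z"]})

# Full outer join with an indicator column recording where each row came from.
merged = left.merge(right, on="key", how="outer", indicator=True)

left_only = merged[merged["_merge"] == "left_only"]    # exclusive to the left side
right_only = merged[merged["_merge"] == "right_only"]  # exclusive to the right side
matched = merged[merged["_merge"] == "both"]           # records the two sides share
left_plus_matched = merged[merged["_merge"].isin(["left_only", "both"])]  # classic left join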

 

 

My users love having the ability to pick objects from a reference file in the Map tool in the Interface palette.  However, usually they need to pick objects that are interspersed amongst others.  The Control + Left Click works great, until they pick an incorrect object.  The only option is to clear the selection and start over.

 

Please add something as simple as this: Control + Left Click on an already-selected object deselects it.

 

 

Best,

David

 


On the Output Data tool, when outputting to a database, there are Pre & Post Create SQL Statement options. These allow the execution of a SQL statement in the database, but only as part of a create-table action. To use this functionality you must populate a table using the 'Create New Table' or 'Overwrite' option, which is no good if you're using the Update or Append options. Change these options to Pre & Post SQL Statements, as on the Input Data tool. This would allow you to execute SQL statements in the database either before or after the population action, which can be really handy if you need to execute database stored procedures, for example.

Please note: you can already work around this by branching off your data stream and creating or overwriting a dummy table, then using the Pre or Post Create SQL option for your statement. This is a bit messy, though, and can leave dummy tables in the database.
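For context, the behavior being requested is essentially the sequence sketched below (pyodbc is used purely for illustration; the DSN, table, and stored-procedure names are hypothetical):

# Illustration only: the sequence the requested Pre & Post SQL options would cover.
# The DSN, table, and stored-procedure names are hypothetical.
import pyodbc

conn = pyodbc.connect("DSN=MyWarehouse;UID=user;PWD=secret")
cur = conn.cursor()

# Pre SQL statement: runs before the table is populated.
cur.execute("EXEC dbo.usp_prepare_staging")

# ... the Output Data tool's Append/Update action would populate the table here ...

# Post SQL statement: runs after the population action, e.g. a stored procedure.
cur.execute("EXEC dbo.usp_merge_staging_into_fact")
conn.commit()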

Some of the workflows I use have multiple inputs that can take a long time to load initially. The new cache function has been amazing, but there is one big drawback for me: I can't cache multiple tools at the same time. Alteryx will let me eventually cache all of the tools I want cached, but it takes multiple runs of the workflow. This still saves me time in the end, but it feels a bit cumbersome to set up.

In response to my question here: https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Singin-Error-to-Tableau-server-using-P...

 

The Publish to Tableau Server tool does not support SAML/SSO. I would like this capability to be added, as it would make our business process more efficient.

 

Thank you.


When outputting files in Avro format, it would be nice to have Alteryx either throw an error/warning or automatically add a prefix when field names do not conform to the Apache Avro specification. For example, if I were to try to output an .avro file with a field named "2018 actions", Alteryx could throw a warning/error to remind me to rename the field, or it could change the field name for me to something like "X2018_Actions".
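As a sketch of the renaming behavior being asked for: the Avro specification requires names matching [A-Za-z_][A-Za-z0-9_]*, so something along these lines could run ahead of the output (the "X" prefix is just an example, matching the one above):

# Sketch: rename columns that don't conform to the Avro name rule [A-Za-z_][A-Za-z0-9_]*.
import re
import pandas as pd

AVRO_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def avro_safe(name: str) -> str:
    """Replace illegal characters with underscores and prefix names that start with a digit."""
    cleaned = re.sub(r"[^A-Za-z0-9_]", "_", name)
    if not AVRO_NAME.match(cleaned):
        cleaned = "X" + cleaned
    return cleaned

df = pd.DataFrame({"2018 actions": [1, 2, 3]})
df.columns = [avro_safe(c) for c in df.columns]
print(df.columns.tolist())  # ['X2018_actions']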

It would be really helpful if Alteryx Server could connect directly to files on cloud file storage such as Dropbox, Box, and OneDrive. For example, a workflow could access specific source files or a folder with multiple files stored on Dropbox, run against those files, and then write the output to another folder on Dropbox. We are making less and less use of internal file servers, so accessing files directly from the cloud would allow for additional deployment scenarios and flexibility.


Add extra capabilities in the Output Data tool so that the following can be accomplished without the need for Post SQL processing:

 

- Create a new SQL View

- Assign a Unique Index key to a table

 

Having these capabilities directly available in the tools would greatly increase the usability and reduce the workload required to build routines and databases.

 

Google has recently introduced the concept of Team Drives.
 
 
Team Drives are shared spaces where teams can easily store, search, and access their files anywhere, from any device.
Unlike files in My Drive, files in a Team Drive belong to the team instead of an individual. Even if members leave, the files stay exactly where they are so your team can continue to share information and get work done.
I cannot see the files through the Google Sheets Input tool. I only see files in My Drive.
 
How do I access Google Sheets placed into Team Drives? Thanks in advance.
 
Response from Alteryx Support:
 

Hi Chris,

This is not supported by the current tool. The way we pull the list of sheets only returns the personal sheets the user owns; even if a user creates a sheet in a Team Drive, the owner is the Team Drive rather than that user. Additional parameters need to be added in order to pull the Team Drive sheets a user has access to, and we would need to use a different API (the Drive API rather than the Sheets API) for the pull.

Please add this as a suggestion for a future enhancement on our idea center.
https://community.alteryx.com/t5/Alteryx-Designer-Ideas/idb-p/product-ideas

Thank you.


--
Angela Ogle | Customer Support Engineer
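For anyone hitting the same limitation, the Drive API call the support response alludes to looks roughly like the sketch below. The service-account file and scope are assumptions for illustration; the key pieces are the corpora / includeItemsFromAllDrives / supportsAllDrives parameters:

# Rough sketch: list spreadsheets across Team (shared) Drives via the Drive API v3.
# The service-account path and scope are assumptions for illustration.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service_account.json",
    scopes=["https://www.googleapis.com/auth/drive.readonly"],
)
drive = build("drive", "v3", credentials=creds)

resp = drive.files().list(
    q="mimeType='application/vnd.google-apps.spreadsheet'",
    corpora="allDrives",              # search My Drive and all Team/shared drives
    includeItemsFromAllDrives=True,   # the extra parameters the Sheets listing lacks
    supportsAllDrives=True,
    fields="files(id, name, driveId)",
).execute()

for f in resp.get("files", []):
    print(f["name"], f["id"], f.get("driveId"))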

Currently, when one uses the Google BigQuery Output tool, the only options are to create a table or append data to an existing table. It would be more useful if there were a way to replace all data in the table rather than appending. Having the option to overwrite an existing table in Google BigQuery would be optimal.
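As an interim workaround outside the tool, the google-cloud-bigquery client can express exactly this via a write disposition; the project, dataset, and table names below are placeholders:

# Sketch: replace (truncate and reload) a BigQuery table instead of appending to it.
# Project, dataset, and table names are placeholders.
import pandas as pd
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

df = pd.DataFrame({"region": ["EMEA", "APAC"], "revenue": [120, 95]})

job_config = bigquery.LoadJobConfig(
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,  # overwrite existing rows
)
job = client.load_table_from_dataframe(
    df, "my-project.my_dataset.my_table", job_config=job_config
)
job.result()  # wait for the load job to finish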
