
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

It would be helpful to have Read Uncommitted available as a global runtime setting.

Most of the workflows I design need this set, so rather than risk forgetting to tick this option on one of my inputs, it would be beneficial as a global setting.

For example, the user could still configure specific inputs individually while the global runtime check box remained unchecked.

However, if the user checked the global runtime setting for Read Uncommitted, then the workflow would automatically use an uncommitted read on all of the inputs.

When the user unchecks the global runtime setting for Read Uncommitted, only the inputs that were individually configured with this option would keep the uncommitted read.
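For context, this is what the per-input check box amounts to at the session level; a minimal sketch in Python, assuming a SQL Server source reached through pyodbc (the DSN and table names are hypothetical):

```python
import pyodbc

# Hypothetical DSN; any ODBC source that honours isolation levels works the same way.
conn = pyodbc.connect("DSN=MyWarehouse")
cur = conn.cursor()

# The session-level equivalent of ticking "Read Uncommitted": subsequent reads
# ignore locks held by concurrent writers, so dirty reads become possible.
cur.execute("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;")

for row in cur.execute("SELECT TOP 10 * FROM dbo.Orders"):  # hypothetical table
    print(row)

conn.close()
```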

 

There is "update:insert if new" option for the output data tool if using an ODBC connection to write to Redshift.

This option really needs to be added to the "Amazon Redshift Bulk Loader" method of the Output Data tool and to the Write Data In-DB tool.

 

Without it, you are forced to use the "Delete and Append" output option, which is a pain because you then need to keep reinserting data you already have, slowing down the process.

 

Using the ODBC connection option of the Output Data tool to write to Redshift is not viable because it is too slow: trying to write 200 MB of data, the workflow ran for 20 minutes without any data reaching the destination table, and I ended up just stopping it.
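For what it's worth, the pattern the bulk loader would need to emit is a fairly standard staging-table upsert. A minimal sketch through psycopg2, with hypothetical connection details and tables:

```python
import psycopg2

# Hypothetical cluster, credentials, and tables.
conn = psycopg2.connect(host="example.redshift.amazonaws.com", port=5439,
                        dbname="analytics", user="loader", password="...")
cur = conn.cursor()

# Assumes new rows were already bulk-loaded into sales_staging (e.g. via COPY).
# Step 1: remove target rows that the staging data will replace.
cur.execute("""
    DELETE FROM sales
    USING sales_staging
    WHERE sales.order_id = sales_staging.order_id;
""")
# Step 2: insert everything from staging, covering both updates and new rows.
cur.execute("INSERT INTO sales SELECT * FROM sales_staging;")

conn.commit()
conn.close()
```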

It would be good to have the ability to select which column to use as the primary key when using the "Create New Table" output option of the Output Data tool.

 

When using the "update: insert if new" output option, you receive the error "Primary Key required for Update" if table does not have primary key.

 

The workaround is to manually create the table with a primary key constraint.
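A minimal sketch of that workaround, again via psycopg2 with a hypothetical table, which can also be run as a pre-SQL statement:

```python
import psycopg2

conn = psycopg2.connect(host="example.redshift.amazonaws.com", port=5439,
                        dbname="analytics", user="loader", password="...")
cur = conn.cursor()

# Create the target table up front with the primary key that
# "update: insert if new" needs; column names are hypothetical.
cur.execute("""
    CREATE TABLE IF NOT EXISTS sales (
        order_id   BIGINT NOT NULL,
        amount     DECIMAL(12,2),
        updated_at TIMESTAMP,
        PRIMARY KEY (order_id)
    );
""")
conn.commit()
conn.close()
```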

For the Output Data tool's Microsoft Excel (*.xlsx) file format - the non-legacy one - there is no "Delete Data & Append" option like the one the Legacy and 97-2003 Excel formats have.

 

Having Delete Data & Append for the most recent version of Excel would be very beneficial. Without it, there does not appear to be a way to update an existing Excel sheet from an Alteryx workflow while preserving the formatting within the sheet. The Overwrite/Drop option removes all formatting.

 

I have a workflow refreshing an Excel sheet daily and then emailing it to a distribution list at the end of the workflow. Unfortunately, right now I have to use the 97-2003 format to preserve the formatting of the Excel sheet when it is automatically refreshed and emailed each day.

 

Can you please assess adding this option? Thanks!
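In the meantime, the requested behaviour can be approximated with a small script step, e.g. via openpyxl, which rewrites cell values in place so each cell keeps its styling. The file, sheet, and data below are hypothetical:

```python
from openpyxl import load_workbook

wb = load_workbook("daily_report.xlsx")  # hypothetical workbook
ws = wb["Data"]                          # hypothetical sheet

# Blank out old values; cell styles stay attached to the cells.
for row in ws.iter_rows():
    for cell in row:
        cell.value = None

# Write the refreshed header and rows into the same cells.
new_rows = [("Region", "Sales"), ("West", 1200), ("East", 950)]
for r, values in enumerate(new_rows, start=1):
    for c, v in enumerate(values, start=1):
        ws.cell(row=r, column=c, value=v)

wb.save("daily_report.xlsx")
```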

Please assess the value of having HDF5 as a data input. A possible workflow could be:

 

HDF5 => | Processing tasks |  => export to Tableau (.tde) or Qlik (.qnx) => vizualisation workflows

 

Thank you.

 

Regards,

Cristian.

Please add XBRL - eXtensible Business Reporting Language (https://www.xbrl.org/, http://www.xbrleurope.org/) - as an output file format.

 

XBRL is based on XML and is used in the financial world; for example, all public companies in the USA send their financial reports to the Securities and Exchange Commission in XBRL format (http://xbrl.sec.gov/).

In Japan, the Central Bank and the Financial Services Agency (FSA) collect financial data from banks and financial companies using the XBRL format.
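Since an XBRL instance document is XML, an output option would amount to structured XML generation plus taxonomy handling. A deliberately incomplete sketch (the concept, context, and unit references are hypothetical, and a real filing also needs context/unit definitions and a taxonomy reference):

```python
import xml.etree.ElementTree as ET

XBRLI = "http://www.xbrl.org/2003/instance"
ET.register_namespace("xbrli", XBRLI)

root = ET.Element(f"{{{XBRLI}}}xbrl")

# One reported fact; "Revenue", the context, and the unit are hypothetical
# and would normally come from a declared taxonomy.
fact = ET.SubElement(root, "Revenue",
                     {"contextRef": "FY2024", "unitRef": "USD"})
fact.text = "1000000"

ET.ElementTree(root).write("report.xbrl", encoding="utf-8", xml_declaration=True)
```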

 

Thank you.

 

Regards,

Cristian

Please evaluate the opportunity to export Alteryx workflows in the XML-based GraphML format, in order to be able to import them into yEd, the free graph editor (https://www.yworks.com/products/yed).
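A converter is easy to prototype because a .yxmd workflow is itself XML. A sketch using networkx, where the Node/Connection/Origin/Destination element names reflect my reading of the workflow format and should be treated as assumptions:

```python
import xml.etree.ElementTree as ET
import networkx as nx

tree = ET.parse("my_workflow.yxmd")  # hypothetical workflow file
g = nx.DiGraph()

# Each tool becomes a node; each connection becomes a directed edge.
for node in tree.iter("Node"):
    g.add_node(node.get("ToolID"))
for conn in tree.iter("Connection"):
    origin, dest = conn.find("Origin"), conn.find("Destination")
    if origin is not None and dest is not None:
        g.add_edge(origin.get("ToolID"), dest.get("ToolID"))

nx.write_graphml(g, "my_workflow.graphml")  # opens in yEd
```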

Thank you.

Regards,
Cristian

Please evaluate the opportunity to export Alteryx workflows in .dot format, the file format used by Graphviz (http://www.graphviz.org).
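The same prototype works for Graphviz, since .dot is plain text; as above, the .yxmd element names are assumptions:

```python
import xml.etree.ElementTree as ET

tree = ET.parse("my_workflow.yxmd")  # hypothetical workflow file

lines = ["digraph workflow {"]
for conn in tree.iter("Connection"):
    origin, dest = conn.find("Origin"), conn.find("Destination")
    if origin is not None and dest is not None:
        lines.append(f'  "{origin.get("ToolID")}" -> "{dest.get("ToolID")}";')
lines.append("}")

with open("my_workflow.dot", "w") as f:
    f.write("\n".join(lines))
```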

 

Thank you.

 

Regards,

Cristian

I recently did some extensive work on using the Download tool to invoke RESTful web services. A lot of the initial effort was around ensuring that the data being passed in the header and body of the request was what the service required. Following a review of experiences on the Community, I used a tool called Fiddler to view directly what was being sent and to identify the problems in my transformations of the data going into the Download tool. The idea is to make the raw HTTP request and reply messages available directly in the Alteryx Results window when running a workflow, removing the need for another tool.
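For comparison, this is the kind of raw-message visibility being requested, sketched with Python's standard http.client debug switch against a hypothetical endpoint:

```python
import http.client
import requests

# Echo raw request/response lines to stdout, much as Fiddler displays them.
http.client.HTTPConnection.debuglevel = 1

resp = requests.post(
    "https://api.example.com/v1/things",   # hypothetical REST endpoint
    headers={"Content-Type": "application/json"},
    json={"name": "test"},
)
print(resp.status_code, resp.text)
```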


I would love to be able to double-click on an input or output file and have the file open. Second to that would be a clickable hyperlink to the file path, or a "go to" button or something. Anything would be better than my current process of copying and pasting the file path into an Explorer window.

I think there should be the ability to turn on and off the “Browse Everywhere” function.  I have found that my temp drive is filling up faster than it did before this most recent addition and, while I think Browse Everywhere is fantastic for QA, I don’t necessarily need it working in every workflow I run.

 

It would be extremely useful if most Alteryx tools had the option to output error records separately. This functionality is already present in most other ETL packages, even free ones like Pentaho Kettle. From its wiki (http://wiki.pentaho.com/display/EAI/.09+Transformation+Steps):

 


 

Step error handling allows you to configure a step so that instead of halting a transformation when an error occurs, the rows that caused an error are passed to a different step.
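Conceptually the ask is per-row error routing instead of fail-the-whole-run. A minimal sketch of the behaviour, with a hypothetical parse step standing in for whatever a tool does to each record:

```python
rows = [{"amount": "10.5"}, {"amount": "N/A"}, {"amount": "7"}]
good, errors = [], []

for row in rows:
    try:
        row["amount"] = float(row["amount"])       # the step that may fail per row
        good.append(row)
    except ValueError as exc:
        errors.append({**row, "error": str(exc)})  # routed to an error output

print("good:", good)
print("errors:", errors)
```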

 

 

Would be nice if Alteryx had the ability to run a Teradata stored procedure and/or macro that accepts input parameters. This ability appears to exist for MS SQL Server. It seems odd that I can issue a SQL statement to the database via a pre- or post-processing command on an input or output, but can't call a stored procedure or execute a macro. The only way we have found to call a stored procedure is to create a Teradata BTEQ script and use the Run Command tool to execute it. That works, but it's a bit messy and doesn't quite fit the no-coding theme of Alteryx.
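For reference, the teradatasql Python driver can already do this directly, which is roughly the capability being asked for inside Alteryx; the procedure, parameters, and credentials below are hypothetical:

```python
import teradatasql

with teradatasql.connect(host="tdprod", user="me", password="...") as conn:
    with conn.cursor() as cur:
        # CALL with ? placeholders; no BTEQ script or Run Command needed.
        cur.execute("CALL mydb.refresh_sales(?, ?)", ["2024-01-01", "WEST"])
```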

I think it would be extremely helpful to have an In-DB Detour so that you could filter on a user's selection without having to pull the data out of the database and then put it back in for more processing. This would be useful when you have a large dataset and don't want to pull the entire dataset out of the DB because it would take a long time - for example, filtering a large dataset by a specific state or region chosen by the user. The Detour tool in the Developer category actually seems like it would do the job; it just needs to connect to the In-DB tools.

Add in-database tools for SAP HANA.

Please star this idea so we can prioritize the request accordingly.

There are a number of requests for bulk loaders for various databases, and I'm adding MySQL to the list.

 

Really, every DB connection (on-prem and cloud) needs bulk loader capabilities added (if it doesn't have them already).
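For MySQL specifically, a bulk loader mostly comes down to LOAD DATA LOCAL INFILE. A sketch through mysql-connector-python, with hypothetical host, table, and file, assuming the server permits LOCAL INFILE:

```python
import mysql.connector

conn = mysql.connector.connect(host="db.example.com", database="analytics",
                               user="loader", password="...",
                               allow_local_infile=True)
cur = conn.cursor()

# Stream a local CSV straight into the table instead of row-by-row INSERTs.
cur.execute("""
    LOAD DATA LOCAL INFILE '/tmp/sales.csv'
    INTO TABLE sales
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    IGNORE 1 LINES;
""")
conn.commit()
conn.close()
```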

Our Teradata environments use LDAP authentication, and we really need the Teradata bulk load connection in Alteryx to support this. ODBC works fine with LDAP, but the bulk load connection doesn't.

 

Alteryx Development has confirmed this is currently not a feature and has put it into their backlog.
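For reference, the teradatasql Python driver exposes this as a single connection parameter, which shows how small the gap is; host and credentials are hypothetical:

```python
import teradatasql

# logmech="LDAP" switches the session to LDAP authentication.
with teradatasql.connect(host="tdprod", user="me", password="...",
                         logmech="LDAP") as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT CURRENT_TIMESTAMP")
        print(cur.fetchone())
```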

 

 

Hi, currently the S3 Upload tool only allows the file formats *.yxdb, *.json, *.csv and *.avro.

 

In order to optimize loading to Redshift, it would be good to have a few more functions:

1. Ability to upload to S3 in *.gz format (see the sketch after this list)

e.g. reading in a file using the Input tool -> S3 Upload tool (which would have a gzip option with the following settings: record limit, delimiter, UTF-8)

http://docs.aws.amazon.com/redshift/latest/dg/t_loading-gzip-compressed-data-files-from-S3.html

2. Ability to change the max record limit, delimiter, and UTF-8 encoding

3. Allow the object name to 'take file/table name from field', similar to the Output tool
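Here is the sketch referenced in item 1: what the manual version of a gzip-aware S3 upload plus Redshift COPY looks like today, using boto3 and psycopg2 with hypothetical bucket, table, and IAM role:

```python
import gzip
import shutil

import boto3
import psycopg2

# Compress the local extract before uploading.
with open("sales.csv", "rb") as src, gzip.open("sales.csv.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

boto3.client("s3").upload_file("sales.csv.gz", "my-bucket", "staging/sales.csv.gz")

conn = psycopg2.connect(host="example.redshift.amazonaws.com", port=5439,
                        dbname="analytics", user="loader", password="...")
cur = conn.cursor()
# COPY understands gzipped input directly via the GZIP option.
cur.execute("""
    COPY sales
    FROM 's3://my-bucket/staging/sales.csv.gz'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopy'
    CSV GZIP IGNOREHEADER 1;
""")
conn.commit()
conn.close()
```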

 

Adrian

 

In-Database tools enable large performance benefits on big datasets. It would be great to incorporate Multi-Row and Multi-Field Formula tools within the In-Database functions for Redshift.
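For anyone working around this today, a Multi-Row Formula usually maps onto a window function that Redshift already supports, so the SQL can be issued directly; the table and columns here are hypothetical:

```python
import psycopg2

conn = psycopg2.connect(host="example.redshift.amazonaws.com", port=5439,
                        dbname="analytics", user="loader", password="...")
cur = conn.cursor()

# Equivalent of a Multi-Row Formula referencing [Row-1:amount].
cur.execute("""
    SELECT order_id,
           amount,
           amount - LAG(amount) OVER (ORDER BY order_id) AS amount_change
    FROM sales;
""")
for row in cur.fetchall():
    print(row)
conn.close()
```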

Our DAT file structure is as follows:

 

The first line of the .DAT file must be a header row identifying the field names.

The .DAT file must use the following Concordance default delimiters:

Comma: ASCII character 020

Quote: ASCII character 254 (þ)

Newline: ASCII character 174 (®)
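For illustration, here is a minimal sketch of writing that layout from Python, using the three delimiter code points listed above; the field names and rows are hypothetical:

```python
SEP = chr(20)     # field separator, ASCII 020
QUOTE = chr(254)  # þ quote character, ASCII 254
NL = chr(174)     # ® embedded-newline replacement, ASCII 174

header = ["DOCID", "CUSTODIAN", "TEXT"]
rows = [["DOC001", "Smith", "first line" + NL + "second line"]]

def dat_line(fields):
    # Wrap every field in the quote character and join on the separator.
    return SEP.join(QUOTE + field + QUOTE for field in fields)

with open("export.dat", "w", encoding="cp1252") as f:
    for record in [header] + rows:
        f.write(dat_line(record) + "\n")
```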

 

Thank you,

Pete Vara
