
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

Preface: I have only used the in-DB tools with Teradata, so I am unsure whether this applies to other supported databases.

 

When building a fairly sophisticated workflow using in-DB tools, the workflow may sometimes fail because the underlying queries run up against CPU/memory limits. This is most common when doing several joins back to back, as Alteryx sends these as one big query with various nested subqueries. When working with datasets in the hundreds of millions or billions of records, this can be extremely taxing for the DB to run as one huge query. (It is possible to get around this by using an in-DB write out to a temporary table as an intermediate step in the workflow.)

 

When a routine does hit an in-DB resource limit and the DB kills the query, Alteryx immediately fails the workflow run. Any "temporary" tables Alteryx creates are in reality permanent tables that Alteryx usually just drops at the end of a successful run. If the run does not end successfully because it hit a resource limit, these "temporary" (permanent) tables are not dropped. I only noticed this after building out a workflow and running up against a few resource limits; I then started getting database out-of-space errors. Upon looking into it, I found all the previously created "temporary" tables were still there and taking up many TBs of space.

 

My proposed solution is for Alteryx's in-DB tools to drop any "temporary" tables they have created when a run ends, regardless of whether the entire module finished successfully.
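Until that happens, orphaned tables can be swept up manually. Below is a minimal cleanup sketch in Python against Teradata; the table-name pattern ("AYX%"), the staging database, and the connection details are all assumptions to verify against your own environment before dropping anything.

```python
import teradatasql

# All of these are assumptions to verify before dropping anything:
PATTERN = "AYX%"          # assumed prefix for Alteryx-created staging tables
DATABASE = "STAGING_DB"   # assumed database the workflow writes into

with teradatasql.connect(host="tdhost", user="user", password="pwd") as con:
    cur = con.cursor()
    cur.execute(
        "SELECT TableName FROM DBC.TablesV "
        "WHERE DataBaseName = ? AND TableName LIKE ? AND TableKind = 'T'",
        [DATABASE, PATTERN],
    )
    for (table,) in cur.fetchall():
        print(f"Dropping orphaned table {DATABASE}.{table}")
        cur.execute(f'DROP TABLE "{DATABASE}"."{table}"')
```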

 

 

Thanks,

Ryan

When using server data sources, Alteryx can take a long time to query metadata, particularly for long, complex queries. This often happens on opening a module or clicking on the Input tool to edit properties.

 

This leads to frustration with these modules. It would be good for Alteryx to cache the metadata (i.e., the columns) from these inputs and prompt the user to reuse the cached data if it takes longer than, say, 2 seconds to retrieve it from the query.
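A sketch of the requested behavior in Python: hash the connection/query pair, try the live lookup with a deadline, and fall back to a cached column list when it runs long. The fetch_metadata callable is a hypothetical stand-in for whatever actually interrogates the server.

```python
import concurrent.futures, hashlib, json, pathlib

CACHE_DIR = pathlib.Path("metadata_cache")
CACHE_DIR.mkdir(exist_ok=True)
TIMEOUT_SECONDS = 2  # the "say 2 seconds" threshold from the idea

def cached_columns(connection: str, query: str, fetch_metadata):
    """Return column metadata, preferring a live lookup with a deadline."""
    key = hashlib.sha256(f"{connection}\n{query}".encode()).hexdigest()
    cache_file = CACHE_DIR / f"{key}.json"
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fetch_metadata, connection, query)
    try:
        columns = future.result(timeout=TIMEOUT_SECONDS)
        cache_file.write_text(json.dumps(columns))  # refresh cache on success
        return columns
    except concurrent.futures.TimeoutError:
        if cache_file.exists():
            # The idea proposes prompting the user here; this just reuses it.
            return json.loads(cache_file.read_text())
        return future.result()  # nothing cached yet: wait for the server
    finally:
        pool.shutdown(wait=False)  # let a slow lookup finish in the background
```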

 

 

Given that Redshift prefers accepting many small files for bulk loading, it would be good to have a max record limit within the S3 Upload tool (similar to the functionality in the S3 Download tool).

 

The other functionality that would be useful in the S3 Upload tool is the ability to append file names with datetimestamp_001, 002, 003, etc., similar to the current Output tool.
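For anyone needing this today, a rough Python sketch of the requested behavior using boto3: chunk the records at a maximum count and upload each chunk under a datetimestamp_NNN key. The bucket, prefix, and 50,000-row limit are placeholders.

```python
import csv, datetime, io
import boto3

MAX_RECORDS = 50_000  # assumed per-file record limit

def upload_in_chunks(rows, header, bucket="my-bucket", prefix="loads/extract"):
    s3 = boto3.client("s3")
    stamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
    for part, start in enumerate(range(0, len(rows), MAX_RECORDS), start=1):
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(header)
        writer.writerows(rows[start:start + MAX_RECORDS])
        # e.g. loads/extract_20160301120000_001.csv, _002.csv, ...
        key = f"{prefix}_{stamp}_{part:03d}.csv"
        s3.put_object(Bucket=bucket, Key=key, Body=buf.getvalue().encode())
```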

Hi,

 

Carlson Companies is moving to a Vertica environment, and it would be great if it were supported by the in-database tools. That would definitely help and expand the use of Alteryx at our company!

 

Thanks,

 

Tyler Mittelstadt

At the moment, we are not able to use input data field names and their values in the Output tool, mainly in the Pre-SQL and Post-SQL statements. I see some discussions on this in the community, and many scenarios require it. It would be great to have this option.
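Purely as illustration of what the feature would do, here is a tiny Python sketch that renders a Post-SQL statement from a record's field values. The template, field names, and audit table are hypothetical, and a real implementation would need proper escaping or parameterization.

```python
# The template, field names, and audit table are all hypothetical.
POST_SQL_TEMPLATE = (
    "UPDATE load_audit SET row_count = {RowCount}, batch_id = '{BatchID}' "
    "WHERE table_name = 'sales_fact'"
)

def render_post_sql(record: dict) -> str:
    # A real implementation must escape or parameterize these values
    # rather than splicing them into the string.
    return POST_SQL_TEMPLATE.format(**record)

print(render_post_sql({"RowCount": 1250, "BatchID": "2016-03-01A"}))
```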

I think it would be great to add metadata to a yxdb. For example, I was backtracking and trying to figure out which module/app I used to create an old yxdb; for now I use Notepad++ and do a "Find In Files" search. Wouldn't it be great if the module path were available when you look at the properties of a yxdb in Alteryx?
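Until such metadata exists, the Notepad++ workaround can be scripted. A small Python sketch that scans a folder of yxdb files for a byte string, trying two common encodings since the internal string format isn't documented; the folder and search term are placeholders.

```python
import pathlib

def find_in_yxdb(folder, needle):
    # Try both encodings since the internal string format isn't documented.
    targets = [needle.encode("utf-8"), needle.encode("utf-16-le")]
    for path in pathlib.Path(folder).rglob("*.yxdb"):
        data = path.read_bytes()  # fine for a sketch; stream for huge files
        if any(t in data for t in targets):
            yield path

for hit in find_in_yxdb(r"C:\Data\Archive", "SalesModule"):
    print(hit)
```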

It would be cool if a connector line would turn red when you select it, making it easier to trace the path (similar to how the lines turn red when you click on a join tool).

Is it possible to add some color coding to the in-DB tools? I am building out models in-DB and end up with a sea of navy blue icons. Maybe they could generally correspond to the other tools: for example, Summarize would be orange, Formula lime green, etc.

I have several .yxdb files that I’ve been appending to daily from a SQL Server table in order to extend the length of time that data is retained.

They’re massive tables, but I may only need one or two rows. 

I had hoped to decrease the time it takes to get data from them by running a query on them (or a dynamic query/input), as opposed to using a Filter tool or joining on an existing data set with matching values, which would produce the same result as a filter.

Essentially, the input of .yxdb would have the option of inputting the full table or a SQL query just like a data connection.
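A sketch of why this matters, using a CSV stand-in reader since no public yxdb reader API is assumed here: streaming with a predicate can stop after the first matches instead of materializing the whole table.

```python
import csv

def read_records(path):
    # Stand-in streaming reader (CSV here); the idea asks for the same
    # ability directly against .yxdb files.
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def first_matching_rows(path, predicate, limit=2):
    matches = []
    for record in read_records(path):
        if predicate(record):
            matches.append(record)
            if len(matches) >= limit:
                break  # stop early instead of scanning the whole table
    return matches

# e.g. first_matching_rows("daily.csv", lambda r: r["CustomerID"] == "42")
```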

I run a report generator that can leave you having to save several files one at a time.

 


It would be nice to be able to save multiple files at once, whether using a check box or a shift-select method. If this method already exists, great! Where can I find out how to do it? If not, can it be an added feature?

It is very difficult moving from Alteryx functions to SQL in-database as a business user; I need to learn a whole new language.

 

In the short term, Alteryx should provide a simple function reference, as similar as possible to the Formula tool, for building formulas in the in-database tools.

 

Longer term I'd like there to be a parser from Alteryx Formulae to SQL so I can just write in my favourite Alteryx formula (or a subset thereof) and Alteryx handles the conversion to SQL. 
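A minimal Python sketch of what that mapping could look like for a small subset of one-to-one functions; anything with different argument semantics (IIF, DateTimeAdd, ...) would need a real parser rather than regex renames, and the SQL targets shown are generic rather than dialect-specific.

```python
import re

# One-to-one renames only; functions with different argument semantics
# (IIF, DateTimeAdd, ...) need a real parser, and [Field] references
# would still need mapping to the database's quoting rules.
REWRITES = {
    "Uppercase": "UPPER",
    "Lowercase": "LOWER",
    "Trim": "TRIM",
    "Abs": "ABS",
    "Round": "ROUND",
}

def alteryx_to_sql(expr: str) -> str:
    for alteryx_name, sql_name in REWRITES.items():
        expr = re.sub(rf"\b{alteryx_name}\s*\(", f"{sql_name}(",
                      expr, flags=re.IGNORECASE)
    # Alteryx string literals use double quotes; most SQL wants single.
    return expr.replace('"', "'")

print(alteryx_to_sql("Uppercase(Trim([Name]))"))  # -> UPPER(TRIM([Name]))
```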

The challenge:

We have hundreds of SOAP-based Salesforce (SF) connectors in our scheduled modules that were created with Alteryx 9.0-9.5. Alteryx 10.0+ now uses REST API-based SF connectors. We have to replace all of these connectors when we move to 10.0+.

 

Proposed idea:

Alteryx creates an automated process for converting SOAP SF connectors to REST API SF connectors, so that when you open an old module in 10.0+, they are automatically updated.

 

This seems feasible as the information supplied by Alteryx users for the SOAP SF connectors is sufficient for the REST API SF connectors to work (i.e. URL, username, password, security token, table name, fields, WHERE clause, etc...).
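A rough sketch of the proposed migration as a Python script: walk the module XML, find SOAP-era Salesforce nodes, and retag them as REST nodes. The plugin names and the assumption that settings carry over unchanged are guesses to check against real 9.x and 10.x modules.

```python
import xml.etree.ElementTree as ET

OLD_PLUGIN = "SalesforceInput"      # assumed SOAP-era plugin name
NEW_PLUGIN = "SalesforceInputREST"  # assumed REST-era plugin name

def migrate_module(path_in, path_out):
    tree = ET.parse(path_in)
    converted = 0
    for node in tree.getroot().iter("Node"):
        gui = node.find("GuiSettings")
        plugin = gui.get("Plugin") if gui is not None else None
        if plugin and OLD_PLUGIN in plugin:
            gui.set("Plugin", plugin.replace(OLD_PLUGIN, NEW_PLUGIN))
            converted += 1  # URL/username/table settings ride along as-is
    tree.write(path_out)
    return converted
```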

 

Thanks,

Jeremy

It's definitely not a good UX that the full Browse is now in the Output window. I usually have my Output window on auto-hide, and it's a few extra clicks to see the Browses now... Can we have the Browse Everywhere tab in both the Output window and the Configuration panel?

I need to consume a web service that uses a “0-legged” OAuth transaction. I contacted Alteryx tech support and found that this was not possible with the current feature set of the Download tool. Please add it.
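For reference, a 0-legged (consumer-key-only) OAuth 1.0 request is straightforward outside the Download tool; a sketch with the requests-oauthlib package, with placeholder credentials and URL:

```python
import requests
from requests_oauthlib import OAuth1

# 0-legged: sign with the consumer key/secret only, no access token.
auth = OAuth1("consumer_key", client_secret="consumer_secret")
resp = requests.get("https://example.com/api/resource", auth=auth)
print(resp.status_code, resp.text[:200])
```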

I was recently surprised to find that Alteryx doesn't already have a connector to upload to SFTP sites. I've managed to work around it with Run Command and some external programs, but it's very cumbersome. A simple SFTP upload connector would be a great addition to Alteryx.
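A condensed version of that Run Command workaround as a single Python script using paramiko (host, credentials, and paths are placeholders):

```python
import paramiko

transport = paramiko.Transport(("sftp.example.com", 22))
try:
    transport.connect(username="user", password="secret")
    sftp = paramiko.SFTPClient.from_transport(transport)
    sftp.put(r"C:\exports\output.csv", "/inbound/output.csv")
    sftp.close()
finally:
    transport.close()
```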

At TargetSmart, we create a lot of CSV deliverables for our customers. Since Alteryx differentiates between blank strings and null values (a good thing), the CSV output is not consistent between the two without an explicit Multi-Field Formula step to set all nulls to empty strings (or vice versa). This is an easy fix for us. However, in some cases we have very large files with thousands of fields and millions of records, and for these the workflow run-time is greatly increased by the Multi-Field Formula. If possible, I was wondering if adding a checkbox option to CSV output steps ("Make null/empty consistent" or "Never quote empty/null values") would be a more efficient approach, as the check could be part of the output step (which I assume is native C++) versus the Multi-Field Formula (which I assume has some level of inefficiency in interpreting the formula dynamically).
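For comparison, a Python sketch of the single cheap pass the checkbox would amount to: normalize nulls to empty strings as each record is written, rather than running a separate formula over thousands of fields first.

```python
import csv

def write_consistent_csv(records, fieldnames, path):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(fieldnames)
        for record in records:
            # Null and empty become indistinguishable on the way out.
            writer.writerow("" if v is None else v for v in record)

write_consistent_csv([(1, None, "x"), (2, "", None)], ["a", "b", "c"], "out.csv")
```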

I know you can add a field for "today" and then use that field to append the filename, so the output ends up as Output_Date.xlsx, but it would be great to be able to do that without adding a new field for the current date. If it were simply an option in the file output settings dialog, that would be great.
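For illustration, the naming the option would do is a couple of lines in Python; the date format shown is an assumption.

```python
import datetime

# Today this takes an extra field in the data just to name the file.
stamp = datetime.date.today().strftime("%Y-%m-%d")
print(f"Output_{stamp}.xlsx")  # e.g. Output_2016-04-01.xlsx
```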

It would be great to have the capability to input/output R datasets via the Input/Output tools, alongside all the other supported data formats (like CSV, Excel, SAS, SPSS, etc.).
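Until native support arrives, a workaround sketch with the third-party pyreadr package (assuming it fits your environment), which round-trips .rds/.RData files through pandas DataFrames:

```python
import pyreadr

result = pyreadr.read_r("mydata.RData")  # dict of object name -> DataFrame
for name, df in result.items():
    print(name, df.shape)

pyreadr.write_rds("roundtrip.rds", next(iter(result.values())))
```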

I have used the SharePoint List connectors with our SharePoint 2010 (on-prem) instance for some time now. They work great and have become invaluable. Unfortunately, I have been told that these connectors do not work with a cloud instance or an instance that is not on-prem. We need this capability since Microsoft is pushing corporations to move to the cloud, and there is talk that on-prem may not be available past the 2016 version that is coming soon. Many companies, including mine, have either completed or are close to completing a full migration, which has rendered the current SharePoint connectors useless. While this is the most important part, another piece that is missing is a SharePoint Document Library connector (similar to the Amazon S3 Download/Upload tools). Currently I must use the UNC path to my SharePoint folders, and an easier, more reliable way to save files out to OneDrive and SharePoint Online would be very beneficial.
Very nice to be able to extend Alteryx with R programs or CMD execution.  Please, please, please add a connection to Python!
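Until a native Python tool exists, the Run Command tool can already shell out to a script like the sketch below: Alteryx writes records to a CSV, the script transforms them, and Alteryx reads the result back. The field names and the transform are placeholders.

```python
import csv, sys

infile, outfile = sys.argv[1], sys.argv[2]  # set in Run Command configuration
with open(infile, newline="") as fin, open(outfile, "w", newline="") as fout:
    reader = csv.DictReader(fin)
    writer = csv.DictWriter(fout, fieldnames=reader.fieldnames + ["NameUpper"])
    writer.writeheader()
    for row in reader:
        row["NameUpper"] = row.get("Name", "").upper()  # placeholder transform
        writer.writerow(row)
```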