
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

The Summarize tool should have a drag-and-drop facility, with cross-checking and suggestions on the types of aggregation that can be applied based on each field's data type.

 

For example, there could be two different stacks: one for Group By and another for aggregation.

 

We should be able to drag fields to these sections.

 

Now, when we drag a field onto the aggregation stack, a small suggestion list of possible aggregations to choose from would appear based on its data type.

 

And if we define the aggregation manually, a small validation that the chosen aggregation suits the field's data type.
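
To make the data-type rule concrete, here is a rough SQL sketch (the table and field names are hypothetical) of which aggregations fit which data types; the suggestion list and the validation would essentially encode this kind of rule:

    -- The suggestion list would offer only the aggregations valid for each field's type.
    SELECT
        region,                            -- string: Group By, Count, Min, Max
        SUM(sales_amount)  AS total_sales, -- numeric: Sum, Avg, Min, Max, Count
        AVG(sales_amount)  AS avg_sales,
        MIN(order_date)    AS first_order, -- date: Min, Max, Count (not Sum or Avg)
        COUNT(customer_id) AS order_count
    FROM orders
    GROUP BY region;
    -- Something like SUM(region) or AVG(order_date) would fail the proposed
    -- validation, so it would never appear in the suggestion list.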

 

I can provide mock ups if anyone is interested.

Hello all,

 

It would be great if there were an option to specify a SQL statement, or to delete based on a condition, in the Write Data In-DB tool. Currently we have to delete all records even when we only want to delete and append a subset of them. If it at least allowed a "WHERE" clause, that would be very useful. I have a long post going about this requirement at http://community.alteryx.com/t5/Data-Preparation-Blending/Is-there-a-way-to-do-a-delete-statement-in... .
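
For illustration, here is a rough sketch of the kind of statement the Write Data In-DB tool could issue if a delete condition were supported (the table and column names are made up):

    -- Delete only the subset being reloaded, instead of truncating everything...
    DELETE FROM sales_fact
    WHERE  load_month = '2016-01';

    -- ...then append the refreshed subset.
    INSERT INTO sales_fact (load_month, store_id, amount)
    SELECT load_month, store_id, amount
    FROM   staging_sales;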

 

Regards,

Jeeva.

When converting data types in-DB, it would be really helpful if I could change the data type with the "Select In-DB" tool in a similar manner to the "Select" tool. Currently, we have to use the "Formula In-DB" tool in order to create a "CAST" statement.
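
As a rough sketch (the column names are hypothetical), a type picker on the Select In-DB tool could emit the same CAST expressions we currently write by hand in the Formula In-DB tool:

    SELECT
        CAST(order_id   AS INTEGER)        AS order_id,
        CAST(amount     AS DECIMAL(18, 2)) AS amount,
        CAST(order_date AS DATE)           AS order_date
    FROM orders;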

It would be nice if Alteryx had the ability to run a Teradata stored procedure and/or macro, with the ability to accept input parameters. This ability appears to exist for MS SQL Server. It seems odd that I can issue a SQL statement to the database via a pre- or post-processing command on an input or output, but can't call a stored procedure or execute a macro. The only way we seem to be able to call a stored procedure is by creating a Teradata BTEQ script and using the Run Command tool to execute that script. It works, but it's a bit messy and doesn't quite fit the no-coding theme of Alteryx.
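
For reference, this is roughly the Teradata syntax the tool would need to issue; the procedure, macro, and parameter values below are hypothetical:

    CALL sales_db.refresh_summary('2016-01', 'NORTH');  -- stored procedure with input parameters
    EXEC sales_db.monthly_load('2016-01');              -- macro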

I think it would be extremely helpful to have an in-DB Detour so that you could filter a user's information without having to pull it out of the DB and then put it back in for more processing. A time where this would be useful is if you have a large dataset and don't want to pull the entire dataset out of the DB because it would take a long time. This would be applicable for filtering a large dataset by a specific state chosen by the user, or possibly a region. The Detour in the Developer tools actually seems like it would do the job; it just needs to connect to the In-DB tools.

Add in-database tools for SAP HANA.

Please star this idea so we can prioritize the request accordingly.

There are a number of requests for bulk loaders to DBs, and I'm adding MySQL to the list.

 

Really, every DB connection (on-prem and cloud) needs some bulk loader capability added (if it doesn't have one already).
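
For MySQL specifically, a bulk loader would essentially wrap a statement like the one below (the file path, table, and format options are only an example):

    LOAD DATA LOCAL INFILE '/tmp/customers.csv'
    INTO TABLE customers
    FIELDS TERMINATED BY ','
    ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES;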

In-database processing brings large performance benefits on big datasets. It would be great to incorporate Multi-Row and Multi-Field Formulas within the in-database functions for Redshift.
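
In Redshift terms, a Multi-Row Formula would translate to a window function; here is a rough sketch with hypothetical table and column names:

    SELECT
        customer_id,
        order_date,
        amount,
        amount - LAG(amount) OVER (PARTITION BY customer_id
                                   ORDER BY order_date) AS change_vs_prior_row
    FROM orders;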

Preface: I have only used the in-DB tools with Teradata so I am unsure if this applies to other supported databases.

 

When building a fairly sophisticated workflow using in-DB tools, the workflow may sometimes fail because the underlying queries run up against CPU/memory limits. This is most common when doing several joins back to back, as Alteryx sends these as one big query with various nested subqueries. When working with datasets in the hundreds of millions or billions of records, this can be extremely taxing for the DB to run as one huge query. (It is possible to get around this by using an in-DB write-out to a temporary table as an intermediate step in the workflow.)

 

When a routine does hit an in-DB resource limit and the DB kills the query, Alteryx immediately fails the workflow run. Any "temporary" tables Alteryx creates are in reality permanent tables that Alteryx usually just drops at the end of a successful run. If the run does not end successfully because it hit a resource limit, these "temporary" (permanent) tables are not dropped. I only noticed this after building out a workflow and running up against a few resource limits; I then started getting database out-of-space errors. Upon looking into it, I found that all the previously created "temporary" tables were still there, taking up many TBs of space.

 

My proposed solution is for Alteryx's in-DB tools to drop any "temporary" tables they have created when a run ends, regardless of whether the entire module finished successfully.
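
In other words, on any run end the in-DB tools would issue cleanup statements along these lines (the temporary-table names shown are hypothetical, and the exact DROP syntax varies by database):

    DROP TABLE scratch_db.ayx_tmp_20160115_001;
    DROP TABLE scratch_db.ayx_tmp_20160115_002;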

 

 

Thanks,

Ryan

Hi,

 

Carlson Companies is moving to a Vertica environment, and it would be great if that were supported by the in-database tools. That would definitely help expand the use of Alteryx at our company!

 

Thanks,

 

Tyler Mittelstadt

Is it possible to add some color coding to the In-DB tools? I am building out models in-DB and I end up with a sea of navy blue icons. Maybe the colors could generally correspond to the other tools; for example, Summarize would be orange, Formula lime green, etc.

As a business user, it is very difficult to move from Alteryx functions to SQL in-database; I need to learn a whole new language.

 

In the short term, Alteryx should provide a simple function reference, as similar as possible to the Formula tool's, for building formulas in the in-database tools.

 

Longer term, I'd like a parser from Alteryx formulae to SQL, so I can just write my favourite Alteryx formula (or a subset thereof) and Alteryx handles the conversion to SQL.
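
For example, an Alteryx expression like IIF(IsNull([Sales]), 0, [Sales] * 1.1) could be translated automatically into SQL along these lines (the table and field names are hypothetical):

    SELECT
        CASE WHEN sales IS NULL THEN 0
             ELSE sales * 1.1
        END AS adjusted_sales
    FROM orders;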
