
Alteryx Designer Ideas

Share your Designer product ideas - we're listening!

1. Review our submission guidelines & status definitions before getting started.

2. Search the community for a solution or existing idea before posting.

3. Vote by clicking the star in the top left corner of an idea you support.

4. Submit a new idea to suggest a product enhancement or new feature.


Suggest an idea

My specific use case relates to writing to AWS, but I'm sure there are many other use cases for federated user session token support.

 

Specifically, when using the S3 Upload tool or Athena Bulk Write (via the Simba Athena ODBC driver), the configuration works with an IAM user, access key, and secret access key; but for a federated user via Okta there is no option to enter the session token, so authentication fails.

Alteryx desktop should support federated users' session tokens.
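
For illustration, here is roughly what the missing field corresponds to: a minimal boto3 sketch, assuming hypothetical temporary credentials returned by an Okta/STS federated login (all values are placeholders).

```python
import boto3

# Temporary credentials from a federated (Okta/STS) login come in threes.
# Unlike long-lived IAM user keys, the key/secret pair is only valid together
# with the session token -- the credential the tool currently has no field for.
session = boto3.Session(
    aws_access_key_id="ASIA...",           # temporary access key (placeholder)
    aws_secret_access_key="wJalr...",      # temporary secret key (placeholder)
    aws_session_token="FwoGZXIvYXdzE...",  # the missing third credential
)
session.client("s3").upload_file("local.csv", "my-bucket", "local.csv")
```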

Now that we have a Snowflake Bulk Loader option, it would be great to utilize the built-in Snowflake internal staging.  This eliminates the need for an end-user to have the technical know-how or access to IT resources to utilize a separate S3 bucket and generally reduces friction in the process. 

 

There was pretty widespread support in the original Bulk Load thread: https://community.alteryx.com/t5/Alteryx-Designer-Ideas/Snowflake-Bulk-Loader/idi-p/105291/page/2#co...
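
For what it's worth, the internal-stage path is only a PUT plus a COPY INTO; here is a rough sketch of what the bulk loader could do behind the scenes (connection values are placeholders):

```python
import snowflake.connector  # pip install snowflake-connector-python

# Stage the file in the table's built-in internal stage, then COPY it in.
# No separate S3 bucket needed -- @%sales is the stage Snowflake provides per table.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",  # placeholders
    warehouse="my_wh", database="my_db", schema="public",
)
cur = conn.cursor()
cur.execute("PUT file:///tmp/sales.csv @%sales AUTO_COMPRESS=TRUE")
cur.execute("COPY INTO sales FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
conn.close()
```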

Only .csv (and .json, etc.) is provided, but not .xlsx.

Let us know when this can be added to Alteryx

Thanks

 

Category: Input Output

The configuration window for the Browse tool already shows Min, Max, Average, StdDev, etc. for numeric fields. If it included Sum as well, it would be very useful for tracking a control total for a given field through a workflow.

Given the number of users who rely on the Publish to Tableau Server macros to automate workflows into Tableau, I think it's about time we had a native tool that publishes to Tableau, instead of the rather painful exercise of figuring out which version of the macro we are using and which version of Tableau Server we are publishing to. The current process is inefficient and frustrating when the server changes on either the Tableau or Alteryx side.

As a best practice, I'd like to automagically change any drive mapping to UNC when saving my workflows.  This applies to both local and gallery saves.
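
As a sketch of the mechanics (assuming a Windows machine with pywin32 installed), the drive-to-UNC resolution is a single API call:

```python
import win32wnet  # pip install pywin32

def to_unc(path: str) -> str:
    """Resolve a mapped-drive path (e.g. H:\\wf\\x.yxmd) to its UNC form."""
    try:
        # Info level 1 = UNIVERSAL_NAME_INFO: returns the \\server\share\... form.
        return win32wnet.WNetGetUniversalName(path, 1)
    except win32wnet.error:
        return path  # already UNC, or a local (non-mapped) drive

print(to_unc(r"H:\workflows\sales.yxmd"))  # e.g. \\fileserver\dept\workflows\sales.yxmd
```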

 

Cheers,

 

Mark

I've had several of my users complaining about the Visual Query Builder view after the last couple of releases. When you maximize the window, half of the screen is taken up by blank gray space, there is no way to adjust it, and it's very difficult for those who do not know SQL to build their queries in the remaining small white space. We need to be able to resize the gray space.

 

 

[screenshot: visual query builder.jpg]

The idea is to store credentials (login/password) in a "credential alias".

 

Those credential aliases could then be used in:

- traditional aliases/connections

- in-database aliases/connections

- HDFS aliases/connections

- the API

- user aliases for connected Controllers/Gallery

- etc.

 

The point is that I would only have to change the credentials once for all connection types. (On Hive, I have an in-DB alias, a traditional alias, and even an HDFS alias all using exactly the same credentials, and I have to update each one manually!)
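
Purely to illustrate the shape of the idea (all names are hypothetical), the alias layer amounts to one level of indirection:

```python
from dataclasses import dataclass

# Hypothetical sketch: one credential alias shared by every connection type,
# so a password change happens in exactly one place.
@dataclass
class CredentialAlias:
    login: str
    password: str

CREDENTIALS = {"hive_svc": CredentialAlias("svc_hive", "********")}

# Each connection references the alias instead of embedding credentials.
CONNECTIONS = {
    "hive_odbc": {"type": "traditional", "credential": "hive_svc"},
    "hive_indb": {"type": "in-database", "credential": "hive_svc"},
    "hive_hdfs": {"type": "hdfs",        "credential": "hive_svc"},
}

def resolve(connection_name: str) -> CredentialAlias:
    return CREDENTIALS[CONNECTIONS[connection_name]["credential"]]
```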

 

It would be really helpful to have a bulk load 'output' tool to Snowflake.  This would be functionality similar to what is available with the Redshift bulk loader.

Currently it takes a really long time to insert via ODBC, or it requires you to write a custom solution to get this to work.

 

This article explains the general steps but some of the manual steps outlined would have to be automated to arrive at a solution that is entirely encapsulated within a workflow.

http://insightsthroughdata.com/how-to-load-data-in-bulk-to-snowflake-with-alteryx/
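
For reference, the S3-based path the article describes boils down to a single bulk COPY once the file is staged; a sketch (stage, table, and credentials are placeholders):

```python
import snowflake.connector  # pip install snowflake-connector-python

# The file is already in S3; one COPY INTO loads it in bulk instead of
# row-by-row ODBC inserts. All identifiers below are placeholders.
conn = snowflake.connector.connect(account="my_account", user="my_user", password="...")
conn.cursor().execute("""
    COPY INTO my_db.public.sales
    FROM 's3://my-bucket/exports/'
    CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...')
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
""")
conn.close()
```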

Category: Input Output

In order to take full advantage of Alteryx's spatial features, it would be great to be able to use OpenStreetMap extract files natively.

While there are some sources available in SHP format, they tend to be heavily cut down in detail, whereas the native OSM and PBF formats carry the full feature set.
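
To give a feel for what native access buys, here is a minimal sketch using the pyosmium library (the file name and tag filter are just examples):

```python
import osmium  # pip install osmium

# Read a native PBF extract directly, full tag detail intact.
class CafeCounter(osmium.SimpleHandler):
    def __init__(self):
        super().__init__()
        self.count = 0

    def node(self, n):
        if n.tags.get("amenity") == "cafe":
            self.count += 1

handler = CafeCounter()
handler.apply_file("extract.osm.pbf")  # hypothetical extract file
print(handler.count, "cafes found")
```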

 

As it's an open format, licensing shouldn't be an issue, and it may pave the way to new features.

 

What do you think?

Statistics are used by a lot of databases to improve query speed (Hive, Vertica, etc.). It may be interesting to have an option on the Write Data In-DB or Data Stream In tools to compute those statistics (something like a check box in the tool configuration).

 

Example on Hive: ANALYZE TABLE {table} COMPUTE STATISTICS; ANALYZE TABLE {table} COMPUTE STATISTICS FOR COLUMNS;

Hi All,

 

Data security is very important nowadays, yet there is no encryption option for files output from Alteryx Designer.

Imagine: anyone who has Alteryx Designer can open any .yxdb, even one containing sensitive data.

 

I suggest adding an encryption option to the Output Data tool.
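
Until then, here is a rough sketch of the kind of post-processing workaround this would replace, using the cryptography package (paths and key handling are illustrative only):

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Encrypt the output file after the workflow writes it.
key = Fernet.generate_key()  # in practice, store/retrieve this from a vault
with open("output.yxdb", "rb") as f:
    ciphertext = Fernet(key).encrypt(f.read())
with open("output.yxdb.enc", "wb") as f:
    f.write(ciphertext)
```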

 

Best Regards,

Samuel

Category: Input Output

The Directory tool today retrieves a lot of information about a file; I must say I appreciate getting the size and last write time so easily.

But why not the owner? I have developed a macro with a PowerShell script to do just that, but what a nightmare for such a small piece of information.
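
For the record, the lookup itself is small (a sketch assuming Windows and pywin32), which is why it feels like it belongs in the tool:

```python
import win32security  # pip install pywin32

def file_owner(path: str) -> str:
    """Return DOMAIN\\user for the file's owner via the Windows security API."""
    sd = win32security.GetFileSecurity(path, win32security.OWNER_SECURITY_INFORMATION)
    sid = sd.GetSecurityDescriptorOwner()
    name, domain, _kind = win32security.LookupAccountSid(None, sid)
    return f"{domain}\\{name}"

print(file_owner(r"C:\data\sales.csv"))  # hypothetical path
```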

As of today, you can pass SQL from:

- the Input tool

- the Output tool

- the Connect In-DB tool

The user interface is very limited, and so is the kind of query you can pass; well, it's not very user-friendly. This generates a lot of frustration among users.

What do I suggest?
1/ A direct "Query Builder" button, without having to open a new workflow, drop an Input box, and then fight with the pre-SQL tool to build a query.

2/ Basically the same features as DBeaver (https://dbeaver.io), DbVisualizer (https://www.dbvis.com/), or SQuirreL SQL (http://squirrel-sql.sourceforge.net/):
  - the ability to pass any SQL code I want (such as UPDATE, CREATE, TRUNCATE, etc.) when I come from the button, and "protected" SQL when I am in a workflow
  - autocompletion
  - color coding (the idea is not new)

3/ A "Free SQL Query" box that I can branch onto an in-DB or standard workflow to pass any SQL query. The output would be the same as the input, just as with a Block Until Done.

Hello,

 

I work for a company with circa 250K employees. We are in the process of shifting all documents over to OneDrive, and I've noticed that when an Alteryx workflow uses inputs stored on OneDrive, the connection can be very intermittent. I use UNC file naming for my Input/Directory tools, but more often than not I need to run a VBA script that accesses OneDrive before the Alteryx tools will connect.

 

There are a couple of posts on this community about this, but nothing on the ideas board. I believe the SharePoint connector is being updated for v11, but nothing for OneDrive.

 

I'd like better integration with OneDrive for Business and SharePoint Online, please.

 

Thanks,

John.

 

Idea: Allow the user to set the data type, including character field width, in the Text Input tool.

 

The Text Input tool currently auto-senses the type and width of each field. However, this sometimes restricts the usage of the data downstream.

 

Examples:

1 - I often run into the situation where I've copied some data from a Browse tool and pasted it as the input to a new workflow, then turned that workflow into a macro. But then I run into an issue where the data coming into the macro is wider than the original field width in the Text Input tool. This causes problems.

 

2 - The tool senses that a field containing ZIP codes should be numeric and converts the data. This corrupts the data and forces me to insert a Select/Formula tool combo to pad the zeros back onto the left.
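
The ZIP-code case in a nutshell: a toy illustration of the padding the Formula tool has to re-apply.

```python
# Auto-sensing as numeric drops the leading zeros; the workaround re-pads them.
zips_as_numbers = [2134, 60614, 501]
zips_fixed = [str(z).zfill(5) for z in zips_as_numbers]
print(zips_fixed)  # ['02134', '60614', '00501']
```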

It would often be very useful to have the ability to search for a value in a Browse tool.

 

At the moment I don't think there's an easy way to manually trace data through a workflow.

 

For example, you have created a workflow with various Joins, Filters, etc., and notice that the final output is missing data for "ABC Limited". The only way to find out at which step "ABC Limited" dropped out is to add ten Filter tools branching off before and after each step in the workflow's logic, then re-run the workflow (which might take 5-10 minutes) to see where "ABC Limited" went. You fix the problem ("ABC Ltd" didn't join to "ABC Limited"), but now you also want to check for "XYZ Limited", so you have to manually edit all ten Filter tools. It seems you have fixed the problem, but now your workflow is a mess of ten Filter tools.

 

Alternatively, you could copy and paste the data from every Browse tool into an Excel workbook and use its search function instead, but that's obviously a cumbersome and unhelpful process, particularly as the Excel sheet has to be remade with every run of the workflow.
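
That manual trace amounts to something like the following (a sketch assuming the intermediate outputs were dumped to CSV; file names are made up):

```python
import pandas as pd

# Search every intermediate output for one value to see where it drops out.
steps = {name: pd.read_csv(f"{name}.csv") for name in ["after_join", "after_filter"]}
needle = "ABC limited"
for name, df in steps.items():
    mask = df.apply(lambda row: row.astype(str).str.contains(needle, case=False).any(), axis=1)
    print(f"{name}: {mask.sum()} matching rows")
```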

 

You could also add Sort tools before each Browse tool, but that is still slow and doesn't help in cases where "ABC Ltd" is matching to "The ABC Co Ltd".

 

Perhaps it would be much easier to just have a small search box in every browse tool?

 

Or is there a feature that I'm not aware of that makes this process of quality checking your workflow easier already?

 

 

It would be great if there were an option in the configuration of the Output tool to create the output directory if it doesn't already exist. Maybe also an option to append instead of overwrite for all file types?
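
The directory half of this is tiny, which is part of the appeal; roughly what the option would do before writing (the path is illustrative):

```python
import os

out_path = r"C:\reports\2024\Q3\sales.csv"  # hypothetical output path
os.makedirs(os.path.dirname(out_path), exist_ok=True)  # create the folder if missing
```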

Please enhance the Input tool with a feature to test whether the file is there, and another to let the workflow pause for a definable period if the input file is locked by another user, then retry opening it. The pause time-frame should be definable in seconds, and the number of retry iterations should also be definable, so you can limit how many attempts to open the file it makes.
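
In pseudocode terms, the requested retry behavior would be something like this sketch (parameter names are mine, not Alteryx's):

```python
import os
import time

def open_when_ready(path: str, max_attempts: int = 5, pause_seconds: int = 30):
    """Wait for a file that is missing or locked, up to a retry limit."""
    for _ in range(max_attempts):
        if os.path.exists(path):
            try:
                return open(path, "rb")
            except PermissionError:  # locked by another user on Windows
                pass
        time.sleep(pause_seconds)
    return None  # caller decides: end quietly or raise an error
```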

 

File presence should be something we could use to control workflow processing.  

 

A use case would be a process that runs periodically and checks whether a file is there; if so, it opens and processes it. If the file is not there, it goes to sleep for a definable period before trying again, or simply ends the workflow without attempting to run any downstream tools that might otherwise throw errors trying to process a null stream.

 

An extension of this idea would be a separate tool that could evaluate a condition (a null stream, a field value, or a file-not-found indicator) and terminate the process without raising an error, or perhaps be configurable so you could choose whether or not an error is raised.

 

Using this latter idea, the enhanced Input tool can pass a value downstream or generate a null data stream to the next tool. That next tool can then evaluate a condition, much like a Filter tool (a null stream, a file-not-found indicator, or some other condition), and terminate processing per its configuration, either with or without a failure indicated, according to the user's wishes. I have had times when a file was not there and I just wanted the workflow to stop without throwing errors; other times I wanted it to error out so I would investigate. In other scenarios, my data goes through a filter or two, no rows pass the last filter, and the downstream tools still run and generally fail because they have no data to act on. I don't want that; it may be perfectly valid that on a Sunday or holiday no data passes the filters.

 

Having meandered through this, I'll sum up: the ideal would be to enhance the Input tool to test for file presence and pass that information to another tool that can evaluate it and control the workflow run accordingly. As a separate tool, though, it could be applied to a wider variety of scenarios and test a broader scope of conditions to decide whether to proceed or terminate the workflow.

 

As it stands now, only the file Input tool can be used to pull data from Google BigQuery tables. The issue here is that the data is streamed and processed locally, meaning the power of BigQuery's processing isn't actually being leveraged.

Adding BigQuery as an In-Database connection option would appeal to a wide audience. BigQuery is also compliant with the SQL 2011 standard, so this may make for an even easier integration.
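
To make the distinction concrete: with In-DB support, the aggregation below would run inside BigQuery, and only the small result would stream back. A sketch using the google-cloud-bigquery client (the table is a placeholder):

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # assumes application-default credentials
sql = """
    SELECT region, SUM(amount) AS total
    FROM `my_project.sales.orders`   -- placeholder table
    GROUP BY region
"""
# The heavy lifting happens server-side; only the grouped result comes back.
for row in client.query(sql).result():
    print(row.region, row.total)
```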
