
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

It would be awesome if there were a Cross Tab In-DB option, because right now I have to stream out millions of records just to build a cross tab.

Please could you enhance the Alteryx Download tool to support SFTP connections with private key authentication as well. This is not currently supported, and all of our SFTP use cases use private keys.
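For reference, here is a minimal sketch of what private key authentication over SFTP looks like (using Python's paramiko library; the host, account and file paths are placeholders). This is the behaviour the Download tool would need to support natively:

import paramiko

# Authenticate with a private key instead of a password (all names are placeholders).
key = paramiko.RSAKey.from_private_key_file(r"C:\keys\id_rsa")
transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="svc_alteryx", pkey=key)

sftp = paramiko.SFTPClient.from_transport(transport)
sftp.get("/outbound/extract.csv", r"C:\temp\extract.csv")   # download a file
sftp.close()
transport.close()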

I noticed through the ODBC driver log that Alteryx ignores the kind of database I specify. It tests every single kind of database to find the right one and THEN runs the queries to get the metadata info.

 

Here is an example. I have chosen a Hive In-DB connection. If I read the Simba logs, I can find these lines:

Mar 01 11:37:21.318 INFO  5264 HardyDataEngine::Prepare: Incoming SQL: select USER(), APPLICATION_ID() from system.iota

Mar 01 11:37:22.863 INFO  5264 HardyDataEngine::Prepare: Incoming SQL: select USER as USER_NAME from SYSIBM.SYSDUMMY1

Mar 01 11:37:23.454 INFO  5264 HardyDataEngine::Prepare: Incoming SQL: select * from rdb$relations

Mar 01 11:37:23.546 INFO  5264 HardyDataEngine::Prepare: Incoming SQL: select first 1 dbinfo('version', 'full') from systables

Mar 01 11:37:23.707 INFO  5264 HardyDataEngine::Prepare: Incoming SQL: select #01/01/01# as AccessDate

Mar 01 11:37:23.868 INFO  5264 HardyDataEngine::Prepare: Incoming SQL: exec sp_server_info 1

Mar 01 11:37:24.093 INFO  5264 HardyDataEngine::Prepare: Incoming SQL: select top (0) * from INFORMATION_SCHEMA.INDEXES

Mar 01 11:37:24.219 INFO  5264 HardyDataEngine::Prepare: Incoming SQL: SELECT  SERVERPROPERTY('edition')

Mar 01 11:37:24.423 INFO  5264 HardyDataEngine::Prepare: Incoming SQL: select DATABASE() as `database`, VERSION() as `version`

Mar 01 11:37:24.635 INFO  5264 HardyDataEngine::Prepare: Incoming SQL: select * from sys.V_$VERSION at where RowNum<2

Mar 01 11:37:25.230 INFO  5264 HardyDataEngine::Prepare: Incoming SQL: select cast(version() as char(10)), (select 1 from pg_catalog.pg_class) as t

Mar 01 11:37:25.415 INFO  5264 HardyDataEngine::Prepare: Incoming SQL: select NAME from sqlite_master

Mar 01 11:37:25.756 INFO  5264 HardyDataEngine::Prepare: Incoming SQL: select xp_msver('CompanyName')

Mar 01 11:37:26.156 INFO  5264 HardyDataEngine::Prepare: Incoming SQL: select @@version

Mar 01 11:37:26.376 INFO  5264 HardyDataEngine::Prepare: Incoming SQL: select * from dbc.dbcinfo

Mar 01 11:37:26.522 INFO  5264 HardyDataEngine::Prepare: Incoming SQL: SELECT @@VERSION;

 

 

 

 

I can understand that when Alteryx doesn't know the kind of database it tries everything (e.g. in the in-memory visual query builder), but here I have selected the Hive database and I lose more than 5 seconds for nothing.
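A rough sketch of the behaviour I would expect (purely illustrative Python; the dialect labels are my own, with the probe queries taken from the log above): once the database type has been selected, only that dialect's metadata probe should run.

# Illustrative only: dispatch a single dialect-specific metadata probe instead of trying them all.
METADATA_PROBES = {
    "sql_server": "SELECT SERVERPROPERTY('edition')",
    "mysql":      "select DATABASE() as `database`, VERSION() as `version`",
    "teradata":   "select * from dbc.dbcinfo",
    # ... one entry per supported dialect (Hive, Oracle, Postgres, ...)
}

def probe_metadata(connection, db_type):
    # The connection type is already known, so run only that dialect's probe
    # instead of looping over every query shown in the log above.
    return connection.execute(METADATA_PROBES[db_type])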

When creating a connection using DCM (example being ODBC for SQL) - the process requires an ODBC Data Source Name (see screenshot 1 below).

However, when you use the Alias Manager (another way to make database connections), it does allow for DSN-free connections, which are essential for large enterprises (see screenshot 2 below).

 

NOTE: the connection manager screens do have another option - Quick Connect - which seems to allow for DSN-free connections, but this is non-intuitive, and you're asked to type in the name of the driver yourself, which seems to be an obvious failure point (especially since the list of all installed drivers can be read straight from the registry).
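To illustrate both points, here is a rough Python sketch (the driver, server and database names are placeholders): the installed ODBC drivers really can be enumerated straight from the registry, and a DSN-free connection only needs the driver name plus the server details in the connection string.

import winreg
import pyodbc

# List installed ODBC drivers straight from the registry.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\ODBC\ODBCINST.INI\ODBC Drivers") as key:
    drivers = [winreg.EnumValue(key, i)[0] for i in range(winreg.QueryInfoKey(key)[1])]
print(drivers)   # pyodbc.drivers() returns the same list without touching the registry

# A DSN-free connection: everything a DSN would hold goes into the connection string.
conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=sqlprod01.company.com;"
    "Database=Sales;"
    "Trusted_Connection=yes;"
)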

 

Please could we change DCM to use the same interfaces / concepts as the alias screens so that all DCM connections can easily be created without requiring an ODBC DSN; and so that DSN-free connections are the default mode of operation?

 

 

 

Screenshot 1: DCM connection:

SeanAdams_0-1685360285460.png

 

Screenshot 2: alias manager connection:

SeanAdams_2-1685360473900.png

 

cc: @wesley-siu  @_PavelP @ToddTarney 

 

 

Hi Alteryx community,

 

It would be really nice to have V_String/V_WString with the maximum character size as the default for text columns.

fmvizcaino_0-1587008811932.png

I have lost count of how many times I found that the error was related to string truncation caused by the string size limit in the Text Input tool.

 

Thumbs-up those who lost their minds after discovering that the error was that! 😄

At our organization we are required to change our passwords every few months, forcing a change to my Tableau Server password. How does this relate to Alteryx? Well, every 90 days I have to change my password in the "Publish to Tableau Server" tool for all of my workflows. This is quite a cumbersome process that could be eliminated with AD (Active Directory) authentication.

 

If you dislike manually changing your password for each workflow that uses this tool then "star" this post!

 

 

 

Hello all,

I really love the DCM feature present in the last two releases. However, I have noticed the Generic ODBC Connection is missing:

Classic Connection Manager :

simonaubert_bd_0-1656763307802.png

 

Data Connection Manager :

simonaubert_bd_1-1656763499778.png

simonaubert_bd_2-1656763537860.png

 

simonaubert_bd_3-1656763559639.png

simonaubert_bd_4-1656763577623.png



Best regards,

Simon

 

In order to make the connections between Alteryx and Snowflake even more secure, we would like an easier way to connect to Snowflake with OAuth.

 

The connections to Snowflake via OAuth are very similar to the connections Alteryx already makes to O365 applications. They require:

  • Tenant URL 
  • Client ID 
  • Client Secret 

 

  1. Get the authorization code from the Snowflake authorization endpoint. 
  2. Give access consent (a browser pop-up will appear). 
  3. With the authorization code, the client ID and the client secret, make a call to retrieve the refresh token and TTL information for the tokens. 
  4. Get a new access token every time it expires. 

 

With this, an automated workflow using OAuth between Alteryx and Snowflake will be possible.
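As a rough illustration of step 4 only (not official Alteryx or Snowflake code; the endpoint path is assumed and all credentials are placeholders, following the standard OAuth2 refresh-token grant):

import requests

ACCOUNT_URL   = "https://<account>.snowflakecomputing.com"   # placeholder tenant URL
CLIENT_ID     = "<client id>"
CLIENT_SECRET = "<client secret>"
REFRESH_TOKEN = "<refresh token from step 3>"

resp = requests.post(
    f"{ACCOUNT_URL}/oauth/token-request",                    # assumed Snowflake OAuth token endpoint
    data={"grant_type": "refresh_token", "refresh_token": REFRESH_TOKEN},
    auth=(CLIENT_ID, CLIENT_SECRET),                          # client credentials via HTTP Basic auth
)
resp.raise_for_status()
access_token = resp.json()["access_token"]                    # valid until it expires, then refresh again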

 

You can find a more detailed explanation in the attached document.

It would be helpful to be able to filter within the results window of a Browse tool for all "Not OK" records (records with leading/trailing spaces, embedded newlines, etc.) I can already filter for null and empty values, but this would be helpful for cleaning up data. I want to see the "dirty" data before taking out leading/trailing spaces or embedded new lines to see if there is something I'm missing in the data that needs to be further parsed or modified.
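For context, this is roughly the check I have in mind, as an illustrative Python sketch (the sample values are made up): a record would count as "Not OK" if it has leading/trailing whitespace or an embedded newline.

import re

NOT_OK = re.compile(r"^\s|\s$|[\r\n]")           # leading/trailing whitespace or embedded newline

values = ["clean", " leading space", "trailing ", "embedded\nnewline"]
dirty = [v for v in values if NOT_OK.search(v)]
print(dirty)                                      # the records I would like the Browse filter to surface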

Hello,

As I mentioned in this previous idea : https://community.alteryx.com/t5/Alteryx-Designer-Ideas/Generic-In-database-connection-please-stop-i...

 

Field mapping in the generic In-DB connection is based on Microsoft SQL Server. Given how specific SQL Server field types are, I would like that changed so that at least another database can be used. Without that, this feature makes no sense at all.

Best regards,

Simon

95% of the time I use the Directory Tool, it is only to access the FullPath field, so I immediately add a Select tool to deselect the other attributes the tool returns.

Is there any chance to add a checkbox to only retrieve FullPath?

Aguisande_0-1645640756215.png

 

I couldn't find a previous idea on this, but let me know if it already exists.

It would be great to have an option in the Output Data tool to write the workflow name to the Info properties of Excel outputs.

 

Maybe something like this:

Excel file info.PNG

 

So that whenever you open an Excel file you always have a way of finding the name of the workflow that created the file.

Excel file info 2.PNG

 

This would make it so much easier as I often have to share Excel files with colleagues and customers and then need a way of tracking them back to workflows weeks or months later.
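As a sketch of what I mean (using Python's openpyxl purely to illustrate; the file and workflow names are made up), the workflow name could be written into one of the workbook's built-in document properties, which appear under File > Info in Excel:

from openpyxl import load_workbook

wb = load_workbook(r"C:\Reports\customer_report.xlsx")
wb.properties.description = "Created by Alteryx workflow: Monthly_Customer_Report.yxmd"   # visible in the file's Info properties
wb.save(r"C:\Reports\customer_report.xlsx")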

Alteryx has the ability to connect to data sources using fat clients and ODBC but not JDBC.  If the ability to use JDBC could be added to the product it could remove the need to install fat clients.

Hi GUI Gang

 

At the moment, I have a lovely formatted XLS with corporate branding, logos, filled cells, borders etc. The data from the Alteryx output needs to start in cell B6. I have tried pointing the output tools at this named range, but Alteryx destroys all the Excel formatting in the data block.

 

As a workaround on the forums, many Alteryx users pump out to a hidden "Output" tab and then code =OutputA1 in the formatted sheet. This looks messy to the users, who then go hunting for the hidden tab. Personally I end up pumping the workflow out to a temporary CSV file, then opening that in Excel, selecting all, and pasting values into the pretty Excel file.

 

This is fine for one file, but I need to split the output report block by a country field and do this hundreds of times for each month end.

 

Please can we have an output tool that does the same as my workaround: outputs directly from a workflow to a range in Excel without destroying the workbook's formatting.
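Essentially the tool would need to do what this rough openpyxl sketch does (file, sheet and data are placeholders, and openpyxl itself can still drop some workbook elements, so this is only to show the idea): open the existing formatted workbook, write the values starting at B6, and leave everything else alone.

from openpyxl import load_workbook

rows = [["Country", "Revenue"], ["FR", 1200], ["DE", 980]]      # the data block from the workflow

wb = load_workbook(r"C:\Reports\branded_template.xlsx")          # existing workbook with branding/formatting
ws = wb["Report"]
for r, row in enumerate(rows, start=6):                          # start in row 6 ...
    for c, value in enumerate(row, start=2):                     # ... column B
        ws.cell(row=r, column=c, value=value)
wb.save(r"C:\Reports\branded_output.xlsx")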

 

Jay

Extend the MongoDB tool to work with Atlas MongoDB instances.

It would be great if there was an option in the configuration of the Output Data tool to create the output directory if it doesn't already exist. Maybe also an option to append instead of overwrite for all file types?
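For the directory part, the tool would only need to do something like this before writing (illustrative Python; the path is a placeholder):

import os

out_path = r"C:\Exports\2024\January\sales.csv"          # target file chosen in the Output Data tool
os.makedirs(os.path.dirname(out_path), exist_ok=True)    # create the folder tree if it doesn't exist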

When using the Output Data tool, it would save me and my cluttered organizational skills a lot of effort if the writing workflow were saved as part of the yxdb metadata.

I've often had to search to find the workflow which created a yxdb. I tend to use naming conventions to help me, but it would be easier if the workflow file and/or path were easily found.

cheers,

 

 mark

This has probably been mentioned before, but in case it hasn't....

 

Right now, if the Dynamic Input tool skips a file (which it often does!) it just appears as a warning and processing continues. Whilst it is still useful to be able to continue processing, could an option be built into the tool to 'error if files are skipped'?

 

Right now it is easy to miss that this is happening, and in production / on Server you may want the process to be stopped.

 

Thanks,

 

Andy 

 

 

The Directory Tool today retrieves a lot of information about a file. I must say I appreciate easily getting the size and the last write time.

But why not the owner? I have developed a macro with a PowerShell script to do that, but what a nightmare for such a small piece of information.
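For reference, this is roughly what my workaround does, translated here into Python as a sketch (it needs pywin32 on Windows; the path is a placeholder). It would be great to have this returned natively by the Directory Tool instead.

import win32security                                             # pywin32

path = r"C:\data\input.csv"
sd = win32security.GetFileSecurity(path, win32security.OWNER_SECURITY_INFORMATION)
owner_sid = sd.GetSecurityDescriptorOwner()
name, domain, _ = win32security.LookupAccountSid(None, owner_sid)
print(f"{domain}\\{name}")                                       # e.g. MYDOMAIN\jdoe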

Currently, when one uses the Google BigQuery Output tool, the only options are to create a table or append data to an existing table. It would be more useful if there were a process to replace all data in the table rather than appending. Having the option to overwrite an existing table in Google BigQuery would be optimal.
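The underlying BigQuery client already supports this via a write disposition, which is roughly what the tool would need to expose. A sketch with the google-cloud-bigquery Python client (the project/dataset/table names and the dataframe are placeholders):

from google.cloud import bigquery
import pandas as pd

df = pd.DataFrame({"id": [1, 2]})                                # placeholder data to load

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,  # replace all rows in the target table
)
job = client.load_table_from_dataframe(df, "my-project.my_dataset.my_table", job_config=job_config)
job.result()                                                     # wait for the load to finish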
