
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

In the test environment of our POC for an Alteryx - SharePoint connection,
the connection request from the SharePoint Input tool (Designer) to the M365 server results in an authentication failure with a proxy server.
After trial and error, we found that the authentication fails because the proxy server for M365 does not accept HTTP requests without a "Host" header.
(please see attachment for details)

 

Alteryx Support says that the SharePoint tools do not set the Host header in HTTP requests on any Designer version.
We use Designer 2022.3.1.430 and SharePoint Tool 2.5.0.
Alteryx Support reproduced the same issue on Designer 2024.1.1.93 with SharePoint Tool 2.6.3 for Designer 2024,
and suggested that I submit an idea on the community site as an enhancement request.
 
So I am submitting this as a new idea:
Enhance the SharePoint tools to send HTTP requests with a Host header
(preferably by sending HTTP/1.1, where the Host header is mandatory).
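
To illustrate the request, here is a minimal sketch (with a hypothetical tenant URL) using Python's http.client, which speaks HTTP/1.1 and therefore adds the Host header automatically - exactly the behavior we need from the SharePoint tools:

```python
# Minimal sketch with a hypothetical tenant URL. HTTP/1.1 makes the Host header
# mandatory, and Python's http.client adds it automatically, so a proxy that
# rejects Host-less requests accepts this one.
import http.client

conn = http.client.HTTPSConnection("contoso.sharepoint.com")  # hypothetical tenant
conn.request("GET", "/_api/web")  # sends "Host: contoso.sharepoint.com" implicitly
resp = conn.getresponse()
print(resp.status, resp.reason)
```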

 

At the same time, I have questions for the community:
  • Has anyone experienced the same error?
  • If yes, is there any workaround to connect to M365?
If possible, I would like to keep the current security setting, as changing it takes time.
 
Thank you in advance for your support!

Hello,

Here is a proposal about an issue that I face frequently at work.

 

Problem Statement -

Workflows that are scheduled or run manually on Server frequently fail because the Excel input file is open: another user is in it, someone forgot to close it before going out of office, or some other reason.

 

Proposed Solution -

Give the Input/Dynamic Input tools the ability to read Excel files even while they are open, so that workflows do not fail. This would have a huge impact in terms of time savings and would remove the need for regular monitoring of scheduled workflows.
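
In the meantime, here is a minimal sketch of a workaround in the Python tool, assuming the OS still grants shared read access to the open file (paths are hypothetical): copy the workbook to a temp location and read the copy, so a lock held by an Excel user does not fail the workflow.

```python
# Workaround sketch: copy the open workbook to a temp file and read the copy;
# the copy only needs shared read access, not a write lock on the source.
import os
import shutil
import tempfile

import pandas as pd

src = r"\\share\reports\input.xlsx"  # hypothetical source file
tmp = os.path.join(tempfile.gettempdir(), "input_copy.xlsx")
shutil.copy2(src, tmp)
df = pd.read_excel(tmp)
```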

 

 

Whenever I overwrite an Excel sheet with data of the same format, just different values (e.g. Q2 data versus Q1 data), all of my Pivot Tables break and I have to recreate them manually, even though the schema didn't change. Somehow the Table is deleted and replaced with a completely different Table, which is what causes the Pivot Tables to break. The only way to avoid this is to set the cell range manually, but who has time for that? The only workaround I have found is to copy all the values by hand and paste them over the existing data, which gets more and more inefficient the more sheets you are working with.
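
For what it's worth, here is a minimal sketch of that copy-values-in-place approach scripted in a Python tool, assuming openpyxl (which retains existing Table and Pivot Table definitions when saving) and hypothetical file names: writing new values into the existing cells instead of recreating the sheet keeps the Table, so Pivot Tables built on it keep working.

```python
# Sketch: write new values over the existing cells so the sheet and its Table
# object are updated in place, not deleted and recreated.
import pandas as pd
from openpyxl import load_workbook

df = pd.read_csv("q2_data.csv")            # hypothetical new data, same schema
wb = load_workbook("report.xlsx")          # hypothetical workbook with Pivot Tables
ws = wb["Data"]
for r, row in enumerate(df.itertuples(index=False), start=2):  # row 1 = headers
    for c, value in enumerate(row, start=1):
        ws.cell(row=r, column=c, value=value)
wb.save("report.xlsx")
```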


The management of connections, especially in a collaborative environment, is not cohesive or intuitive.

 

  1. When configuring an Input tool for a Server/Gallery connection, hunting for the correct connection in a long list is quite frustrating. There is no search, no sort, and the list of connections does not sort in any logical order by default.
    SUGGEST:
    • Sort the list of Server connections alphabetically by default
    • Give the ability to search and sort
  2. Add additional connection metadata
    • Primary Owner (add a metadata element to the Curator screen & surface it in the connection list)
    • Secondary Owner (add a metadata element to the Curator screen & surface it in the connection list)
    • Connection String (surface the server name and login name in the connection list, omitting the password)
  3. Rethink the concepts of RCM, Gallery connections, and the external-file method of storing credentials for In-DB connections.
    • Make one overarching, cohesive method of storing and sharing credentials across the platform.
    • Enable Artisans to create connections
    • Enable Artisans to share connections
    • Retire the concept of external files to store credentials, as is currently used with In-DB connections

Hello all,


As you all know, you can call APIs with the Alteryx Download tool. However, this tool is not that easy to configure.
On the other hand, the API world relies heavily on tools such as Postman or Bruno (an open-source alternative), which make it easy to test and debug requests. I use one every time I have to work on a REST API, and then translate the request into the final tool (such as the Alteryx Download tool). Both tools offer "collections" (sets of requests) as well as environment configuration. Here are some examples from the project I'm working on:

 

[screenshots: request collections and environment configuration]

And you can even get generated code:

[screenshot: generated request code]

I would like to leverage those collections in my Download tool configuration; that would make it much easier to use!
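
To make the idea concrete, here is a minimal sketch, assuming a collection exported in Postman's v2.1 JSON format and a hypothetical file name: replaying one saved request is roughly what a collection-aware Download tool could do natively.

```python
# Sketch: read one request from a Postman v2.1 collection export and replay it
# with its saved method, URL and headers.
import json

import requests

with open("my_collection.json") as f:      # hypothetical collection export
    collection = json.load(f)

item = collection["item"][0]               # first saved request in the collection
req = item["request"]
headers = {h["key"]: h["value"] for h in req.get("header", [])}
resp = requests.request(req["method"], req["url"]["raw"], headers=headers)
print(item["name"], resp.status_code)
```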

Best regards,

Simon

Apologies if this has been suggested before or already exists. I often find myself using manually maintained Excel files as a data source. These files frequently use cell formatting, such as cell color and text color, to convey important information. However, when these files are imported into Alteryx, this valuable formatting information is unfortunately lost.

 

To address this, a dedicated input tool that can read Excel files with separate fields for these formatting elements would be very helpful. This would be especially beneficial when the data lacks other fields that correspond to the coloring. Currently, I manage to achieve this using a Python tool, but having it as a built-in feature in Alteryx would undoubtedly be more efficient and user-friendly. This enhancement would not only simplify data preparation but also preserve the full context of the original Excel file.
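
For anyone curious, here is a minimal sketch of that Python-tool workaround, assuming openpyxl and a hypothetical file: each cell's value is read together with its fill and font colors as separate fields.

```python
# Sketch: emit one record per cell with its value plus the fill (background)
# and font colors as extra fields.
from openpyxl import load_workbook

wb = load_workbook("manual_input.xlsx")    # hypothetical manually formatted file
ws = wb.active
records = []
for row in ws.iter_rows():
    for cell in row:
        records.append({
            "cell": cell.coordinate,
            "value": cell.value,
            "fill_rgb": cell.fill.start_color.rgb,                         # cell color
            "font_rgb": cell.font.color.rgb if cell.font.color else None,  # text color
        })
```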

 

Hello all,

As of today, when you want to read or create a file on Apache Spark for Databricks, you have only two choices: CSV and Avro.

[screenshot: available file formats, showing only CSV and Avro]

However, the Parquet file type is clearly missing:
- it's faster
- it's better for storage
- it's a standard, and it's already supported as an Alteryx input/output and for HDFS, so it doesn't seem hard to add here.
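
For comparison, here is a two-line sketch with hypothetical paths, assuming a Databricks notebook where a SparkSession named spark is predefined: Parquet is already a first-class, one-line format in Spark itself, which makes its absence from the connector stand out.

```python
# Sketch, run where a SparkSession named `spark` already exists (e.g. Databricks):
df = spark.read.parquet("/mnt/lake/sales")            # read Parquet
df.write.mode("overwrite").parquet("/mnt/lake/out")   # write Parquet
```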

Best regards,

Simon

Hello,

It's nice to have this OpenAI Connector, but it seems to be tied to the default OpenAI URL. In my company, we use OpenAI on an Azure instance, and I'm unable to connect to it.
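
For context, here is a minimal sketch of what connecting to an Azure-hosted instance looks like, assuming the openai Python package (v1+) and hypothetical instance/deployment names: Azure needs a custom endpoint, an API version, and a deployment name instead of the public model name - the fields the connector currently has no place for.

```python
# Sketch: Azure OpenAI differs from the default OpenAI URL in endpoint,
# api_version and deployment name (all hypothetical placeholders here).
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-instance.openai.azure.com",  # hypothetical instance
    api_key="...",                                          # placeholder
    api_version="2024-02-01",
)
resp = client.chat.completions.create(
    model="my-gpt4-deployment",   # Azure deployment name, not a public model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```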

(By the way, I know the pre-sales teams have developed lots of connectors for Fireworks, Mistral, etc. It would be very cool to have them available.)

Best regards,

Simon


Hi all 

 

Currently, when you set your workflow to not write outputs ("Disable All Tools that Write Output" under the Runtime tab of the workflow configuration), the Render and green Output tools become greyed out and do not write an output (as expected).

 

However, this is not the case for connectors. For example, if you use the SharePoint Output tool and click "Disable All Tools that Write Output", it will not be greyed out and will still write an output. Is it possible for these connectors to also not run when this option is selected? Currently, the workaround is to add them to a container and disable it.

Hi, is it possible to add sheet names (for spreadsheet files) to the output of the Directory tool?
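
Until then, here is a minimal sketch of a Python-tool workaround, assuming openpyxl and a hypothetical directory: extend a file listing with each workbook's sheet names.

```python
# Sketch: list each .xlsx file in a directory with its sheet names.
import glob

from openpyxl import load_workbook

for path in glob.glob(r"C:\data\*.xlsx"):  # hypothetical directory
    wb = load_workbook(path, read_only=True)
    print(path, wb.sheetnames)
```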


I know y'all are working on data lineage for some future offering, and it is very much needed. For the highest quality results, please make logs a primary source of lineage information. Because some tools and macros support dynamic naming, the names in the workflows are simple foobar placeholders that do not reflect what actually happened. Today, Connect doesn't use logs and leaves many lineage gaps because of this.

 

Please move this to a more appropriate category if needed. This future feature work is not part of Connect.

Hi all,

 

When I prepare formatted reports for my stakeholders, they want them sent straight to SharePoint, which can be achieved via OneDrive shortcuts on a laptop. However, when the workflow is submitted for full automation, the Server's C drive is not set up with the appropriate shortcuts, and our admin team does not allow it.

 

So my request is to upgrade the SharePoint Output tool to push formatted files to SharePoint.
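
As a stopgap, here is a minimal sketch of one Python-tool workaround, assuming the Office365-REST-Python-Client package and hypothetical site, folder and credentials: the formatted file is written locally first and then uploaded as-is, so its formatting survives.

```python
# Sketch: upload an already formatted workbook to a SharePoint document
# library unchanged (site, folder and credentials are placeholders).
from office365.runtime.auth.user_credential import UserCredential
from office365.sharepoint.client_context import ClientContext

ctx = ClientContext("https://contoso.sharepoint.com/sites/reports").with_credentials(
    UserCredential("user@contoso.com", "password")  # placeholder credentials
)
folder = ctx.web.get_folder_by_server_relative_url("Shared Documents")
with open("formatted_report.xlsx", "rb") as f:
    folder.upload_file("formatted_report.xlsx", f.read()).execute_query()
```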

 

Thank you!

Please update the Render tool to allow users to name the Excel sheet for the output. Alteryx currently errors when using the same naming convention that works in the normal Output tool (appending |||SheetName to the file path).

Multi-Fill Tool

Please consider a new Multi-Fill tool, not for apps, but for regular workflows, whether run manually or scheduled.

Similar to the Interface-tool combination of the Text Box & Action (Update Value) tools, this Multi-Fill tool would enable the user to update, for example, the user name and password in one place for multiple Download tools. It could also be used to update other tool settings, like Filter, Sort, Unique, etc.


It would be neat to add a feature to the Output tool that allows grouping by rows, with all the data related to the group column viewable under a drop-down of the selected field.

 

I've heard that this is possible with a Power Pivot, but it would be a nice built-in feature in Alteryx.

 

Ex. A listing of all customers in a specific city -> group by the "Neighborhood" column; the output should be a list of all neighborhoods in the city, with an option to expand each neighborhood to see its residents and their relevant data.
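
In Excel terms this maps to the row outline (group) feature; here is a minimal sketch assuming openpyxl, with made-up data:

```python
# Sketch: detail rows 2-3 are grouped under an outline so they collapse into a
# drop-down beneath their "Neighborhood" group, as described above.
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
ws.append(["Neighborhood", "Resident"])
ws.append(["Downtown", "A. Smith"])
ws.append(["Downtown", "B. Jones"])
ws.row_dimensions.group(2, 3, outline_level=1, hidden=True)  # collapsible detail
wb.save("grouped.xlsx")
```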

 

Thanks!

Problem statement -

Currently we store our Alteryx data in the .yxdb file format, and whenever we want to fetch the data, the whole dataset is first loaded into memory before a Filter tool can be applied to get the required subset of data from the .yxdb, which is a complete waste of time and resources.

 

Solution -

My idea is to introduce a YXDB SQL statement tool that can be used directly in a workflow to get the required dataset from a .yxdb file. I hope this would reduce the overall runtime of workflows so that users get the desired data in record time, improving performance and reducing memory consumption.
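
The requested pattern is essentially predicate pushdown; here is a minimal sketch of the same idea shown with DuckDB over a Parquet file (an analogy only, since .yxdb has no such engine today, and the file name is hypothetical): the filter is pushed into the scan, so only matching rows are ever read.

```python
# Analogy sketch: SQL is applied directly to the file scan instead of loading
# the full dataset into memory and filtering afterwards.
import duckdb

df = duckdb.sql(
    "SELECT * FROM 'sales.parquet' WHERE region = 'EMEA'"  # hypothetical file
).df()
```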

Our company needs to link a new data source in Athena. We have been able to establish a connection using the Input tool; however, the connection is so slow that it is unusable. We need Alteryx to build an In-Database option for Athena to allow us to link our data lake to Alteryx.

Hey all,

 

At present, if you have an existing canvas and you want to move to a DCM connection, you are asked something like "this will reset all of your connection details - are you sure?". If you have complex queries, or pre- and post-SQL, then you first have to copy all of this out into Notepad before you can convert to DCM and then reconfigure it all again.

 

However, if you are not using DCM, you can change data sources in Workflow Dependencies without losing your queries etc.

 

 


 

 

Could we revisit the user experience of changing to or from a DCM connection to eliminate this "start from scratch" phenomenon? If you are converting from an existing SQL ODBC, ODB, or SSVB connection to a SQL connection via DCM, it should be possible to make this conversion without losing your current configuration, and the same should hold for any other database type.

 

cc: @mbarone 

Hi all,

 

At present, Alteryx does not support DSN-free connections to Snowflake using the Bulk Connector. This is critical functionality for any large company that uses Alteryx, so I'm hoping this can be changed in an upcoming release. As a corollary, every DB connection type has to be able to work without DSNs on any medium or large Server instance, so it's worth extending this check to every DB connection type available in Alteryx.

 

Here are the details:

 

What is DSN-free?

In order to be able to run our Alteryx canvases on a multi-node Server, we have to avoid using DSNs, so we generally expand connection strings that look like this:

odbc:DSN=DSNSnowFlakeTest;UID=Username;PWD=__EncPwd1__|||NEWTESTDB.PUBLIC.MYTESTTABLE

 

to instead have the fully described connection string like this:
odbc:DRIVER={SnowflakeDSIIDriver};UID=Username;pwd=__EncPwd1__;authenticator=Snowflake;WAREHOUSE=compute_wh;SERVER=xnb27844.us-east-1.snowflakecomputing.com;SCHEMA=PUBLIC;DATABASE=NewTestDB;Staging=local;Method=user
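
(For plain ODBC this DSN-free pattern works fine; a minimal sketch assuming pyodbc, with placeholder credentials, shows the driver being named inline - exactly what the Bulk Loader rejects below.)

```python
# Sketch: a DSN-free connection names the driver inline in the connection
# string instead of referencing a DSN configured on the machine.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SnowflakeDSIIDriver};"
    "SERVER=xnb27844.us-east-1.snowflakecomputing.com;"
    "UID=Username;PWD=...;"                 # placeholder credentials
    "WAREHOUSE=compute_wh;DATABASE=NewTestDB;SCHEMA=PUBLIC;"
)
```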

 

For the Snowflake Bulk Loader:

Here, the same process does not work, and Alteryx gives the classic error shown below.

 

With DSN:

snowbl:DSN=DSNSnowFlakeTest;UID=Username;pwd=__EncPwd1__;Staging=local;Method=user|||NEWTESTDB.PUBLIC.MYTESTTABLE

 

Without DSN:

snowbl:driver=SnowflakeDSIIDriver;UID=SeanBAdamsJPMC;pwd=__EncPwd1__;SERVER=xnb27844.us-east-1.snowflakecomputing.com;WAREHOUSE=compute_wh;SCHEMA=PUBLIC;DATABASE=NewTestDB;Staging=local;Method=user|||NEWTESTDB.PUBLIC.MYTESTTABLE

 

Output Data (6) Error SQLDriverConnect: "[Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified", which indicates that the driver details may be wrong.

 

 

Many thanks

Sean

I am a big user of the Browse tool and the filter option within it. In many cases I filter on multiple columns at the same time, as I'm sure many users do. I am suggesting the following two enhancements to the filter functionality in the Browse tool:

 

1. After applying some filters, although I can see the filter icon activate at the top of the tool, it is difficult to know at a glance which columns have filters applied without clicking on every column heading and examining the filter settings. When a column is filtered, a filter icon could be displayed at the top of that column to easily identify filtered columns, removing the need for users to memorise which columns are filtered.

 

2. After applying multiple filters, if a user clicks onto another tool within the workflow or anywhere else on the canvas - even accidentally - all filters are removed and the user needs to reapply them. In my view it would make more sense to make the filters persistent, or at least give users the option of doing so. That would be a big time saver.
