The Product Idea boards have gotten an update to better integrate them within our Product team's idea cycle! However, this update does have a few unique behaviors; if you have any questions about them, check out our FAQ.

Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!
Submitting an Idea?

Be sure to review our Idea Submission Guidelines for more information!


Featured Ideas

Would it be possible to update the Salesforce Input tool to support API version 49 or later?

Changes were made to the way recurring events are handled in the Salesforce Lightning update, and the current Salesforce input connector does not include all events when extracting.
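
For context, here is a minimal sketch of what targeting REST API v49.0 looks like from Python; the instance URL, token, and query are hypothetical placeholders, not the connector's actual implementation:

```python
import requests

# Hypothetical instance URL and OAuth token -- illustration only.
INSTANCE = "https://yourInstance.my.salesforce.com"
TOKEN = "<access-token>"

# The API version is encoded in the REST endpoint path; v49.0 is the
# version this idea asks the Input tool to support.
url = f"{INSTANCE}/services/data/v49.0/query"
params = {"q": "SELECT Id, Subject, ActivityDate FROM Event"}
resp = requests.get(url, params=params,
                    headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
print(resp.json()["totalSize"])
```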

When inputting a CSV file via the Azure Data Lake File Storage tool, the default behaviour is for the first row to be interpreted as data.

 

When reading the same file locally using the File Input tool, the default behaviour is for the first row to be interpreted as headers.

 

Since the majority of files will include headers on the first row, it would be helpful to have the "First row contains field names" option selected by default in the Azure Data Lake File Storage tool. This would also bring the tool's defaults in line with the standard File Input tool.

 

Illustration below showing the issue:

[Screenshot: the first row of the CSV interpreted as data rather than headers]
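
For comparison, a minimal pandas sketch of the two defaults being contrasted (the file name and columns are hypothetical):

```python
import pandas as pd

# "First row contains field names" ON (File Input tool default):
with_headers = pd.read_csv("sales.csv", header=0)

# "First row contains field names" OFF (current Azure Data Lake
# File Storage tool default): the first row is read as data and
# generic integer column names (0, 1, 2, ...) are generated instead.
as_data = pd.read_csv("sales.csv", header=None)

print(with_headers.columns.tolist())  # e.g. ['Region', 'Amount']
print(as_data.iloc[0].tolist())       # ['Region', 'Amount'] as a data row
```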

 

Hello,

Regarding the Amazon S3 tools in Alteryx Designer, only four file formats are supported.

We would like to see the following formats supported as well: .xls and .xlsx.

 

Regards.
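
Until the tools support this natively, a rough Python workaround is possible with boto3 and pandas; the bucket and key names below are placeholders, and reading .xlsx assumes the openpyxl package is installed:

```python
import io

import boto3
import pandas as pd

# Hypothetical bucket/key names -- adjust to your environment.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-bucket", Key="reports/monthly.xlsx")

# Read the Excel bytes into a DataFrame; requires openpyxl for .xlsx.
df = pd.read_excel(io.BytesIO(obj["Body"].read()), engine="openpyxl")
print(df.head())
```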

There are a number of requests for bulk loaders to DBs, and I'm adding MySQL to the list.

 

Really, every DB connection (on-prem and cloud) needs bulk loader capabilities added (if it doesn't have them already).
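
As a reference point, this is roughly what MySQL's native bulk-load path looks like from Python; the connection details and file layout are hypothetical, and it assumes the server permits local infile loads:

```python
import mysql.connector  # mysql-connector-python package

# LOAD DATA LOCAL INFILE is MySQL's native bulk-load path: it streams a
# flat file into a table far faster than row-by-row INSERTs.
conn = mysql.connector.connect(
    host="db.example.com", user="loader", password="...",
    database="analytics", allow_local_infile=True,
)
cur = conn.cursor()
cur.execute("""
    LOAD DATA LOCAL INFILE '/tmp/orders.csv'
    INTO TABLE orders
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    IGNORE 1 LINES
""")
conn.commit()
conn.close()
```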

I know that incoming and outgoing connections can be wired and wireless, and that they will highlight when one clicks on a tool. However, it would be very useful to be able to highlight a particular connector in a particular colour (selected from a palette, perhaps, from the drop-down window, or from the configuration). This would be especially useful when there are many connectors originating from a single tool.

Thanks

I have had multiple instances of needing to parse a set of PDF files. While I realize that this has been discussed previously with workarounds here: https://community.alteryx.com/t5/Alteryx-Knowledge-Base/Can-Alteryx-Parse-A-Word-Doc-Or-PDF/ta-p/115...

having a native PDF input tool would help me significantly. I don't have admin rights to my computer (at work), so downloading a new app to then use the Run Command tool is inconvenient, requires approval from IT, etc. A native tool would save me (and I'm sure others) time, both in each Alteryx workflow that needs it and in the initial setup of getting the PDFtoText program installed.
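
In the meantime, a pure-Python sketch that avoids the separate-executable problem entirely; the folder name is hypothetical, and it assumes the pdfminer.six package is available:

```python
from pathlib import Path

# pdfminer.six is a pure-Python PDF text extractor, so it needs no
# separate executable or admin install -- only a pip install.
from pdfminer.high_level import extract_text

for pdf in Path("invoices").glob("*.pdf"):  # hypothetical folder
    text = extract_text(pdf)
    # Each document's text arrives as one string; downstream parsing
    # (RegEx / Text To Columns in Alteryx terms) starts from here.
    print(pdf.name, len(text))
```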

As we have the option to name the connector line in the Connection Configuration, the option to color-code those lines would be of great help.

I noticed that Tableau has a new connector to Anaplan in the upcoming release. 

 

Does Alteryx have any plans to create an Anaplan connector? 

https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Limited-access-to-the-table-contents-i...

 

Please follow the link above.

 

We are seeing huge demand from users to support this feature.

 

As of version 2019.4, Alteryx can't connect to a table unless complete read access is granted on it.

Other data connector tools handle this; Alteryx does not.

 

Please consider this as a feature request.

Hello Alteryx,

 

It seems that the Endpoint parameter for the Amazon S3 Upload tool only supports "path-style" URLs. It would be great if the Endpoint parameter could also accept "virtual-hosted" style URLs.

 

When we enter a virtual-hosted URL, the "Bucket Name" and "Object Name" parameters don't respond correctly.

The three-dots option for the "Bucket Name" parameter returns the bucket name and the object name at the same time, and the three-dots option for the "Object Name" parameter doesn't suggest any object names.

We can enter those manually, but we lose some of the Alteryx functionality.

 

It would be a great improvement if the Endpoint parameter accepted virtual-hosted URLs so that we keep the "Bucket Name" and "Object Name" suggestions once the Endpoint is registered.

 

Is it in the roadmap?

 

François
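
For reference, boto3 exposes this distinction explicitly, which is roughly the behaviour being requested above; the bucket, region, and endpoint are placeholders:

```python
import boto3
from botocore.config import Config

# Path-style:     https://s3.eu-west-1.amazonaws.com/my-bucket/my-key
# Virtual-hosted: https://my-bucket.s3.eu-west-1.amazonaws.com/my-key
# boto3 lets the caller choose explicitly via addressing_style.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.eu-west-1.amazonaws.com",
    config=Config(s3={"addressing_style": "virtual"}),  # or "path"
)
print(s3.list_objects_v2(Bucket="my-bucket", MaxKeys=5))
```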

 

When we try to call an external website from the Alteryx Designer Download tool, our company proxy server fails the authentication because Alteryx uses basic login/password authentication. This has happened with multiple applications that need to interact with external partners. We would like to request an enhancement to enable Alteryx to authenticate using Kerberos or NTLM.
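
For illustration, a sketch of what a Negotiate/Kerberos handshake looks like with the requests library and the requests-kerberos package; the hostnames are hypothetical, and a proxy that rejects Basic credentials with a 407 would need the same handshake applied at the proxy hop, which is what this idea asks the Download tool to perform:

```python
import requests
from requests_kerberos import HTTPKerberosAuth, OPTIONAL

# SPNEGO/Kerberos authentication instead of Basic login/password.
auth = HTTPKerberosAuth(mutual_authentication=OPTIONAL)
resp = requests.get(
    "https://partner.example.com/api/data",
    auth=auth,
    proxies={"https": "http://proxy.corp.example.com:8080"},
)
print(resp.status_code)
```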

As you may know, querying Hive for metadata is currently very slow in Alteryx.

 

A first step of improvement (at least in the Visual Query Builder) has been proposed here:

Smartest VQB

 

But the real issue for Hive is the way Alteryx queries the metadata: it issues "SHOW TABLES" queries for every database. On our cluster, that means more than 400 queries, each taking about 0.5 seconds, so the user has to wait about 4 minutes.

A solution: use a Java API to query the Hive metastore directly (it could be another tab in the In-Database configuration). Our cluster admin has an example of a Thrift API in Java that we can give you.

Result: 2 seconds for 38,700 tables in more than 500 databases!
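
A rough Python equivalent of that approach, using the third-party hmsclient package to talk to the metastore's Thrift port directly; the host and port are assumptions for a typical cluster:

```python
from hmsclient import hmsclient

# Hypothetical metastore host; 9083 is the conventional Thrift port.
client = hmsclient.HMSClient(host="metastore.example.com", port=9083)
with client as c:
    # One Thrift round-trip per call, instead of one "SHOW TABLES"
    # query per database through HiveServer2.
    for db in c.get_all_databases():
        print(db, len(c.get_all_tables(db)))
```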

Our company is implementing an Azure Data Lake and we have no way of connecting to it efficiently with Alteryx. We would like to push data into the Azure Data Lake store and also pull it out with the connector. Currently, there is no out-of-the-box solution in Alteryx, and it requires a lot of effort to push data to Azure.
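
For reference, a minimal sketch of push and pull against the current Azure Data Lake Storage Gen2 SDK; the account, filesystem, path, and credential are all placeholders:

```python
from azure.storage.filedatalake import DataLakeServiceClient

# Hypothetical account/filesystem names; auth via account key here,
# though Azure AD credentials work the same way.
service = DataLakeServiceClient(
    account_url="https://myaccount.dfs.core.windows.net",
    credential="<account-key>",
)
fs = service.get_file_system_client("raw")

# Push data into the lake...
file = fs.get_file_client("pos/daily_sales.csv")
file.upload_data(b"store,amount\n001,99.50\n", overwrite=True)

# ...and pull it back out.
print(fs.get_file_client("pos/daily_sales.csv").download_file().readall())
```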

Not sure if any of you have a similar issue, but we often end up bringing in some data (either from a website or a table) to profile it, and then an hour in, we realise that the data will probably take 6 weeks to completely ingest, even though it has already taken in enough rows to give us a useful sense.

 

Right now, the only option is to stop (in which case all the profiling tools at the end of the flow will give you nothing) and then restart with a row-limiter, or let it run to completion. The tragedy of the first option is that you've already invested an hour or two in the data extract, but you cannot make use of it.

 

It feels like there's a third option: an option to "stop bringing in new data, but finish the data that you currently have", which terminates any Input or Download tools in their current state and lets the remainder of the data flush through the full workflow.

 

Hopefully I'm not alone in this need 🙂

Pushing data to Salesforce from Oracle would be much easier if we were able to perform an UPSERT (update if existing, insert if not existing) on any unique ID field in Salesforce, instead of having to filter to find the records that have or don't have an ID and then run an Update or Insert based on the filter.
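
For reference, Salesforce's native upsert is a single call keyed on an external ID field. This sketch uses the simple-salesforce package; "Oracle_Id__c" and the credentials are hypothetical:

```python
from simple_salesforce import Salesforce

# Hypothetical credentials -- illustration only.
sf = Salesforce(username="user@example.com", password="...",
                security_token="...")

# Update if a Contact with this external ID exists, insert otherwise --
# no separate filter/Update/Insert branches needed.
sf.Contact.upsert("Oracle_Id__c/A1001", {"LastName": "Smith",
                                         "Email": "smith@example.com"})
```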

On a shared collection, users have access to the collections shared by other team members. When users copy the 'Publish to Tableau Server' tool from one workflow to another, it is copied with the credentials embedded in the tool as well.

 

For example, user John Doe's workflow publishes data to Tableau Server with Peter's credentials, because the Publish to Tableau Server tool was copied from Peter's workflow.

 

The real concern is that users copying tools from one workflow to another copy the credentials as well. An enhancement to the Publish to Tableau Server tool would be much appreciated.

I recently began using the SharePoint Files v2.0.1 tools to read and write data. The SharePoint Files Output tool allows you to take a sheet or file name from a column, but that column is still included in the output. The standard Output Data tool has a "Keep Field in Output" checkbox that lets you control whether the column stays in the XLSX or CSV file. It would be great if this same functionality could be included in the SharePoint Files Output tool.

[Screenshot: the Output Data tool's "Keep Field in Output" checkbox]
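
For illustration, the requested behaviour amounts to grouping on the filename column and dropping it before writing, as this hypothetical pandas sketch shows (writing .xlsx assumes openpyxl is installed):

```python
import pandas as pd

# Hypothetical frame where "FileName" drives the output file name.
df = pd.DataFrame({"FileName": ["east.xlsx", "west.xlsx"],
                   "Region": ["East", "West"], "Amount": [100, 200]})

# What "Keep Field in Output" unchecked does in the Output Data tool:
# group on the column, then drop it before writing each file.
for fname, part in df.groupby("FileName"):
    part.drop(columns="FileName").to_excel(fname, index=False)
```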

My company has recently purchased some Alteryx licences with the hope of advancing its data science capability. The business is currently moving all its POS data from on-premise to a cloud environment and has identified Azure Cosmos DB as the perfect environment to house the streaming data. Having purchased the Alteryx licences, we now face the challenge of not being able to connect to Azure Cosmos DB, and we would like Alteryx to consider speeding up the development of this connector.
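
For reference, a minimal sketch of reading from Cosmos DB with the azure-cosmos SDK; the account endpoint, key, database, and container names are placeholders:

```python
from azure.cosmos import CosmosClient

# Hypothetical endpoint/key/container names.
client = CosmosClient("https://myaccount.documents.azure.com:443/",
                      credential="<primary-key>")
container = client.get_database_client("pos").get_container_client("sales")

# Read streaming POS documents back out with a SQL-like query.
for item in container.query_items(
        query="SELECT c.storeId, c.amount FROM c WHERE c.amount > @min",
        parameters=[{"name": "@min", "value": 50}],
        enable_cross_partition_query=True):
    print(item)
```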

I would like to raise the idea of creating a feature that resolves the repetitive authentication problem between Alteryx and Snowflake.

 

This is the same issue that was raised in the community forum on 11/6/18: https://community.alteryx.com/t5/Alteryx-Designer-Desktop-Discussions/ODBC-Connection-with-ExternalB...

 

Can a feature be added to store the authentication during the session and eliminate the pop-up browser? The proposed solution eliminates the prompt for credentials; however, it does not eliminate the browser pop-ups. For the input/output functions, this opens four new browser windows, one for each time Alteryx tests the connection.
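
For what it's worth, the Snowflake Python connector exposes a token-caching flag for exactly this scenario; whether the ODBC driver and Alteryx surface it is the open question. The account name is a placeholder, and this assumes the Snowflake account allows ID-token caching server-side:

```python
import snowflake.connector

# client_store_temporary_credential caches the SSO token after the
# first browser prompt, so subsequent connections in the same session
# reuse it instead of opening a new browser window.
conn = snowflake.connector.connect(
    account="myorg-myaccount",  # hypothetical account identifier
    user="user@example.com",
    authenticator="externalbrowser",
    client_store_temporary_credential=True,
)
print(conn.cursor().execute("SELECT CURRENT_USER()").fetchone())
```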

As Tableau has continued to open more APIs with their product releases, it would be great if these could be exposed via Alteryx tools.

 

One specifically that I think would make a great tool is the Tableau Document API (link), which allows for things like:

 

- Getting connection information from data sources and workbooks (Server Name, Username, Database Name, Authentication Type, Connection Type)

- Updating connection information in workbooks and data sources (Server Name, Username, Database Name)

- Getting Field information from data sources and workbooks (Get all fields in a data source, Get all fields in use by certain sheets in a workbook)

 

For those of us who use Alteryx to automate much of our Tableau work, having an easy tool to read and write this info (instead of writing a Python script) would be beneficial.
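
For context, a minimal sketch of what that Python script looks like today with the tableaudocumentapi package; the workbook path and new server name are placeholders:

```python
from tableaudocumentapi import Workbook

# The Tableau Document API (pip install tableaudocumentapi) reads and
# edits .twb/.twbx files directly; "report.twb" is a placeholder path.
wb = Workbook("report.twb")

for ds in wb.datasources:
    for conn in ds.connections:
        # Get connection information from the workbook's data sources.
        print(conn.server, conn.username, conn.dbname)
        # Update connection information in place.
        conn.server = "newserver.example.com"

wb.save_as("report_updated.twb")
```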
