Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

Hi,

I wasted a good chunk of time dealing with non-breaking spaces, and Alteryx could be improved by handling them automatically.

A space is a space, right? Nope: there are ordinary spaces (character code 32) and there are non-breaking spaces (character code 160, U+00A0). They look the same, but behave slightly differently in certain circumstances, such as when text is auto-wrapped.

The Data Cleansing tool cleans spaces, but leaves non-breaking spaces.

The Data Grid puts a warning on cells with leading or trailing spaces, but stays silent about non-breaking spaces.

I was trying to match two strings that looked identical. I had run the cells through Data Cleansing, and the grid showed me nothing wrong with the data. In desperation, I copied the two data cells that I expected to match into a text editor (TextPad) and examined the underlying character codes: one cell had a trailing non-breaking space, and that caused the failure to match.

This was hard to find. For someone less hopelessly nerdy, it would be practically impossible.

As a small change, it would be really useful for Alteryx to include non-breaking spaces in its definition of "space", so that the Data Cleansing tool removes them and the Data Grid flags the cell as having a leading or trailing space.

You can pick up non-breaking spaces from HTML or from Excel. I think mine came from a SQL script, though I'm not sure how it got there. They are out there, and they will bite.
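If it helps, here is the kind of normalisation I'm asking for, sketched in Python (the sample strings are made up):

# Ordinary space is code 32; non-breaking space is code 160 (U+00A0).
def clean_spaces(text):
    if text is None:
        return None
    # Treat NBSP as a space, then trim - what I'd want Data Cleansing to do.
    return text.replace("\u00a0", " ").strip()

a = "match me"
b = "match me\u00a0"           # looks identical, but has a trailing NBSP
print(a == b)                  # False - the invisible mismatch
print(a == clean_spaces(b))    # True after normalising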

I recently did some extensive work on using the Download tool to invoke RESTful web services. A lot of the initial effort went into ensuring that the data being passed in the header and body of the request was exactly what the service required. Following a review of experiences on the Community, I used a tool called Fiddler to view directly what was being sent, so I could identify the problems in my transformations of the data going into the Download tool. The idea: make the raw HTTP request and reply messages available directly in the Results window when running a workflow, removing the need for another tool.
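In the meantime, a rough Python-tool sketch that surfaces the raw messages without Fiddler (the endpoint is a placeholder; requests rides on http.client, whose debug mode prints the wire-level traffic):

import http.client
import requests

# Print the raw request and response lines that are normally hidden.
http.client.HTTPConnection.debuglevel = 1

resp = requests.get("https://example.com/api/resource",   # placeholder endpoint
                    headers={"Accept": "application/json"})
print(resp.status_code)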

Hi, I've noticed that there is no URL-decode function in Alteryx. I guess I'm the first one to need it, so I'm posting this idea here. I think it would not be a big deal to add, given there's already a URL-encode function.

Thanks.
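As a stopgap, the Python tool can do it in one line (a sketch; use unquote_plus instead if your data encodes spaces as '+'):

from urllib.parse import unquote

print(unquote("Caf%C3%A9%20au%20lait"))   # -> Café au lait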

Hi all.

We're heavy users of Salesforce, and one of the reasons we purchased so many licenses was the SFDC support. However, our implementation is set up with a custom domain, and unfortunately the connector does not currently support custom domains.

Please kindly review and consider adding an SFDC authorization option that supports custom domains, as Excel, Tableau, and others do.
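For reference, the only real difference is the login host. A hedged sketch of the OAuth username-password flow against a custom domain (all domain and credential values are placeholders):

import requests

# Custom-domain login host instead of the default login.salesforce.com
resp = requests.post(
    "https://mycompany.my.salesforce.com/services/oauth2/token",  # placeholder domain
    data={
        "grant_type": "password",
        "client_id": "CONSUMER_KEY",          # placeholder
        "client_secret": "CONSUMER_SECRET",   # placeholder
        "username": "user@example.com",
        "password": "PASSWORD_PLUS_SECURITY_TOKEN",
    },
)
resp.raise_for_status()
print(resp.json()["access_token"][:12], "...")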

Hello,

I think it would be extremely useful to have a switch connector available in Alteryx. What I mean by a switch connector is a connecting line with an on/off state that will block the data stream through it when off. Something like below:


[Image: Switch Connector in an "Off" state]

This would be extremely useful when you only want data to flow down some of the paths. In the example above, I might turn the switch connector to off because I want to see the Summarize results without outputting to a document.

The current methods for having a path/set of tools present but unused are insufficient for my needs. The two methods I and Alteryx support were able to find were:
1. Deleting the connecting line - This works, but throws up errors. Even though this is functional, it looks bad when I need to present my Alteryx module and there are errors.
2. Putting the tools in a disabled tool container - I cannot see the tools when the container is disabled. I want to be able to see my tool set-up even when I am not using it.

This is inspired by the use of switches in electrical circuit design.

Please comment if you also think this would be useful, or if you have ideas for ways to improve it further. Thank you!

The Remove Null Rows feature added to the Data Cleansing tool is really nice. However, it doesn't work for a common use case of ours, where key metadata field(s) added to the data stream make rows non-null. We'd like to be able to ignore or exclude one or more fields from the Remove Null Rows evaluation.

Here's a use case starting with an Excel file with multiple tabs, where each tab holds the records for a different Province:

[Screenshot: the sample Excel data, one worksheet per Province]

Note that the 2nd record in Southern is entirely empty, so this is the record we'd like to remove using the Data Cleansing tool.

Since the Province name is only in the worksheet name (and not in the data), I'm using a Dynamic Input tool with "Output File Name as Field" to include the worksheet name so I can parse it out later. The output of the Dynamic Input looks like this:

[Screenshot: the Dynamic Input output, with a populated FileName field]

 

With the FileName field populated, the entire row is not null, and therefore the Remove Null Rows feature of the Data Cleansing tool fails to remove that record:

[Screenshot: the empty Southern record survives the Data Cleansing tool]

What we'd like, then, is the ability when using Remove Null Rows to choose field(s) to ignore or exclude from that evaluation. For example, in the above use case we might tick a "FileName" checkbox to exclude it, and then that 2nd row in Southern would be removed from the data.

There are workarounds using a series of other tools (for example, Multi-Field Formula + Filter + Select), so extending the Data Cleansing tool to support this is a nice-to-have; a sketch of the intended logic follows below.

I've attached the sample packaged workflow used to create this example.
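For anyone needing this today, a minimal pandas sketch of the requested behaviour (the FileName column matches the example above; the other columns and values are made up):

import pandas as pd

df = pd.DataFrame({
    "Employee": ["Ann", None, "Bob"],     # hypothetical sample data
    "Sales":    [100,  None, 200],
    "FileName": ["Southern"] * 3,         # metadata field keeps every row non-null
})

# Drop rows that are entirely null once the excluded field(s) are ignored.
exclude = ["FileName"]
check_cols = [c for c in df.columns if c not in exclude]
print(df.dropna(how="all", subset=check_cols))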

Before Designer 2019.4 there was a "bug" in the workflow statistics collection: under the "SampleModule" data in the UsageGallery collection, the name of the workflow run from within Designer was available. We used that information to determine the workflows most commonly run in our community, as well as to generate a measure of community growth. The "bug" was removed in 2019.4, and now we can only determine the number of runs, not the number of distinct workflows that were run. This idea is to return the workflow name to the information stored in the Mongo database.
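Until then, something like this pymongo sketch is roughly what we reconstruct by hand (the connection string is a placeholder, and the database, collection, and field names are assumptions based on what the old behaviour exposed; adjust to your schema):

from pymongo import MongoClient

client = MongoClient("mongodb://user:password@localhost:27018")  # placeholder

# "UsageGallery" / "SampleModule" are the names the old data surfaced.
usage = client["AlteryxGallery"]["UsageGallery"]
workflows = usage.distinct("SampleModule")
print(len(workflows), "distinct workflows run")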

I know this has been suggested before, but it would be great if calculations and transformations could be cached between workflow executions. Perhaps Browse tools could be configured as caches: any spot that has a Browse tool would fix the value of that node between runs, provided there are no upstream tool changes. The cache could be optional (or flushed) to allow for dynamic input data that could change between executions, even if the tool chain didn't.
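To make the mechanism concrete, a toy sketch of the idea: a node's output is reused between runs as long as a fingerprint of its upstream tool chain is unchanged (all names here are illustrative):

import hashlib, os, pickle

def cached_node(upstream_config: str, compute, cache_dir=".node_cache"):
    os.makedirs(cache_dir, exist_ok=True)
    # Fingerprint the upstream tool chain; any change invalidates the cache.
    key = hashlib.sha256(upstream_config.encode()).hexdigest()
    path = os.path.join(cache_dir, key + ".pkl")
    if os.path.exists(path):                 # upstream unchanged -> reuse
        with open(path, "rb") as f:
            return pickle.load(f)
    result = compute()                       # otherwise recompute and store
    with open(path, "wb") as f:
        pickle.dump(result, f)
    return result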

Right now the Publish to Power BI connector only publishes to "My Workspace". I manage datasets that feed reports for multiple workspaces, some of which are not personal workspaces (so there is no login associated). A drop-down that lets you select any workspace you are a member of would be fantastic!

The workaround right now is to ETL in Alteryx, then save the dataset out to OneDrive. You can then "Publish" the Excel sheet to Power BI natively, and the data refreshes once an hour. This works for some data, but we have use cases that need much higher refresh rates than that. Plus, publishing directly to Power BI would be ideal.
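The underlying REST API already supports this: push datasets can target a group workspace, not just "My Workspace". A hedged sketch (token acquisition omitted; the workspace GUID and dataset shape are placeholders):

import requests

ACCESS_TOKEN = "..."  # Azure AD token; acquisition omitted
GROUP_ID = "00000000-0000-0000-0000-000000000000"  # target workspace (placeholder)

# Push-dataset endpoint scoped to a group workspace rather than "My Workspace".
url = f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}/datasets"
payload = {
    "name": "SalesDataset",
    "tables": [{"name": "Sales", "columns": [
        {"name": "Region", "dataType": "string"},
        {"name": "Amount", "dataType": "Double"},
    ]}],
}
resp = requests.post(url, json=payload,
                     headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
print(resp.status_code, resp.text)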

For those of us obsessed with the look and feel of our workflows, it would be great to have the option to modify the paths of tool connections, as shown in the image below.

[Image: a workflow with manually re-routed connection paths]

I think it would be really good if we had the option to cache data for a few days, as currently cached data gets deleted when you close the workflow.

It's useful to cache data when developing reports with input data from data warehouses or big data platforms, as it can sometimes take a while to extract the data.

If we had the option to cache data for a few days, or to delete it when it's no longer required, it could save a lot of time the next time you open the workflow to complete development or make changes.

Hi, currently the S3 Upload tool only allows the file formats *.yxdb, *.json, *.csv and *.avro.

In order to optimize loading into Redshift, it would be good to have a few more functions:

1. The ability to upload to S3 in *.gz format (see the sketch below)

e.g. reading in a file using the Input tool -> S3 Upload tool (which has a gzip function with the following options: record limit, delimiter, UTF-8)

http://docs.aws.amazon.com/redshift/latest/dg/t_loading-gzip-compressed-data-files-from-S3.html

2. The ability to change the max record limit, delimiter and UTF-8 format

3. The ability to change the objectName to 'take file/table name from field', with the field containing the filename or part of the filename, similar to the Output tool
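For item 1, here's the shape of the workaround we script today with boto3 (the local file, bucket, and key are placeholders):

import gzip
import io
import boto3

# Compress a CSV in memory, then upload the .gz object to S3 for a Redshift COPY.
s3 = boto3.client("s3")

with open("records.csv", "rb") as f:           # placeholder local file
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        gz.write(f.read())
    buf.seek(0)
    s3.upload_fileobj(buf, "my-bucket", "staging/records.csv.gz")  # placeholder bucket/key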

Adrian

At the moment, one of the Union tool errors reads: "The field "abc" is not present in all inputs".

It would be useful if the tool instead said: "The field "abc" is not present in Input(s) #x, y...".

If there are a lot of inputs on the tool, it can take a while to find which input is missing the field.
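To make the ask concrete, the lookup the message needs is cheap to compute; a toy Python sketch (the input numbers and field lists are made up):

# Which inputs are missing field "abc"? The message we'd like the Union tool to build.
inputs = {1: ["abc", "def"], 2: ["def"], 3: ["abc"], 4: ["def", "ghi"]}
missing = [i for i, fields in inputs.items() if "abc" not in fields]
print(f'The field "abc" is not present in Input(s) #{", ".join(map(str, missing))}')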

It would be great if the Text to Columns tool did not drop the last empty value when using Split to Rows.

For example, if I had the data:

RecordID | String
1 | 1,2,3
2 | 1,2,
3 | 1,,

Notice that each String value has two commas (representing three values per cell). If I configure the tool to split to rows on the comma character, what would you expect the result to be?

Result A:

RecordID | String
1 | 1
1 | 2
1 | 3
2 | 1
2 | 2
3 | 1
3 | (empty)

OR

Result B:

RecordID | String
1 | 1
1 | 2
1 | 3
2 | 1
2 | 2
2 | (empty)
3 | 1
3 | (empty)
3 | (empty)

OR

Result C:

RecordID | String
1 | 1
1 | 2
1 | 3
2 | 1
2 | 2
3 | 1

I would expect Result C if I selected "Skip Empty Fields", and that is what happens if I select that option.

But if I do not want to skip empty fields, I would expect Result B; what I actually get is Result A, where the last value/field is dropped.

What would it take to get Result B as the output from the Text to Columns tool?
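For what it's worth, a plain Python split shows the Result B behaviour I'd expect (trailing empties are kept):

rows = {1: "1,2,3", 2: "1,2,", 3: "1,,"}

# str.split keeps trailing empty fields - the Result B behaviour.
for record_id, value in rows.items():
    for token in value.split(","):
        print(record_id, repr(token))
# Record 2 yields '1', '2', '' and record 3 yields '1', '', ''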

With the Designer UI change in 2021.2: when in the filter box, I used to be able to use my mouse to click a little X in the corner and it would immediately clear the filter or sort that had been applied. It's not working as of 2021.2. Now I must navigate to the last cascade to get to the word "Clear" and click on it to clear the filter.

This feels like another very tiny move in the wrong direction. These small UI changes add 2 or 3 steps and slow the diagnostic/navigation process when moving around the Results grid in the Browse tool, or at any point in the flow where the Results grid is used.

Can the X in the top level of the Filter/Sort box in the Results grid be restored in 2021.2?

[Screenshot: the Filter/Sort box in the Results grid]

Related submissions:

Small fix for the UI in the Results Grid (or Browse Tool)

Small Keyboard fix for the browse tool's filter

Our company is still using 9.5, so if this is addressed in 10... I apologize.

Currently the Join tool Options drop-down has [Select --> Select All] and [Select --> Deselect All]. I think an additional [Select --> Select All Left] and [Select --> Select All Right] would be handy.

Thank You

Now that Alteryx releases updates to Designer every quarter, I'll likely be updating my copy of Designer frequently. Meanwhile, my IT team doesn't want to have to update Server every quarter to stay compatible. The problem is, workflows I create in the latest version of Designer can't run on the older version of Server, nor on the Gallery.

Some features that would allow me to work around this:

  1. Letting me elect which version I want to use when uploading to the Gallery.
  2. Letting me upload workflows from the Gallery website, by navigating to a folder in my directory and selecting a given workflow, instead of having to upload from within Designer (which opens the workflow in whatever version I have installed on my machine). That way I could open the workflow in Notepad beforehand and alter the version number to match Server.

I'm guessing this is a niche problem that few others will encounter:

  1. Not everyone is as big a nerd as me and will insist on updating Designer each quarter.
  2. Other companies may have IT teams that update Server each quarter.
  3. You can install an admin and a non-admin version of Alteryx on your machine (I plan on doing this once IT responds to my internal service request).
    1. You could use the admin version for the latest and greatest version of Alteryx.
    2. You could use the non-admin version to match whatever version of Server IT has installed, and use that to upload (first opening the workflow in Notepad to manually overwrite the version number to match Server; see the sketch below).
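For anyone scripting the Notepad trick, a hedged sketch of the edit (yxmdVer is the attribute I see on the AlteryxDocument root of my .yxmd files; verify against your own, and the file names and target version are placeholders):

import xml.etree.ElementTree as ET

# Rewrite the Designer version stamp so an older Server will accept the workflow.
tree = ET.parse("MyWorkflow.yxmd")           # placeholder path
root = tree.getroot()                        # <AlteryxDocument yxmdVer="...">
print("current version:", root.get("yxmdVer"))
root.set("yxmdVer", "2020.4")                # match the Server version (placeholder)
tree.write("MyWorkflow_server.yxmd", xml_declaration=True, encoding="utf-8")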

Both the Input and Output tools should have the ability to read or write any file type from/into the standard compression formats (ZIP and GZIP). This would be helpful when managing large files.
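As a point of comparison, pandas already infers compression from the file extension on both read and write (the file names are placeholders):

import pandas as pd

df = pd.read_csv("big_file.csv.gz")    # reads a gzip-compressed CSV directly
df.to_csv("subset.zip", index=False)   # writes the frame into a ZIP archive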

I'm just submitting @neilgallen's idea from here. The labels in the Results window are still white, which makes them no longer visible; you can barely see that they're indeed still there when you hover over them.

[Screenshot: the barely visible white labels in the Results window]

We would like some enhancements to the Salesforce connectors (Input and Output) to allow:

- Either the Batch or the Bulk API to be used. The Batch API is much better for smaller jobs, while the Bulk API is better for larger jobs (larger numbers of records). It would be very useful to let the user select which API the tool uses, so the most efficient one can be chosen.

- The number of records per batch to be defined in the tool. I know this can be achieved using a batch macro, but it would be far easier (from a user's point of view) to enter this value in the Salesforce connector and have it manage the batch size. We frequently have issues with the batch size being too large, causing Salesforce errors (and records not updating).
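To illustrate the ask, here is roughly what this control looks like in the simple_salesforce Python library, where the caller picks the API and the batch size (credentials and the object/field names are placeholders, and exact parameters may vary by library version):

from simple_salesforce import Salesforce

# Placeholder credentials; a custom domain would also slot in here.
sf = Salesforce(username="user@example.com", password="...",
                security_token="...")

records = [{"LastName": f"Test{i}"} for i in range(25000)]

# Bulk API with an explicit batch size - the knob we'd like on the connector.
results = sf.bulk.Contact.insert(records, batch_size=5000)
print(sum(1 for r in results if r["success"]), "records inserted")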
