
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

(1) I would like to have more text formatting options available in the Comment Tool, such as:

  • set a different format for selected words (color, bold, underline, size, etc.)
  • indenting
  • bullet list or numbered list

(2) Option to remove or recolor the blue outline of the comment box. (Especially when I have a comment in a color-filled comment box, I would prefer a comment box without a dark outline.) 

 

(3) UX - Add an arrow cursor to indicate resizing functionality

I know it sounds trivial, but I hate having to do the extra click to get the Browse tool to pop out. I just upgraded from 2020.2 to 2021.3. Before, you could pop out a Browse window in two clicks:

[Screenshot: popping out a Browse window in two clicks (2020.2)]

 

Now you need 3 clicks:

[Screenshot: popping out a Browse window in three clicks (2021.3)]

 

Like I said, I know it sounds trivial, but when you do this dozens of times a day, it adds up to a big annoyance. 

 

Anyway, I was just wondering if enough others felt the same, and if so, hopefully the Browse behavior could be reverted to a two-click pop-out.

 

Although I must say that I just LOVE the comma inserter.

Could you expose a link to the Keyboard Shortcuts page (which is here: https://help.alteryx.com/2019.4/HotKeys_Shortcuts.htm?Highlight=keyboard%20shortcuts) on the primary Help menu (screenshot below)?

 

This would help people get faster in Alteryx by exposing these shortcuts to more users.

 

[Screenshot: Help menu]

In short:
Add an option to cache the metadata for a particular tool so that configuration isn't forgotten for tools that have dynamic metadata (such as batch macros) or whose metadata the Alteryx metadata engine can't resolve (such as the Python tool).


Longer explanation:

The Problem:

One of the issues I often encounter when making dynamic workflows, or ones that call external services, is that Alteryx often forgets the metadata of which columns to expect. This causes the workflow to forget the configuration of downstream tools when it is first opened or when the metadata engine refreshes. There is currently an option to stop the metadata engine from refreshing automatically, but that isn't a good option because you miss out on much of the value it brings.

 

Some of the common tools where I encounter this issue:

  • JSON Parse
  • Batch macros
  • Python tool
  • RegEx parsing to rows

 

Solution:

Instead, could we add an option to cache the metadata for a particular tool? This would save the metadata from the last time the workflow ran into the workflow's XML, so that it persists when the workflow is closed and reopened. Then, when the metadata engine reaches this tool, instead of resolving the metadata from the tool itself, it would use the saved version in the XML. Obviously an actual run would ignore the cache, and any errors would still occur.
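To make the idea concrete, here is a minimal sketch of what persisting per-tool metadata into the workflow XML could look like. The <CachedMetaInfo> element and its attributes are hypothetical, not part of the existing .yxmd schema; only the <Node ToolID="..."> structure is real:

```python
import xml.etree.ElementTree as ET

def cache_tool_metadata(workflow_path: str, tool_id: str, fields: dict) -> None:
    """Store the last-run field list for one tool inside the workflow XML,
    under a hypothetical <CachedMetaInfo> element on the tool's <Node>."""
    tree = ET.parse(workflow_path)
    for node in tree.getroot().iter("Node"):
        if node.get("ToolID") != tool_id:
            continue
        # Replace any previously cached metadata for this tool.
        for old in node.findall("CachedMetaInfo"):
            node.remove(old)
        cache = ET.SubElement(node, "CachedMetaInfo")
        for name, field_type in fields.items():
            ET.SubElement(cache, "Field", name=name, type=field_type)
    tree.write(workflow_path, encoding="utf-8", xml_declaration=True)

# Example: remember that Python tool 12 last emitted these two columns.
cache_tool_metadata("my_flow.yxmd", "12", {"order_id": "Int64", "payload": "V_WString"})
```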

 

This could be an option in the navigation pane of each tool. Mockup below:

[Mockup: per-tool option to cache metadata]


This would make developing dynamic workflows far easier and resolve the issue of configuration being lost when the metadata changes and Alteryx forgets the options.

Hello all,

 

A few weeks ago Alteryx announced In-DB support for GBQ. This is an awesome idea; however, to make it run you must use OAuth2 authentication, which means the GBQ API has to be enabled. As of now, it is possible to use the Simba ODBC driver to connect to GBQ. My idea is to enhance the connection/authentication method we have today with Simba ODBC for Google BigQuery and support In-DB through it. The current approach is not easy for IT to implement in big organizations, given the number of GBQ projects and the need to enable the API for each application. Enhancing the functionality via ODBC would be an awesome solution.

 

Thank you for voting

Albert

Currently the In-DB tools require you to select a DSN that is defined on your computer.

 

This makes any workflow which uses a DSN incredibly painful to deploy to the server, since the DSN needs to be created on every worker node. For a large server environment this can mean creating DSN entries on 6+ worker nodes in prod, plus the prod server, plus the dev/UAT environments and their worker nodes.

 

Could we please change the In-DB tools to default to DSN-less connections, where the connection details are persisted in-line so that the workflow can deploy to the server without any DSN setup (since all connection details are contained within the connection string)?
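For anyone unfamiliar with the distinction, here is the same contrast in plain pyodbc; the driver, server, and database names are made up, but it shows how a DSN-less string carries everything a worker node needs:

```python
import pyodbc

# DSN-based: only works if a "MyWarehouse" DSN has been created on this
# machine, so every Server worker node needs the same entry.
conn_dsn = pyodbc.connect("DSN=MyWarehouse;UID=svc_alteryx;PWD=secret")

# DSN-less: the same details travel inside the string itself, so the
# workflow deploys to any node that merely has the driver installed.
conn_dsnless = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=sql-prod-01;Database=Sales;Trusted_Connection=yes;"
)
```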

 

Thank you

Sean

CC: @rijuthav @jithinmony @HengHe @RajK @ydmuley @revathi @Deeksha @MPistone @Ari_Fuller @Arianna_Fuller @JoshKushner @samnelson @avinashbonu @Sunder_Sriram @Rahul_Thakur @Rahul_Singh

Hi all,

 

The Publish to Tableau Server tool is great, but it requires a username and password. If you are using AD, there is a chance that your users don't have a password. In that case, you probably have a technical user that you share across the team. This is not an ideal situation, and you lose the governance around the data.

 

Fortunately, there is an easy workaround. You can leverage personal access token authentication: https://help.tableau.com/v2019.4/server/en-us/security_personal_access_tokens.htm

 

The advantage of this method is that it logs in with your user, and your data source is uploaded under your name. This still uses the Tableau REST API, so the changes needed in the current macro are MINOR.

 

Changes needed in the current macro:

 

1. Add an authentication method parameter with two choices: Username/Password; Personal Token.

2. If Personal Token is selected, add two parameters: Token_Name and Token_Value.

3. In the TableauServer.Login supporting macro, improve formula (13) to change the payload based on the user's selection. If Username/Password, keep it as is; otherwise use the syntax here: https://help.tableau.com/current/api/rest_api/en-us/REST/rest_api_concepts_auth.htm#make-a-sign-in-r... (a sketch of the token sign-in follows below).
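For reference, here is a minimal sketch of the PAT sign-in against the REST API, independent of the macro itself; the server URL, site, and token values are placeholders:

```python
import requests

SERVER = "https://tableau.example.com"  # placeholder server

payload = {
    "credentials": {
        "personalAccessTokenName": "my-token-name",
        "personalAccessTokenSecret": "my-token-secret",
        "site": {"contentUrl": ""},  # "" = the default site
    }
}

resp = requests.post(
    f"{SERVER}/api/3.6/auth/signin",  # API 3.6 = Tableau 2019.4, where PATs arrived
    json=payload,
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

# The returned token goes into the X-Tableau-Auth header of later calls,
# exactly as with username/password sign-in.
auth_token = resp.json()["credentials"]["token"]
```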


This is quite a straightforward change but could help a lot of companies using Alteryx.

Could you please implement these changes to strengthen this tool?

 

Thanks a lot,

When I import an Excel file into Alteryx I get an error: “shared strings root=x:sst” and Alteryx cannot read the file.

I can work around this by manually opening and saving the Excel file before importing it into Alteryx, but this is not ideal, especially considering the automation implications.

I believe this may be happening because the XLSX generated by the source of the report has an “x:” prefix on all the tags in the shared strings XML embedded in the file. See: https://learn.microsoft.com/en-us/office/open-xml/working-with-the-shared-string-table

Essentially, it would appear Alteryx is not able to read generated Excel sheets which have the "x:" prefix (e.g. files produced by a bot). The same file, once opened and saved manually in Excel, can be read by Alteryx correctly.

 

Example of the file as exported from the “BOT”:

[Screenshot: sharedStrings.xml with x:-prefixed tags]

 

How the same file looks once it is manually opened and saved:

[Screenshot: sharedStrings.xml after a manual open and save, without the prefix]

Ideally Alteryx would read the file as-is, i.e. with the "x:sst" tag seen above, as having to manually open and save the Excel file before it can be read is rather clunky.
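In the meantime, the prefix can be stripped programmatically instead of via a manual open-and-save. A rough sketch, assuming the prefix is literally "x:" as in the screenshots above:

```python
import re
import zipfile

def strip_sst_prefix(src_xlsx: str, dst_xlsx: str) -> None:
    """Copy an .xlsx, rewriting xl/sharedStrings.xml so the spreadsheetML
    namespace becomes the default one and tags lose their 'x:' prefix."""
    with zipfile.ZipFile(src_xlsx) as src, \
         zipfile.ZipFile(dst_xlsx, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            data = src.read(item.filename)
            if item.filename == "xl/sharedStrings.xml":
                text = data.decode("utf-8")
                text = text.replace("xmlns:x=", "xmlns=")  # make namespace default
                text = re.sub(r"(</?)x:", r"\1", text)     # <x:sst> -> <sst>
                data = text.encode("utf-8")
            dst.writestr(item, data)

strip_sst_prefix("bot_export.xlsx", "alteryx_friendly.xlsx")
```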

 

Thanks!

When using an Output tool, you can currently only output to one (1) format. My idea is to add a checkbox to also create a YXDB file when you output to another format. In many instances a copy of the CSV data in YXDB format is needed, and creating another output requires another tool with nearly identical configuration. An example: this gives me a backup copy of what I sent the customer.

 

Cheers,

 

Mark

Taking inspiration from how you work with Jupyter notebooks, using the notebook to show your workings, wouldn't it be great if you could document your workflows directly on the canvas in more of a notebook style?

I think this essentially can be summarised down into two features:

1) Markdown functionality in the comment tool

2) Ability to import results from the IRG and/or Browse tool directly onto the canvas.

 

I have mocked up a version of what this could look like in the screenshot below.

[Mockup: notebook-style documentation on the canvas]

 

Overall I think it would improve the experience of documenting workflows as you can show your workings in-line while building the workflow. Plus it solves the debate around team vertical vs team horizontal as you build using both! 

 

It would be very useful to be able to search for a field by typing its name instead of scrolling up and down looking for it among a few hundred fields.

 

[Screenshot: search-as-you-type in the Select tool]

In a controlled environment, there is a need to control the promotion of assets to prod with basic controls to ensure that someone has tested and signed off, and that the asset meets certain quality standards (like "no warnings", "no Run Commands", "all reports must have the company logo on the top left", etc.).

 

However, at the moment there doesn't appear to be a promotion process in Alteryx to control this flow, so assets are copied across by an admin. This is very manual and error-prone (many times we've had the wrong assets copied), and it also means that this process is controlled in a workflow outside of Alteryx (e.g. a JIRA queue or similar).

 

Could we request that Alteryx look into a production promotion process, which allows the admin team to perform any required checks (including automated checks) and then pushes the asset into prod stamped with the designer's Kerberos rather than the sysadmin's?

 

Joint idea with @avinashbonu @DamianA @BenBu


Have you ever had the business deliver an Excel (EEK!) file to be passed into Alteryx with a different number of header rows (because it looks pretty and is convenient)? Never, you say? Lies! 

 

I would suggest adding an option to the Input Data tool that would give us the ability to concatenate multiple header rows. This would enable accurate data profiling for columns on output and eliminate loss from unnecessary conversion errors. Currently, the options allow us to Start Data Input on Line X; however, if a column's header spans multiple rows, headers have to be entered manually after input, because you can only start at the lowest header row to ensure the data itself is passed accurately. The solution would be to specify the number of rows that contain headers, concatenate them into a single row (ignoring nulls and carriage returns), and then output that as the header.

 

The current functionality, in a situation where each file has a variable number of header rows, forces errors such as a numeric value being converted to a scientific-notation string.
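As a rough illustration of the requested behavior (using pandas rather than anything Alteryx-specific), this is what "concatenate the first N rows into one header" amounts to; the file name and row count are made up:

```python
import pandas as pd

def read_with_multirow_header(path: str, n_header_rows: int) -> pd.DataFrame:
    """Read a sheet whose header spans several rows, joining the header rows
    into single column names while ignoring blank cells."""
    raw = pd.read_excel(path, header=None)
    header_block = raw.iloc[:n_header_rows].fillna("")
    columns = [
        " ".join(str(cell).strip() for cell in header_block[col] if str(cell).strip())
        for col in header_block.columns
    ]
    body = raw.iloc[n_header_rows:].reset_index(drop=True)
    body.columns = columns
    return body

df = read_with_multirow_header("pretty_report.xlsx", n_header_rows=2)
```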

The more Python and R development I do, the more I want to use the shortcut [CTRL] + [ENTER] to run my workflow.

 

Is it possible to add this as a second way to run the workflow? 

 

I'm thinking it's going to have to have a new shortcut anyway with Cloud, as [CTRL] + [R] would refresh the page! :D

 

Asking for a friend :D

There are many circumstances where you have to build an iterative macro in which it's not just the iterating data set that needs to change every iteration, but also a second data set.

Think about this like a loop where two different variables are updated on every iteration, not just the control variable of the loop.
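In ordinary code the pattern is unremarkable; a toy Python loop to show the shape (the data is made up, the point is simply that two collections change on every pass):

```python
work_list = [16, 9, 5]   # the iterating record set (what macro input/output handles today)
totals = []              # a second record set that must also change each pass

while work_list:
    item = work_list.pop(0)
    running = (totals[-1] if totals else 0) + item
    totals.append(running)           # second data set updated
    if item > 1:
        work_list.append(item // 2)  # first data set updated too

print(totals)
```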

 

The way that users work around this is to use a temporary yxdb file: instead of a macro input you read from the yxdb, and then write back to the same yxdb. This allows you to pretend that you can adjust two different data sets on every cycle of the loop. There are four downsides to this:
a) User complexity - this breaks the conceptual simplicity of macro inputs, since users now have to understand that in situation X they use macro inputs, and in situation Y they have to use some other type of tool.

b) Speed penalty - writing to disk is between 1,000x and 1,000,000x slower than working with data in memory (especially if it's in cache), so by forcing this through a yxdb file you incur a speed penalty that just isn't needed.

c) Blocking penalty - because you can't write to a file that you're still reading from, you need to pepper this with Block Until Done tools, and you need to initialize the macro with a first write to the yxdb file outside the macro, which further hurts speed. Given the nuanced behaviour of Block Until Done, this also introduces user-complexity issues.

d) Self-containment - because you have to initialize these files outside the macro, the macro is no longer self-contained and portable (which breaks the principle of information hiding, a key pillar of good modular, decoupled software design).

 

The other way that users work around this is to serialize their entire second record set into a field which then gets tacked onto the iterating data set. This is HIGHLY wasteful because you then have to build a serialize and deserialize process for the second record set. It fixes the speed and blocking penalties above, but introduces computational overhead that generates no value, makes this even more complex for users, and is a further blocker to using macros.

 

Recommendation:

We could make this simpler by allowing users to create multiple pairs of macro inputs/outputs so that two, three, or n different data sets can be updated with every iteration.

 

Below is a screenshot demonstrating this, from an Advent of Code challenge. The details of the problem are not important; the issue at hand is that there are two record sets which both need to be updated on every iteration.

 

cc: @NicoleJohnson @Samanthaj_hughes @SteveA 

 

[Screenshot: iterative macro where two record sets must update each iteration]

 

I think the undo/redo capabilities in Alteryx could be greatly improved. Here is an idea that I think would be beneficial... 

 

I'd like to see which exact tools are affected by my undo/redo actions. An idea was suggested a couple of years ago to move your location on the canvas, but that was not added to the roadmap. Instead, is it possible to add the tool ID to the undo menu so that it is obvious which tool each line is describing?

 

This is the current debug menu that shows your previous actions:

[Screenshot: current undo history menu]

 

When a tool is created, the ID can be displayed in this menu, but this is not shown when a change is made to an existing tool. My suggestion is that the menu would say:

4. Change Sort (3) Properties

 

This same change should be made in the Edit dropdown menu.

[Screenshot: Edit dropdown menu]

 

Hi Maveryx Community,
 
Wouldn't it be great to introduce a new feature that allows running another analytic app on error? I sometimes encounter situations where a workflow has two standard inputs and the user has to pick the correct one, otherwise the process breaks. And I don't only mean two files with different formats; I mean files with the same extension but a different structure (for example, one file has a given column and the other does not, but both are standard inputs). What if we had the possibility to run another app if the first one errored? This way, we could include conditional logic with Test tools or Message tools and, based on the file format for example, error the workflow on purpose so that the second app gets summoned. If there is no error in the first part, it means the input was fine, and we don't need to call the second app that handles alternate inputs. I know situations like this can usually be solved by some workaround, but having this feature would make some workflows very clever.
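Until something native exists, the closest scriptable equivalent is chaining runs from the command line. A rough sketch, assuming the engine's exit code is nonzero on error; the app paths are hypothetical, and AlteryxEngineCmd.exe location depends on your install:

```python
import subprocess

# Hypothetical install path; AlteryxEngineCmd.exe ships with Designer.
ENGINE = r"C:\Program Files\Alteryx\bin\AlteryxEngineCmd.exe"

def run_app(app_path: str) -> int:
    """Run an analytic app and return the engine's exit code (0 = success)."""
    return subprocess.run([ENGINE, app_path]).returncode

# Run the app built for input layout A; if it errors (e.g. a Test tool fired
# because an expected column was missing), summon the app for layout B.
if run_app(r"C:\apps\HandleLayoutA.yxwz") != 0:
    run_app(r"C:\apps\HandleLayoutB.yxwz")
```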
 
Best Regards,
Szymon Czuszek

It would be nice to have a way to cache/uncache all inputs, or a selected group of tools. Caching and uncaching workflows with many input tools or slow data-read tools gets to be a bit cumbersome. It would be a nice QoL improvement :)

 

I looked around for something like this but didn't see a solution, so I thought I'd recommend it. Please let me know if something like this already exists natively in Designer Desktop.

Hi,

 

It would be great if users had the option to display the number of records that go in and out of the different tools on the canvas. This would allow users to very quickly see how many records are in their data sets, and to quickly analyze the results of specific actions such as joins and filters without needing to open each individual tool. Especially when performing joins, this can be very useful to see at a glance how many of your rows were successfully joined. I think this would give users a feeling of more control over their data and a better understanding of what is happening in Alteryx. Also, if you quickly want to review a complex workflow (especially when it is not your own), this could be a huge time-saver: simply run the workflow and follow the numbers to see what is happening and identify tools that might cause issues.

 

Love to hear what you think!

 

[Screenshot: record counts displayed on tool connections]

 

Currently, if one wants to compare different Alteryx files, or different versions of the same file, one needs to compare the XML. If you are not very familiar with navigating XML, this poses a risk, as one may not be able to identify all the changes.
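Today the best one can do is a raw text diff of the two XML files; a quick sketch of that (file names are placeholders), which is exactly the kind of output that is hard to interpret without XML familiarity:

```python
import difflib

def diff_workflows(path_a: str, path_b: str) -> str:
    """Unified diff of two workflow files (.yxmd files are plain XML)."""
    with open(path_a, encoding="utf-8") as fa, open(path_b, encoding="utf-8") as fb:
        a, b = fa.read().splitlines(), fb.read().splitlines()
    return "\n".join(difflib.unified_diff(a, b, path_a, path_b, lineterm=""))

print(diff_workflows("workflow_v1.yxmd", "workflow_v2.yxmd"))
```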

 

It would be a great addition to integrate Alteryx with Git, Subversion, CVS, Mercurial, and GitHub, as this tool is becoming the go-to tool for data processing for data analysts and even programmers.

 

This additional functionality to compare previous versions (diff), to merge Alteryx workflows when two people are working on the same workflow, and to easily see what changes have been committed by other developers and when, would make Alteryx a much more powerful tool and would open doors to other types of users, as essentially you can run anything through Alteryx.