Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

Every time I add a Tool Container, I set the Margin to "None." Could you make that default selection part of User Settings? Thank you.

Please add support for Windows authentication to the Download tool. I know there's a workaround, but it involves using curl and the Run Command tool. The Run Command tool is awful and should be avoided at all costs, so please improve the Download tool so I can use internal APIs.
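
For reference, a minimal sketch of the kind of call being asked for, done in a Python tool rather than curl plus Run Command; it assumes the third-party requests_ntlm package is installed, and the URL and credentials are placeholders:

import requests
from requests_ntlm import HttpNtlmAuth

# Hypothetical internal endpoint requiring Windows (NTLM) authentication,
# which the Download tool cannot handle today.
url = "https://intranet.example.com/api/report"
resp = requests.get(url, auth=HttpNtlmAuth("DOMAIN\\user", "password"))
resp.raise_for_status()
print(resp.json())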

Please enhance the Dynamic Select tool to also allow dynamically changing the data type. The use case could be via formula, or via an Action tool update in a macro. If you've ever wanted to mass-change types or adjust precision in a macro, you're currently forced to use a Multi-Field Formula tool. It would be rather helpful and appreciated.

 

Cheers,

 

Mark

We need some way (unless one exists that I'm unaware of, beyond disabling all but the container I want to run) to fire off containers in a particular order: run container "Step1", then run container "Step2", and so on.

The idea behind password masking: we have the Download tool on the Developer tab, which is used to download files from a given site. Take a mainframe, for example. I have a scenario where the Alteryx workflow should connect to the mainframe FTP server and download the required file, which is then used for downstream transformation. For the download, I get the username and password from a database table (to reduce manual intervention and prevent errors). While passing the username and password as parameters to the Download tool macro (a custom macro that accepts the username/password and filename dynamically), the workflow will obviously show the username and password in the Results window (since they are considered output data from the Input tool).

Now I want that particular password field to be masked, so that whenever the workflow is shared with a user, the password remains unexposed. I know there's a way to mask a field using the "MD5 HASH" formula, but that masks data in the dataset generally, not a password (the hash is treated as a new string, not a valid password). This feature would be really beneficial to developers who use the Download tool often. A new tool, or a custom macro embedding this feature, would be great for users who need masking functionality.

How about a quick method of disabling a container?

 

Current state: click on the container, pan the mouse all the way over to the tiny checkbox target in the Configuration pane, and click Disable.

Future state: a little icon next to the roll-up icon that can be clicked to disable/enable, differentiated perhaps by a color change of the minimized pane.

 

I know what you're thinking: "talk about lazy, he's whining about moving the mouse (which his hand was already on) 2 cm along his desktop and clicking"... but still, what an easy usability win, and one less click for a task I find myself repeating frequently.

A cache tool would allow a user to temporarily store a snapshot of inline data from a previous run of the module.

Imagine a Browse tool that was inline, as opposed to a terminus tool (it would have both an input and an output). Now allow that Browse tool to persist its data after a run of the module. When an option on that tool is activated, it would block all of the dependent tools upstream of it and instead send its cached data downstream.

The reason I think this would be a useful tool is that I often come to the end of creating a module when I'm working on the Reporting tools. I run multiple times to see the changes I've made. When the module has a lot of incoming data and complex data transformations, it can take a long time just to get to the point where the data gets to the reporting tools. This cache tool would eliminate that wait.
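
A rough sketch of the idea in Python terms, purely to illustrate the intended behavior; the snapshot file and the wiring are hypothetical, not a real Designer API:

import os
import pandas as pd

CACHE = "upstream_snapshot.pkl"  # hypothetical on-disk snapshot

def cache_tool(run_upstream, use_cache):
    # When the cache option is on and a snapshot exists, skip the
    # expensive upstream tools and replay the stored data instead.
    if use_cache and os.path.exists(CACHE):
        return pd.read_pickle(CACHE)
    data = run_upstream()   # run the upstream pipeline for real
    data.to_pickle(CACHE)   # persist the snapshot for later runs
    return data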

When I run this command in a Python tool:

 

from ayx import Package

Package.installPackages(package='pandas',install_type='install --upgrade')

 

in Alteryx it only updates to 0.25, but the latest version is 1.1.2.

 

When I try to upgrade from the Python side, I get the following:

ERROR: ayx 1.0.54 has requirement pandas<0.25.0,>=0.24.2, but you'll have pandas 1.1.2 which is incompatible.

 

Can you please make sure we can upgrade to the latest version of pandas without any compatibility issues?

 

This is important because of json_normalize, a really useful function available since pandas 1.0.3!
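
For context, a minimal example of what json_normalize does (flattening nested JSON into columns); the sample record is made up:

import pandas as pd

record = [{"name": "Jane", "address": {"city": "London", "postcode": "E1"}}]
# Produces columns: name, address.city, address.postcode
df = pd.json_normalize(record)
print(df)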

The R tool has AlteryxProgress() and AlteryxMessage() functions for generating notifications in the Results window (https://help.alteryx.com/current/designer/r-tool); however, the Python tool does not. Since I'm writing more Python code than R code, I'd like similar functionality in the Python tool, e.g. an Alteryx.Progress() function and an Alteryx.Message() function.
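
A sketch of how the proposed functions might be used inside the Python tool; these signatures are hypothetical, modeled on the R equivalents, and do not exist in the ayx package today:

from ayx import Alteryx

# Hypothetical: report percent complete to the Results window,
# mirroring R's AlteryxProgress().
Alteryx.Progress(50)

# Hypothetical: emit an info-level message, mirroring AlteryxMessage().
Alteryx.Message("Scored batch 3 of 6")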

 

Jonathan

 

I would like to suggest adding a configuration option to the Block Until Done tool that allows the user to prioritize the release of a data stream through multiple Block Until Done tools in the same module.

 

In the example below, the objective is to update multiple sheets in a single Excel workbook. Each sheet is a different data stream that cannot be unioned together, which makes the usual solution of filtering a single stream into multiple Block Until Done tools impossible.

 

What I would like is a configuration where Block Until Done #2 will not allow its data stream to pass through until Block Until Done #1 is complete, then Block Until Done #3 will not pass its stream until #2 is complete, and so forth through all the Block Until Done instances.

 

[Screenshot: example workflow with multiple Block Until Done tools writing to one Excel workbook]

 

Hello Dev Gurus - 

 

The Message tool is nice, but learning anything about what is actually happening is problematic, because the messages you write to understand your workflow are lost in a sea of other messages. This is especially problematic when you are trying to understand what is happening within a macro and you enable 'Show All Macro Messages' in the runtime options.

 

That being said, what would really help is for messages created with the Message tool to be tagged as user-created messages. Then, at message evaluation time, you could view all errors / all conversion warnings / all warnings / all user-defined messages separately. That way, when you write an iterative macro and are logging the state of the data on a run-by-run basis, you can just go to a panel that shows only your messages, rather than the entire log, which is like drinking from a fire hose.

 

Thank you for attending my TED talk regarding Message tool improvements.

 

 

Hello,

 

I would like to suggest the ability to manage our virtual environment for Python modules within Alteryx. Some current workflows I am building would be far easier and more secure if I had access to the virtual environment that the Python code would run in.

 

Uses for modifying the virtual environment:

     1) Setting environment variables in order to hide API Keys/DB credentials/etc.

     2) Installing private GitHub repository packages into the environment.

     3) Creating repeatable and easily maintainable ways to manage dependencies. 

 

It would be important that these virtual environments have a way to persist onto Alteryx Gallery, so that workflows behave identically on local machines and on the server. This could potentially be done through a requirements.txt file or some other environment initializer, but I'll leave the implementation to the experts. My preference would be for each workflow to contain its own virtual environment (as is best practice when developing Python scripts).
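
A minimal sketch of the per-workflow idea using only the standard library, assuming a requirements.txt shipped alongside the workflow; the folder paths are hypothetical:

import subprocess
import venv
from pathlib import Path

workflow_dir = Path("C:/Workflows/MyWorkflow")  # hypothetical workflow folder
env_dir = workflow_dir / ".venv"

# Create an isolated virtual environment for this workflow, with pip available.
venv.EnvBuilder(with_pip=True).create(env_dir)

# Install the workflow's pinned dependencies into that environment
# (Scripts/pip.exe is the Windows layout).
pip = env_dir / "Scripts" / "pip.exe"
subprocess.run([str(pip), "install", "-r", str(workflow_dir / "requirements.txt")], check=True)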

 

Thank you,

In the normal Output tool, when the file type is CSV, it is possible to select a custom delimiter. It would be great to have the same option in the Azure Data Lake output tool, so that, for example, you can write a pipe-delimited file to your ADLS storage account.
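
For illustration, the equivalent behavior in pandas, which is what the requested option would do for the ADLS file; the frame is a made-up stand-in:

import pandas as pd

df = pd.DataFrame({"id": [1, 2], "name": ["a", "b"]})
# Write a pipe-delimited file instead of the default comma.
df.to_csv("output.csv", sep="|", index=False)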

In the dynamic input tool,

Where you “Read a List of Data Sources”, there should be a radio button below the “Action” field, to   

 

“INCLUDE FIELD OF DATA SOURCES”,

 

Then you’d have an output field with the isolated name from which the data was sourced. You wouldn't be required to "include full file path" then parse out the sheet the data came from. 

With the new Intelligence Suite there is much heavier use of blob files, and we would like to be able to read them with a regular input instead of having to use non-standard tools like Image, Report Text, or a combination of Directory/Blob Input or Input/Download to pull in images, etc. I would like to see the standard Input tool capable of bringing in blob files as well.

[Tool icons: Blob Input, Image Input, Text Input]

When switching modes, the Python tool sometimes reboots and loses all the code:

[Animated screenshot: Python Tool Bug.gif]

Within the Dynamic rename tool there is an option to ignore missing fields. 

 

It would be great if this was a bit more "dynamic", for example if you could also choose to ignore duplicate field names.

 

Otherwise you are left with warnings in a perfectly functioning workflow, which some users may wish to suppress.

When working with APIs it is quite common to use the JSON Parse tool to parse the downloaded data returned from the API. However, the JSON data may be missing key:value pairs that were not included in the response. This causes issues with downstream tools that expect those fields. The current workaround is to use either the CReW macro Ensure Fields, or a union with a Text Input tool, to force the missing fields downstream.

 

The issue with this is:

1) Users may not be aware of the requirement to ensure fields are present

2) You need to know the names of all the fields to include in the ensure fields macro

 

Therefore the feature request is to add an option to the JSON Parse tool to accept the model schema as an input.

 

For example, with the UK Companies House API, to get a list of all the directors at a company, the model schema is:

 

 

{
    "active_count": "integer",
    "etag": "string",
    "items": [
        {
            "address": {
                "address_line_1": "string",
                "address_line_2": "string",
                "care_of": "string",
                "country": "string",
                "locality": "string",
                "po_box": "string",
                "postal_code": "string",
                "premises": "string",
                "region": "string"
            },
            "appointed_on": "date",
            "country_of_residence": "string",
            "date_of_birth": {
                "day": "integer",
                "month": "integer",
                "year": "integer"
            },
            "former_names": [
                {
                    "forenames": "string",
                    "surname": "string"
                }
            ],
            "identification": {
                "identification_type": "string",
                "legal_authority": "string",
                "legal_form": "string",
                "place_registered": "string",
                "registration_number": "string"
            },
            "links": {
                "officer": {
                    "appointments": "string"
                },
                "self": "string"
            },
            "name": "string",
            "nationality": "string",
            "occupation": "string",
            "officer_role": "string",
            "resigned_on": "date"
        }
    ],
    "items_per_page": "integer",
    "kind": "string",
    "links": {
        "self": "string"
    },
    "resigned_count": "integer",
    "start_index": "integer",
    "total_results": "integer"
}

 

 

But fields such as "resigned_on" are not always present in the data, for example if no directors have resigned. An optional input that accepted the model schema and created the missing fields would save users from having to discover and add unidentified fields themselves, greatly improving the API development process and minimising future errors once a workflow is in production.
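
To illustrate the requested behavior, a rough Python sketch of "ensure fields from a schema" applied to parsed records; the field list is an abridged, flattened version of the model above, and the sample record is made up:

import pandas as pd

# Flattened field names taken from the model schema (abridged).
schema_fields = ["active_count", "etag", "items.name", "items.appointed_on",
                 "items.resigned_on", "total_results"]

# A parsed API response with no "items.resigned_on" key present.
records = [{"active_count": 2, "etag": "abc", "items.name": "SMITH, Jane",
            "items.appointed_on": "2020-01-01"}]

df = pd.DataFrame(records)
# Add any schema fields missing from the response as null columns,
# so downstream tools always see a stable set of fields.
df = df.reindex(columns=schema_fields)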

When we try to call an external website from the Alteryx Designer Download tool, our company proxy server fails the authentication because Alteryx uses basic login/password authentication. This has happened with multiple applications that need to interact with external partners. We would like to request an enhancement that enables Alteryx to authenticate using Kerberos or NTLM.

I love the dynamic rename tool because quite often my headers are in the first row of data in a text file (or sometimes, Excel!).

 

However, whenever I open a workflow, I have to run it first in order to make the rest of the workflow aware of the field names I've mapped in the Dynamic Rename tool, and to clear out missing fields from downstream tools. When a workflow takes a while to run, this is a cumbersome step.

 

Alteryx Designer should be aware of the field names downstream from the Dynamic Rename tool, and make them available for use downstream as soon as they are mapped (or when the workflow is initially opened, without it having been run first).
