
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

Idea:

An Alteryx version for Mac OS X sounds like a nice idea... although there are already options such as running Windows 7/8 via Boot Camp, or using virtualisation software as mentioned in a community post here.

 

Rationale 1 (Competitors do it):

First of all, there is no need to neglect the customer segment using Macs.

 

  • RapidMiner Studio comes with a dedicated OS X version
  • KNIME has Mac OS X support
  • Weka has Mac OS X support as well
  • SPSS Modeler is Windows-only, but SPSS Statistics is Mac OS X compatible

 

It seems SAS was compatible in the last decade, but they dropped it; SAS is no longer OS X compatible. Still, with the "SAS OnDemand" version, Mac users can easily get hands-on experience.

 

Rationale 2:

The Mac Pro Beast has 7.2 TFlops of computing power with the help of dual ATI graphics cards.

It would be awesome to install Alteryx on one... 

 

Please enhance the Dynamic Select tool to allow dynamically changing the data type too. The use case could be driven by a formula, or by an update from an Action tool in a macro. If you've ever wanted to mass-change data types or adjust precision in a macro, you're currently forced to use a Multi-Field Formula tool. This would be rather helpful and appreciated.

 

Cheers,

 

Mark

DELETE FROM Source_Data
WHERE ID IN (SELECT ID FROM My_Temp_Table WHERE FLAG = 'Y');

 

.... 

 

Essentially, I want to update a DB table with either an update or a deletion of rows. I can't delete all of the data. My workaround will be to create/insert the keys I want to delete into a table, then use an Input/Output tool with SQL that performs the delete. Any other suggestions are welcome, but a tool is best.
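
For what it's worth, a rough sketch of that workaround from the Python tool, assuming pyodbc is available (the DSN, key values, and FLAG convention below are hypothetical): stage the keys, then run the delete against them.

import pyodbc

# Hypothetical DSN; any ODBC connection to the target database would do.
conn = pyodbc.connect("DSN=MyWarehouse")
cur = conn.cursor()

# Stage the keys to delete (illustrative values only).
keys_to_delete = [(101,), (205,), (309,)]
cur.executemany("INSERT INTO My_Temp_Table (ID, FLAG) VALUES (?, 'Y')", keys_to_delete)

# Perform the delete against the staged keys.
cur.execute(
    "DELETE FROM Source_Data "
    "WHERE ID IN (SELECT ID FROM My_Temp_Table WHERE FLAG = 'Y')"
)
conn.commit()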

 

Thanks,

Mark

Every time I add a tool container, I set the Margin to "None." Could you make a default selection part of the user settings? Thank you.

Please add support for Windows authentication to the Download tool. I know there's a workaround, but it involves using curl and the Run Command tool. The Run Command tool is awful and should be avoided at all costs, so please improve the Download tool so I can use internal APIs.
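
In the meantime, a minimal sketch of doing this from the Python tool instead of curl, assuming the requests and requests_ntlm packages are installed (the endpoint and credentials below are hypothetical):

import requests
from requests_ntlm import HttpNtlmAuth

# Hypothetical internal API; NTLM credentials given as DOMAIN\user.
resp = requests.get(
    "https://intranet.example.com/api/report",
    auth=HttpNtlmAuth("DOMAIN\\jsmith", "secret"),
)
resp.raise_for_status()
print(resp.status_code)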

It would be great if we could set the default size of the window presented to the user upon running an Analytic App. Better yet, add the option to have it sized dynamically (auto-sized to the number of input fields required).

In the normal Output tool, when the file type is CSV, it is possible to select a custom delimiter. It would be great to have the same option in the Azure Data Lake output tool, so that you could, for example, write a pipe-delimited file to your ADLS storage account.
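
For comparison, this is all the behaviour amounts to on the regular CSV path (a minimal sketch writing a local file; ADLS itself would need its own SDK):

import pandas as pd

df = pd.DataFrame({"id": [1, 2], "name": ["alpha", "beta"]})

# The equivalent of the Output tool's custom delimiter option: sep="|".
df.to_csv("output.psv", sep="|", index=False)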

When I run this command in a Python tool:

 

from ayx import Package

Package.installPackages(package='pandas',install_type='install --upgrade')

 

Alteryx only updates pandas to 0.25, but the latest version is 1.1.2.

 

When I try to upgrade from the Python side, I get the following:

ERROR: ayx 1.0.54 has requirement pandas<0.25.0,>=0.24.2, but you'll have pandas 1.1.2 which is incompatible.

 

Can you please make sure we can upgrade to the latest version of pandas without compatibility issues?

 

This is important because of json_normalize, a really useful function available from pandas 1.0.3!
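
For context, a minimal example of what json_normalize gives you (it is exposed at the top level in newer pandas):

import pandas as pd

records = [{"id": 1, "address": {"city": "London", "postcode": "EC1A 4EX"}}]

# Flattens nested JSON into dot-separated columns in one call.
flat = pd.json_normalize(records)
print(list(flat.columns))  # ['id', 'address.city', 'address.postcode']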

We need some way (unless one exists that I am unaware of, beyond disabling all but the container I want to run) to fire off containers in a particular order: run container "Step1", then run container "Step2", and so on.

I would like to suggest adding a configuration option to the Block Until Done tool that allows the user to prioritize the release of a data stream through multiple Block Until Done tools in the same module.

 

In the example below, the objective is to update multiple sheets in a single Excel workbook. Each sheet is a different data stream that cannot be unioned together, which makes the usual solution of filtering a single stream into multiple Block Until Done tools impossible.

 

What I would like is a configuration where Block Until Done #2 will not allow the data stream to pass through until Block Until Done #1 is complete, then Block Until Done #3 will not pass the data stream through until Block Until Done #2 is complete, and so forth through all the Block Until Done instances.

 

[Image: ScubaGeek_0-1654554889263.png]

 

Hello Dev Gurus - 

 

The Message tool is nice, but anything you want to learn about what is happening is problematic, because the messages you write to understand your workflow are lost in a sea of other messages. This is especially problematic when you are trying to understand what is happening within a macro and have enabled 'Show All Macro Messages' in the runtime options.

 

That being said, what would really help is for messages created with the Message tool to be tagged as user-created messages. Then, at message evaluation time, you could filter to all errors / all conversion warnings / all warnings / all user-defined messages. That way, when you write an iterative macro and are logging the state of the data on a run-by-run basis, you can just go to a panel that shows only your messages, rather than the entire syslog, which is like drinking from a fire hose.

 

Thank you for attending my TED talk regarding Message tool improvements.

 

 

When working with APIs it is quite common to use the JSON Parse tool to parse the downloaded data returned from the API. However, the JSON data may be missing key:value pairs that were not in the response. This causes issues with downstream tools when fields are missing. The current workaround is to use either the CReW macro Ensure Fields, or to union on a Text Input file to force the missing fields downstream.

 

The issues with this are:

1) Users may not be aware of the requirement to ensure fields are present

2) You need to know the names of all the fields to include in the Ensure Fields macro

 

Therefore the feature request is for an option on the JSON Parse tool to accept the model schema as an input.

 

For example, with the UK Companies House API, the model schema for getting a list of all the directors at a company is:

 

 

{
    "active_count": "integer",
    "etag": "string",
    "items": [
        {
            "address": {
                "address_line_1": "string",
                "address_line_2": "string",
                "care_of": "string",
                "country": "string",
                "locality": "string",
                "po_box": "string",
                "postal_code": "string",
                "premises": "string",
                "region": "string"
            },
            "appointed_on": "date",
            "country_of_residence": "string",
            "date_of_birth": {
                "day": "integer",
                "month": "integer",
                "year": "integer"
            },
            "former_names": [
                {
                    "forenames": "string",
                    "surname": "string"
                }
            ],
            "identification": {
                "identification_type": "string",
                "legal_authority": "string",
                "legal_form": "string",
                "place_registered": "string",
                "registration_number": "string"
            },
            "links": {
                "officer": {
                    "appointments": "string"
                },
                "self": "string"
            },
            "name": "string",
            "nationality": "string",
            "occupation": "string",
            "officer_role": "string",
            "resigned_on": "date"
        }
    ],
    "items_per_page": "integer",
    "kind": "string",
    "links": {
        "self": "string"
    },
    "resigned_count": "integer",
    "start_index": "integer",
    "total_results": "integer"
}

 

 

But fields such as "resigned_on" are not always present in the data, for instance if no directors have resigned. An optional input that took the model schema and created any missing fields would save users from having to discover and add those fields themselves, greatly improving the API development process and minimising future errors once a workflow is in production.
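
To illustrate the intent, here is a rough sketch of what the tool could do with that schema input. Python is used purely for illustration; flatten_schema and the dot-separated paths are assumptions that mirror how flattened JSON column names typically look, not the tool's actual internals.

import pandas as pd

def flatten_schema(schema, prefix=""):
    """Collect dot-separated field paths from a model schema dict."""
    paths = []
    for key, value in schema.items():
        path = prefix + key
        if isinstance(value, dict):
            paths.extend(flatten_schema(value, path + "."))
        elif isinstance(value, list) and value and isinstance(value[0], dict):
            # Schema lists hold one template element describing each item.
            paths.extend(flatten_schema(value[0], path + "."))
        else:
            paths.append(path)
    return paths

# parsed = pd.json_normalize(api_items)        # fields actually returned
# expected = flatten_schema(model_schema)      # fields the schema promises
# parsed = parsed.reindex(columns=expected)    # missing fields appear as nulls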

How about a quick method of disabling a container?

 

Current state: click on the container, pan the mouse all the way over to the tiny checkbox target in the Configuration pane, and click Disable.

Future state: a little icon by the roll-up icon that can be clicked to disable/enable, differentiated perhaps by a color change of the minimized pane?

 

I know what you're thinking: "talk about lazy, he's whining about moving the mouse (which his hand was already on) 2 cm along his desktop and clicking"... but still, what an easy usability win, and one less click for a task I find myself repeating frequently.

With the new Intelligence Suite there is much higher use of blob files, and we would like to be able to input them with a regular Input tool instead of having to use non-standard tools like Image, Report Text, or a combination of Directory/Blob or Input/Download to pull in images, etc. I would like to see the standard Input tool capable of bringing in blob files as well.

[Images: Blob Input, Image Input, and Text Input tools]

The R tool has AlteryxProgress() and AlteryxMessage() functions for generating notifications in the Results window (https://help.alteryx.com/current/designer/r-tool); however, the Python tool does not. Since I'm writing more Python code than R code, I'd like similar functionality in the Python tool, e.g. an Alteryx.Progress() function and an Alteryx.Message() function.
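
A sketch of how the proposed calls might look. These functions do not exist today; the names simply mirror the R tool's equivalents:

from ayx import Alteryx

# Hypothetical API, mirroring AlteryxProgress() / AlteryxMessage() in R:
# Alteryx.Progress(50)                       # report the tool as 50% complete
# Alteryx.Message("Halfway done", "INFO")    # surface a note in the Results window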

 

Jonathan

 

A cache tool would allow a user to temporarily store a snapshot of inline data from a previous run of the module.

Imagine a Browse tool that was inline rather than a terminus tool (having both an input and an output). Now allow that Browse tool to persist its data after a run of the module. When an option on that tool is activated, it blocks all of the dependent tools upstream of it and instead sends its cached data downstream.

The reason I think this would be a useful tool is that I often reach the end of building a module when I'm working on the Reporting tools. I run the module multiple times to see the changes I've made. When the module has a lot of incoming data and complex data transformations, it can take a long time just for the data to reach the Reporting tools. This cache tool would eliminate that wait.

When we try to call an external web site from the Alteryx Designer Download tool, our company proxy server fails the authentication because Alteryx uses basic login/password authentication. This has happened with multiple applications that need to interact with external partners. We would like to request an enhancement enabling Alteryx to authenticate using Kerberos or NTLM.

Within the Dynamic Rename tool there is an option to ignore missing fields.

 

It would be great if this option were a bit more "dynamic", for example allowing you to ignore duplicate field names as well.

 

Otherwise you are left with warnings in a perfectly functioning workflow, which some users may wish to suppress.

Hi Team,  

 

I am starting to use Alteryx as our platform for running our daily data load process. A bunch of the data is sent via SFTP, and it would be a lot simpler if the following features were directly available instead of relying on the Run Command tool and/or scripting.

 

1. Ability to use a wildcard in the download instead of specifically defining the filenames. (This can be done indirectly, but requires multiple tools: two Downloads, a parse, etc.)

 

2. Ability to delete files after they have been downloaded successfully (see the sketch below).

 

 - I have seen a couple of posts in the community trying to do this, but I haven't found one that worked for me; again, those relied on Run Command, scripts, and other utility programs.

 

I'd appreciate it if you could look into this request!
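
Until then, a rough sketch of both requests from the Python tool using paramiko (an assumption; the host, credentials, paths, and filename pattern below are all hypothetical):

import fnmatch
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("sftp.example.com", username="loader", password="secret")
sftp = client.open_sftp()

for name in sftp.listdir("/inbound"):
    if fnmatch.fnmatch(name, "daily_*.csv"):          # 1. wildcard match
        sftp.get("/inbound/" + name, "C:/data/" + name)
        sftp.remove("/inbound/" + name)               # 2. delete after a successful download

sftp.close()
client.close()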

The idea behind password masking: we have the Download tool in the Developer tab, which is used to download files from a given site. Take a mainframe, for example. I have a scenario where the Alteryx workflow should connect to the mainframe FTP server and download a file used for downstream transformation. For the download, I get the username and password from a database table (to reduce manual intervention and prevent errors).

While passing the username and password as parameters to the Download tool macro (a custom macro that accepts the username/password and filename dynamically), the Alteryx workflow will obviously show the username and password in the Results window (as they are considered output data from the Input tool). I want that password field to be masked, so that whenever the workflow is shared with a user, the password remains unexposed.

I know there's a way to mask a particular field using an "MD5 hash" formula, but that masks data in a dataset rather than protecting a password (the hashed value is a new string, not the valid password). This feature would be really beneficial to developers who use the Download tool often. A new tool or a custom macro embedding this feature would be great for users who need masking functionality.
