The Product Idea boards have gotten an update to better integrate them with our Product team's idea cycle! However, this update does have a few unique behaviors; if you have any questions about them, check out our FAQ.

Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!
Submitting an Idea?

Be sure to review our Idea Submission Guidelines for more information!


Featured Ideas

Idea:

An Alteryx version for Mac OS X sounds like a nice idea, although there are options such as running Windows 7/8 through Boot Camp, or virtualisation software as mentioned in a community post here.

 

Rationale 1 (Competitors do it):

First of all, there is no need to neglect the customer segment using Macs.

 

  • RapidMiner Studio comes with a dedicated OS X version
  • KNIME has Mac OS X support
  • Weka has Mac OS X support as well
  • SPSS Modeler is Windows-only, but SPSS Statistics is Mac OS X compatible

 

It seems SAS was compatible in the last decade, but they dropped it. Now SAS is not OS X compatible, but with the "SAS OnDemand" version Mac users can still easily get hands-on experience.

 

Rationale 2:

The Mac Pro "beast" has 7.2 TFLOPS of computing power with the help of its dual ATI graphics cards.

It would be awesome to install Alteryx on one... 

 

I use a mouse which has a horizontal scroll wheel. This allows me to quickly traverse the columns of Excel documents, web pages, etc.

 

This interaction is not available in Alteryx Designer, and when working with wide data previews it would improve my UX drastically.

It would be nice to have a visual cue for a detour tool's configuration. This is especially the case when testing with several detour tools in a workflow - see the cleanse.yxmc screenshot below. I added an annotation to one of the detour tools as a possible solution.

 

Any of these options that would save the additional click would be appreciated.

  • Default annotation shows "Detour left" or "Detour right"
  • Detour outgoing wire highlighted (mentioned in Detour dashing)
  • Detour direction outgoing anchor that is NOT used is grayed out
  • Detour direction outgoing wire that is NOT used is grayed out
  • Detour tool has a left/right toggle
  • Detour tool changes color when set to detour right

Personally, I prefer that the outgoing anchor and outgoing wire not in use be grayed out. But even the default annotation stating the direction would be helpful.

 

Does anyone else have a preference or other ideas on the visual cues?

 

[Screenshot: Detour in cleanse tool.png]

When switching modes in the Python tool, it sometimes restarts and loses all the code:

[Animation: Python Tool Bug.gif]

DELETE FROM Source_Data
WHERE ID IN (SELECT ID FROM My_Temp_Table WHERE FLAG = 'Y');

 

.... 

 

Essentially, I want to update a DB table with either an update or with the deletion of rows. I can't delete all of the data. My workaround will be to create/insert into a table the keys that I want to delete, and then use an input/output tool with SQL that performs the delete. Any other suggestions are welcome, but a tool is best.
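For reference, a rough sketch of that workaround scripted in Python (the DSN, staging table, and key values are hypothetical; this just automates the SQL pattern above):

import pyodbc

conn = pyodbc.connect("DSN=MyWarehouse")  # assumed ODBC DSN
cur = conn.cursor()
# Stage the keys to delete (My_Temp_Table as in the SQL above).
cur.executemany("INSERT INTO My_Temp_Table (ID, FLAG) VALUES (?, 'Y')",
                [(101,), (102,)])  # made-up key values
# Then delete by matching against the staged keys.
cur.execute("DELETE FROM Source_Data "
            "WHERE ID IN (SELECT ID FROM My_Temp_Table WHERE FLAG = 'Y')")
conn.commit()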

 

Thanks,

Mark

Please enhance the Dynamic Select tool to allow dynamically changing the data type too. The use case can be by formula, or via an update in an action for a macro. If you've ever wanted to mass-change types or take precision action in a macro, you're forced to use a Multi-Field Formula. It would be rather helpful and appreciated.

 

Cheers,

 

Mark

We could really use a proper API input tool, rather than relying on curl queries, etc., that end up requiring many tools to parse the response into proper table form - even using the JSON tools!

I for one deal regularly with cloud APIs and pulling their data. We need an API input tool that can handle various auth methods, headers, params, body data, etc., and that will ALSO handle converting the typical output (JSON) into two outputs - meta info, and the table-compatible info.

 

I'm moving from direct SQL queries to using an API, and I literally have 15 tools and steps required to create the same table data that the single SQL query tool gave me. In one case, I have an 18-tool container that just handles getting a Bearer Token before I can pass that on to another container that actually does the curl query, and then 15 more tools are needed to massage the output JSON into proper table-style data. (Yes, I already use the JSON tools, but the data requires massaging before that tool can work right.)
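For illustration, here is roughly what those containers do today, sketched in Python (the endpoints, credentials, and field names are all hypothetical) - the requested tool would collapse this into one configurable step:

import requests
import pandas as pd

# Step 1: obtain a Bearer Token (currently a whole tool container).
token_resp = requests.post(
    "https://api.example.com/oauth/token",  # hypothetical endpoint
    data={"grant_type": "client_credentials",
          "client_id": "MY_ID", "client_secret": "MY_SECRET"},
)
token = token_resp.json()["access_token"]

# Step 2: call the data endpoint with headers and params.
resp = requests.get(
    "https://api.example.com/v1/records",  # hypothetical endpoint
    headers={"Authorization": "Bearer " + token},
    params={"page_size": 100},
)
payload = resp.json()

# Step 3: split the JSON into meta info and table-compatible rows.
meta = {k: v for k, v in payload.items() if k != "items"}
table = pd.json_normalize(payload.get("items", []))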

 

As an add-on, we should also be able to make aliases for the API connection so we don't have to put user/password information into the workflow at any point. Interfaces are nice, but not really useful in automated workflow runs.

 

There's got to be a better way! 

  • API SDK

It would be great if we could set the default size of the window presented to the user upon running an Analytic App. Better yet, the option to also have it be dynamically sized (auto-size to the number of input fields required).

Every time I add a tool container I default the Margin to "none." Could you make a default selection part of user settings? Thank you.

Please add support for Windows authentication to the Download tool. I know there's a workaround, but that involves using curl and the Run Command tool. The Run Command tool is awful and should be avoided at all costs, so please improve the Download tool so I can use internal APIs.
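For context, a minimal sketch of a curl-free workaround in the Python tool (assuming the third-party requests_ntlm package is installed; the internal URL and credentials are hypothetical):

import requests
from requests_ntlm import HttpNtlmAuth  # third-party package, assumed installed

# Hypothetical internal endpoint and credentials.
resp = requests.get(
    "https://intranet.example.com/api/data",
    auth=HttpNtlmAuth("DOMAIN\\user", "password"),  # Windows (NTLM) auth
)
print(resp.status_code)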

The idea behind the password masking: we have the Download tool in the Developer tab, which is used to download files from a given site. For example, take a mainframe. I have a scenario where the Alteryx workflow should connect to the mainframe FTP server and download the required file, which is used for downstream transformation. For the download, I get the username and password information from a database table (to reduce manual intervention and prevent errors).

While passing the username and password as parameters to the Download tool macro (a custom macro that accepts the username/password and filename dynamically), the Alteryx workflow will obviously show the username and password in the results window (as they are considered output data from the Input tool). I want that particular password field to be masked, so that whenever the workflow is shared with a user, the password field remains unexposed.

I know there's a way to mask a particular field using the "MD5 HASH" formula, but that helps mask anything related to the dataset, not a password (as it may be treated as a new string rather than a valid password). This feature would be really beneficial to developers who use the Download tool often. A new tool or a custom macro embedding this feature would be great for users who need masking functionality.

We need some way (unless one exists that I am unaware of, beyond disabling all but the container I want to run) to fire off containers in a particular order: run container "Step1", then run container "Step2", and so on.

When I run this command in the Python tool:

 

from ayx import Package

Package.installPackages(package='pandas',install_type='install --upgrade')

 

in Alteryx it only updates to 0.25, but the latest version is 1.1.2.

 

When I try to upgrade from the Python side, I get the following:

ERROR: ayx 1.0.54 has requirement pandas<0.25.0,>=0.24.2, but you'll have pandas 1.1.2 which is incompatible.

 

Can you please make sure we can upgrade to the latest version of pandas without any compatibility issues?

 

This is important because of json_normalize - a really useful function, available from pandas 1.0.3!
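To illustrate why it matters, a tiny example with made-up data (pd.json_normalize is the top-level function in newer pandas):

import pandas as pd

records = [{"id": 1, "address": {"city": "London", "postcode": "E1"}}]
flat = pd.json_normalize(records)
print(flat.columns.tolist())  # ['id', 'address.city', 'address.postcode']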

In the normal Output tool, when the file type is CSV, it is possible to select a custom delimiter. It would be great to have the same option in the Azure Data Lake output tool, so that, for example, you can write a pipe-delimited file to your ADLS storage account.
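For context, this is the output being asked for, sketched in pandas with made-up data (writing to ADLS itself is what the tool would handle):

import pandas as pd

df = pd.DataFrame({"id": [1, 2], "name": ["a", "b"]})
df.to_csv("output.csv", sep="|", index=False)  # sep sets the custom delimiter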

The R tool has AlteryxProgress() and AlteryxMessage() functions for generating notifications in the Results window (https://help.alteryx.com/current/designer/r-tool); however, the Python tool does not. Since I'm writing more Python code than R code, I'd like to have similar functionality available in the Python tool, e.g. an Alteryx.Progress() function and an Alteryx.Message() function.
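A sketch of how the proposed functions might be used (they do not exist today; the names come from the suggestion above, and the import mirrors the Python tool's existing ayx package):

from ayx import Alteryx  # the Python tool's existing package

work_items = ["a", "b", "c"]  # placeholder data
for i, item in enumerate(work_items, start=1):
    # ... process the item ...
    Alteryx.Progress(i / len(work_items) * 100)            # proposed: progress bar
    Alteryx.Message("Finished %d of %d" % (i, len(work_items)))  # proposed: Results message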

 

Jonathan

 

Hello Dev Gurus - 

 

The Message tool is nice, but learning anything about what is happening is problematic because the messages you write to understand your workflow are lost in a sea of other messages. This is especially problematic when you are trying to understand what is happening within a macro and you enable 'Show All Macro Messages' in the runtime options.

 

That being said, what would really help is for messages created with the Message tool to be tagged as user-created messages. Then, at message evaluation time, you get all errors / all conversion warnings / all warnings / all user-defined messages. In this way, when you write an iterative macro and are giving yourself the state of the data on a run-by-run basis, you can just go to a panel that shows you only your messages, and not the entire syslog, which is like drinking out of a fire hose.

 

Thank you for attending my TED talk regarding Message tool improvements.

 

 

How about a quick method of disabling a container?

 

Current state - click on the container, pan the mouse all the way over to the tiny checkbox target in the Configuration pane, and click Disable.

Future state - a little icon by the rollup icon that can be clicked to disable/enable, differentiated perhaps by a color change of the minimized pane?

 

I know what you're thinking: "talk about lazy, he's whining about moving the mouse (which his hand was already on) 2 cm along his desktop and clicking"... but still, what an easy usability win and one less click for a task I find myself repeating frequently.

I would like to suggest adding a configuration to the Block Until Done tool which allows the user to prioritize the release of a data stream through multiple Block Until Done tools in the same module.

 

In the example below, the objective is to update multiple sheets in a single Excel workbook. Each sheet is a different data stream that cannot be unioned together, which makes the solution of filtering a single stream into multiple Block Until Done tools impossible.

 

What I would like is a configuration where Block Until Done #2 will not allow the data stream to pass through until Block Until Done #1 is complete, then Block Until Done #3 will not pass the data stream through until Block Until Done #2 is complete, and so forth through all the Block Until Done instances.

 

[Screenshot: ScubaGeek_0-1654554889263.png]

 

With the new Intelligence Suite there is much higher use of blob files, and we would like to be able to input them as a regular input instead of having to use non-standard tools like Image, Report Text, or a combination of Directory/Blob or Input/Download to pull in images, etc. I would like to see the standard Input tool capable of bringing in blob files as well.

[Screenshots: Blob Input, Image Input, Text Input]

When working with APIs, it is quite common to use the JSON Parse tool to parse out the download data returned from the API. However, the JSON data may be missing key:value pairs that are not in the response. This causes issues with downstream tools where there are missing fields. The current workaround is to use either the Crew macro Ensure Fields, or to union on a Text Input file to force the missing fields downstream.

 

The issue with this is:

1) Users may not be aware of the requirement to ensure fields are present

2) You need to know the names of all the fields to include in the ensure fields macro

 

Therefore, the feature request is to add an option to the JSON Parse tool to accept the model schema as an input.

 

For example, with the UK Companies House API, to get a list of all the directors at a company the model schema is:

 

 

{
    "active_count": "integer",
    "etag": "string",
    "items": [
        {
            "address": {
                "address_line_1": "string",
                "address_line_2": "string",
                "care_of": "string",
                "country": "string",
                "locality": "string",
                "po_box": "string",
                "postal_code": "string",
                "premises": "string",
                "region": "string"
            },
            "appointed_on": "date",
            "country_of_residence": "string",
            "date_of_birth": {
                "day": "integer",
                "month": "integer",
                "year": "integer"
            },
            "former_names": [
                {
                    "forenames": "string",
                    "surname": "string"
                }
            ],
            "identification": {
                "identification_type": "string",
                "legal_authority": "string",
                "legal_form": "string",
                "place_registered": "string",
                "registration_number": "string"
            },
            "links": {
                "officer": {
                    "appointments": "string"
                },
                "self": "string"
            },
            "name": "string",
            "nationality": "string",
            "occupation": "string",
            "officer_role": "string",
            "resigned_on": "date"
        }
    ],
    "items_per_page": "integer",
    "kind": "string",
    "links": {
        "self": "string"
    },
    "resigned_count": "integer",
    "start_index": "integer",
    "total_results": "integer"
}

 

 

However, fields such as "resigned_on" are not always present in the data if there are no directors who have resigned. Therefore, to avoid a user missing the requirement to add unidentified fields, an optional input which took the model schema and created the missing fields would greatly improve the API development process and minimise future errors being encountered once a workflow is in production.
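A minimal sketch of the intended behavior (made-up record; the schema field names are a subset of the model above) - parse the items, then reindex against the schema so absent keys still come through as fields:

import pandas as pd

schema_fields = ["name", "appointed_on", "resigned_on"]  # taken from the model schema
records = [{"name": "A Director", "appointed_on": "2020-01-01"}]  # "resigned_on" missing

df = pd.json_normalize(records)
df = df.reindex(columns=schema_fields)  # "resigned_on" now exists as a null column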
