The Product Idea boards have gotten an update to better integrate them within our Product team's idea cycle! However, this update does have a few unique behaviors; if you have any questions about them, check out our FAQ.

Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!
Submitting an Idea?

Be sure to review our Idea Submission Guidelines for more information!

Submission Guidelines

Featured Ideas

I have a problem when transferring records between different O365 SharePoint sites.  It seems that Alteryx cannot maintain 2 separate connections at the same time.  I can transfer fine if I read from one site to a temp file and then, in another workflow, read from the file and write to the second site.

 

I can work around the problem using Block Until Done, but there are some situations where I need to be able to compare between lists in 2 different sites and write back to one or both depending on the results.  It would be much more convenient to have multiple connections open simultaneously.  I'm aware that Alteryx uses the SharePoint API to move information around, and this API does allow multiple connections.  I'm not familiar with the internals of how Alteryx accesses the API (perhaps the OAuth token is shared throughout the workflow process), but this should be possible.
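
To illustrate that the API itself has no problem with this, here is a rough sketch using the Microsoft Graph SharePoint endpoints. The site/list IDs, tokens, and field names below are just placeholders, and I'm only guessing at how Designer would need to do it internally:

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN_A = "<bearer token with access to site A>"
TOKEN_B = "<bearer token with access to site B>"
SITE_A, LIST_A = "<site-a-id>", "<list-a-id>"
SITE_B, LIST_B = "<site-b-id>", "<list-b-id>"

def graph_session(token):
    # each session is its own authenticated connection
    s = requests.Session()
    s.headers.update({"Authorization": f"Bearer {token}", "Accept": "application/json"})
    return s

site_a = graph_session(TOKEN_A)  # connection to site A
site_b = graph_session(TOKEN_B)  # connection to site B, open at the same time

# read list items from site A while the connection to site B stays available
items = site_a.get(f"{GRAPH}/sites/{SITE_A}/lists/{LIST_A}/items?expand=fields").json()

# compare / transform as needed, then write straight back to site B
for item in items.get("value", []):
    site_b.post(
        f"{GRAPH}/sites/{SITE_B}/lists/{LIST_B}/items",
        json={"fields": {"Title": item["fields"]["Title"]}},
    )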

 

Thanks for considering this

 

Dan

A lot of popular machine learning systems use a computer's GPU to speed up some of the math to a huge degree. The header of this article on Medium shows a 15x difference between a high-end CPU and a high-end GPU. It could also create an improvement in the spatial tools. Perhaps Alteryx should add this functionality in order to speed up these tools, which I can imagine are currently some of the slowest.
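
As a rough illustration of the kind of gap involved (the exact ratio obviously depends on the hardware), a quick matrix-multiply timing sketch in PyTorch looks something like this:

import time
import torch  # third-party: pip install torch

def time_matmul(device, n=4096, reps=10):
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    torch.matmul(a, b)  # warm-up
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(reps):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the GPU to finish before stopping the clock
    return (time.perf_counter() - start) / reps

print(f"CPU: {time_matmul('cpu'):.4f} s per multiply")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.4f} s per multiply")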

When I add a data connection to my canvas - it's only added to the Data Connections window under certain circumstances (e.g. when I use an alias, or the SQL connection wizard) rather than showing ALL data connections.

 

Given the importance of data connections for Alteryx flows - it would be better if ALL data connections were grouped together under a Data Connection Manager, which was as visible as the Results window rather than buried deep in the menu system - and you could also then use this spot to change, share, alias, etc.

[screenshot: 2018-06-01_6-22-37.png]

 

In Microsoft SSIS there's a useful example of how this could be done - where the connections are very visibly a collection of assets that can be seen and updated centrally in one place.    So if you have 5 input tools which ALL point to the same database - you only need to update the connection on your designer in one place - irrespective of whether this is a shared connection or not.

 

[screenshot: 2018-06-01_6-27-45.png]

One of the biggest areas of time spent is in basic data cleaning for raw data - this can be dramatically simplified by taking a hint from the large ETL / Master Data Management vendors and making this part of core Alteryx.

 

Server Side

- Allow the users of the server & connect product to define their own Business Types (what Microsoft DQS calls "Domains")

       - Example may be a currency code - there are many different synonyms, but in essence you want your data all cleaned back to one master list

- Then allow for different attributes to be added to these business types

       - Currency code would have 2 or 3 additional columns: Currency name; Symbol; Country of issue

- Similar to Microsoft DQS - allow users to specify synonyms and cleanup rules.    For example - Rupes should be Rupees and should be translated to INR

- You also need cross-business-type rules - if the country is AUS then $ translates to AUD, not to USD (a rough sketch of both rule types follows this list).

- These rules are maintained by the Data Steward responsible for this Business Type.

- This master data needs to be stored and queryable as a slowly changing dimension (preferably split into a latest & history table with the same ID per entry; and timestamps and user audit details for changes)
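
To make the rule types above concrete, here is a toy sketch of how the synonym and cross-business-type rules might behave for the currency example. The lookup tables below are made-up sample data, not a real knowledge base:

# toy synonym map: raw values cleaned back to one master list of currency codes
CURRENCY_SYNONYMS = {
    "rupees": "INR", "rupes": "INR",   # "Rupes" is cleaned to Rupees -> INR
    "us dollar": "USD", "$": "USD",
    "aud": "AUD", "inr": "INR", "usd": "USD",
}

# conformed attribute columns per valid code (Business Type "Currency")
MASTER = {
    "INR": {"id": 1, "code": "INR", "name": "Indian Rupee", "symbol": "₹"},
    "USD": {"id": 2, "code": "USD", "name": "US Dollar", "symbol": "$"},
    "AUD": {"id": 3, "code": "AUD", "name": "Australian Dollar", "symbol": "$"},
}

# invalid entries fall back to the "unmapped" row and go to the data steward's queue
UNMAPPED = {"id": -1, "code": "unmapped", "name": "unmapped", "symbol": "unmapped"}

def clean_currency(raw, country=None):
    code = CURRENCY_SYNONYMS.get(raw.strip().lower())
    # cross-business-type rule: "$" on an Australian record means AUD, not USD
    if raw.strip() == "$" and country == "AUS":
        code = "AUD"
    return MASTER.get(code, UNMAPPED)

print(clean_currency("Rupes"))              # -> Indian Rupee / INR
print(clean_currency("$", country="AUS"))   # -> Australian Dollar, not US Dollar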

 

Alteryx Designer:

- When you get a raw data set - the user can then tag some fields as being one of these business types

      - Example: I have a field bal_cur (Balance Currency) - I tag this as Business Type "Currency"

- Then Alteryx automatically checks the data; and applies my cleanup rules which were defined on the server

- For any invalid entries - it marks these as an error on the canvas, and also adds them to a workflow for the data steward for this Business Type on the server - the value is set to an "unmapped" value (ID = -1; all text columns set to "unmapped")

- For any valid entries - it gives you the option to add which normalised (conformed) columns you want: currency code, description, ID, symbol, country of issue

 

Data Steward Workflow:

- The data steward is notified that there is an invalid value to be checked

- They can mark this as a valid value (in which case it will be added to the knowledge base for this business type), a synonym of some other valid value, or an invalid value

 

Cleanup Audit & Logs:

- In order to drive upstream data cleaning over time - we would need to be able to query and report on data cleanups done by source; by canvas; by user; by business type; and by date - to report back to the source system so that upstream data errors can be fixed at source.

 

Many thanks

Sean

I suppose I could just bookmark this page, but that wouldn't help others.  I frequently forget (I'm getting old) the format strings while creating custom datetime formulas.  Is there a quick way to get to these format strings when in the context of creating a DateTimeParse/DateTimeFormat formula?

 

Cheers,

Mark

I regularly put true or false into a formula expecting it to work.

When moving external data into the database, the underlying SQL looks like:

 

CREATE GLOBAL TEMPORARY  TABLE "AYX16020836880b41e08246b59ee8c"

...

 

My client would like to add a prefix to the table as:

 

CREATE GLOBAL TEMPORARY  TABLE MMMM999_DM_USER."AYX16020836880b41e08246b59ee8c"

 

where MMMM999_DM_USER is supplied in the configuration.

 

A service account automatically sets the current session schema to something like MMMM999 (ALTER SESSION SET CURRENT_SCHEMA = MMMM999;).

I saw this article (Oculus App Makes Programming Tangible To Non-Coders) and immediately thought of Alteryx.

 

How about a virtual reality based version where the user can be in the canvas and reach out and touch their data directly?

I regularly create events to capture messages from workflows or kick off batch scripts for other processes and they are repetitive. Is there a way to template some of these?

 

This could even be as simple as saving a file like the .yxft type, where it only saves the settings.

 

[screenshots: Email Event Trigger, Run Command Event Trigger]

There is a lot of usage of calendar events in the business world. Having native sync and input functions for popular calendar formats like iCal or Google Calendar would save a lot of time.
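
For reference, flattening an .ics file into records is already fairly mechanical in code. Here is a rough sketch with the Python icalendar package (the file name is just an example); each dict is the kind of row a native calendar Input tool could emit:

from icalendar import Calendar  # third-party: pip install icalendar

with open("team_calendar.ics", "rb") as f:
    cal = Calendar.from_ical(f.read())

rows = []
for event in cal.walk("VEVENT"):
    rows.append({
        "summary": str(event.get("SUMMARY", "")),
        "start": event.decoded("DTSTART"),
        "end": event.decoded("DTEND") if "DTEND" in event else None,
        "location": str(event.get("LOCATION", "")),
    })

for row in rows:
    print(row)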

I want a modification of the Email tool to support running it at a specific point, defined by the developer, within a workflow; currently, "The Email tool will always be the last tool to run in a workflow".

 

We use the tool to send notifications of completion of jobs and sometimes attach outputs, but we would like to be able to also send notifications at the start or at key points within a workflow's processing.  Currently the Email tool is forced to be the last tool run in a flow, even if you use the Block Until Done tool to force the order of path execution to hit the Email tool first.

 

If we could add a setting to the configuration to override the current default of being the last tool run, and allow it to run at will within a flow, that would be awesome!  And of course we would want the same ability for texting, be it a new feature of the Email tool or a new tool all its own.
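
Just to illustrate the kind of mid-stream notification being asked for, here is a rough sketch of sending a message at a chosen checkpoint (the SMTP host and addresses are placeholders); the idea is for the Email tool to be able to do this natively at a point the developer picks:

import smtplib
from email.message import EmailMessage

def notify(subject, body):
    # send a checkpoint notification; host and addresses are placeholders
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "workflows@example.com"
    msg["To"] = "team@example.com"
    msg.set_content(body)
    with smtplib.SMTP("smtp.example.com") as server:
        server.send_message(msg)

notify("Workflow started", "Nightly load kicked off")
# ... main processing steps ...
notify("Key point reached", "Staging data loaded, starting aggregation")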

 

The texting option refers to an issue in Andrew Hooper's post seeking enhancement of the Email tool for texting; search on "Email tool add HTML output option" or use the link...

https://community.alteryx.com/t5/Alteryx-Product-Ideas/Email-tool-add-HTML-output-option/idi-p/92114...

 

Would be nice to select a bunch of consecutive fields, and cut them and paste them to a different area.  Currently, the only options are to Move to Top or Move to Bottom.  If you want to move somewhere in between, you have to scroll through the whole list.

tl;dr It would be great if auto-detected assets on output tools were included when exporting/saving to the gallery.

 

Suppose I have an output to my C drive and try to package that file when exporting or saving. It gives me the option to package my file:

[screenshot: Capture.PNG]

The only problem is that the file isn't actually saved with the package; instead, it just creates an externals folder where it will write the file to. But the file itself isn't included. The current workaround is to go to your output tool and add that file manually as a user asset:

[screenshot: Capture.PNG]

Notice that I had to manually add the same file that was already auto-detected. Now when I go to export, I get the same screen as before:

[screenshot: Capture.PNG]

The big difference is that now that I've added the file as a user asset, the file itself is included in the export.

 

In conclusion, it would be great if auto-detected assets on output tools were included when exporting/saving to the gallery (so that it has the same behavior as user-added assets).

When choosing "In List" values in a CYDB input, the normal Windows functions do not work (shift+click, ctl+A, ctl+click, etc.).

 

When having to choose, say, 20 values, it is a big annoyance to have to click each value (20 clicks). 

 

Have been told this is a bug so I wanted to put it on your radar for a fix.

On the Redshift Bulk Loader, add support for Redshift COPY options (an example COPY using several of these is sketched below):

 

http://docs.aws.amazon.com/redshift/latest/dg/copy-parameters-data-conversion.html

 

  • TRUNCATECOLUMNS (automatically truncates any fields to the length defined in the table)
  • IGNOREBLANKLINES
  • FILLRECORD
  • TIMEFORMAT
  • TRIMBLANKS
  • ENCODING
  • EMPTYASNULL
  • DATEFORMAT

...
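
For example, the kind of COPY statement these options would be passed through to looks roughly like this (the cluster, table, bucket, and IAM role are placeholders):

import psycopg2  # third-party: pip install psycopg2-binary

copy_sql = """
    COPY staging.transactions
    FROM 's3://example-bucket/exports/transactions.csv'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
    CSV
    TRUNCATECOLUMNS
    IGNOREBLANKLINES
    TRIMBLANKS
    EMPTYASNULL
    DATEFORMAT 'auto'
    TIMEFORMAT 'auto'
"""

conn = psycopg2.connect(host="example-cluster.redshift.amazonaws.com",
                        port=5439, dbname="dev", user="etl_user", password="...")
with conn, conn.cursor() as cur:
    cur.execute(copy_sql)  # the Bulk Loader would ideally expose these keywords as options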

Given Redshift prefers accepting many small files for bulk loading, it would be good to be able to have a max record limit within the S3 Upload tool (similar to the functionality in the S3 Download tool).

 

The other functionality that would be useful for the S3 Upload tool is the ability to append file names with datetimestamp_001, 002, 003, etc., similar to the current Output tool.
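
A rough sketch of the kind of behaviour being asked for, using pandas and boto3 (the bucket name, prefix, and 100,000-row limit are just examples):

import math
from datetime import datetime

import boto3       # third-party: pip install boto3
import pandas as pd

def upload_in_chunks(df, bucket, prefix, max_records=100_000):
    # split the data into files of at most max_records rows and upload each one,
    # naming them <prefix>_<timestamp>_001.csv, _002.csv, ... like the Output tool suffix
    s3 = boto3.client("s3")
    stamp = datetime.now().strftime("%Y%m%d%H%M%S")
    n_chunks = math.ceil(len(df) / max_records)
    for i in range(n_chunks):
        chunk = df.iloc[i * max_records:(i + 1) * max_records]
        key = f"{prefix}_{stamp}_{i + 1:03d}.csv"
        s3.put_object(Bucket=bucket, Key=key, Body=chunk.to_csv(index=False).encode("utf-8"))

upload_in_chunks(pd.DataFrame({"id": range(250_000)}), "example-bucket", "exports/ids")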

Hi,

 

It'd be great to have a specific connector for HubSpot. It's a marketing automation platform, like Marketo.

 

Thanks.

When I put a map snippet into a report, there is no border. Therefore, road segments just terminate into whitespace.

 

Please add an option to create a border around the map snippet.

[screenshot: Map With No Border]

[screenshot: Map With A Border]

It would be nice if you could use something like $Field rather than repeating the field name in the Condition and Loop expressions within the Generate Rows tool.

 

 

Hi Folks

 

So I have been using the fuzzy match function quite a bit recently. Love the tool; however, it could benefit from being able to wire in a list of 'Don't generate' keywords.

At the moment we can enter them manually; however, where for example I might want to exclude city or area names etc. from the 'Don't generate' list, this becomes quite a tedious manual entry task, so being able to load in keyword data from pre-existing lists would be a time saver.

 

Cheers

Gavin