Alteryx Designer Desktop Ideas


Featured Ideas

It appears that the Marketo Input tool is focused only on retrieving Lead-oriented objects. Since Marketo is a marketing automation tool, it has many more objects, such as email performance, landing page performance, web activity, program performance, and Revenue Explorer.

I'd like to vote up the idea of the Marketo Input connector being able to extract data from objects beyond Leads.

Here are the latest Marketo Analytics resources that are available. Has anybody else encountered this bottleneck when trying to source this data in Alteryx?
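
Until the connector catches up, one workaround is the Python tool plus Marketo's REST API. A rough, hedged sketch: the instance URL, credentials, and activity-type ID below are placeholders, and the endpoint paths are the ones Marketo documents for lead activities.

import requests

BASE = "https://<instance>.mktorest.com"   # placeholder Marketo instance URL

# Exchange client credentials for an access token.
token = requests.get(
    f"{BASE}/identity/oauth/token",
    params={"grant_type": "client_credentials",
            "client_id": "<client_id>", "client_secret": "<client_secret>"},
).json()["access_token"]

# Get a paging token anchored at a start date, then page through activities.
page_token = requests.get(
    f"{BASE}/rest/v1/activities/pagingtoken.json",
    params={"access_token": token, "sinceDatetime": "2018-01-01T00:00:00Z"},
).json()["nextPageToken"]

activities = requests.get(
    f"{BASE}/rest/v1/activities.json",
    params={"access_token": token, "nextPageToken": page_token,
            "activityTypeIds": "1"},   # assumed ID; see Marketo's activity-type list
).json()["result"]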

Hello Alteryx Community,

I've recently started using Alteryx, and one option on the Output Data tool that I think could be useful to others and myself is: Append to an extract file (Create if does not exist). This is similar to the existing Overwrite existing extract file (Create if does not exist) option.

My case for this: I'm setting up a flow that I know from the outset is going to be repeatable and is designed to build up data over time, so I will be running the Output Data tools in append mode. Except that on the first run, I can't append to an extract that doesn't exist! The flow in question has around 20 Output Data tools, and while it wouldn't take terribly long to reconfigure them after the initial run, it is a bit tedious. I think there is scope for my proposed option to be implemented, either as a standalone option or as a replacement for the current append option.

Example of my current flow: (see attached Capture.PNG)
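
For comparison, the requested "append, create if missing" semantics are only a few lines outside Designer; a pandas sketch with a CSV file standing in for the extract (the path and frame are hypothetical):

import os
import pandas as pd

def append_or_create(df: pd.DataFrame, path: str) -> None:
    # Append to the output file if it exists; otherwise create it with headers.
    exists = os.path.exists(path)
    df.to_csv(path, mode="a" if exists else "w",
              header=not exists, index=False)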

It would be great to have a json-stat parser. There are probably ways of doing it with the JSON Parse tool, but it appears to be a little tricky.
Also, it would be nice to be able to use a JSON file as input in a simple manner.
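
For reference, a minimal sketch of what such a parser would do, assuming a json-stat 2.0 single-dataset file (the flat "value" array is row-major, with the last dimension varying fastest):

import json
from itertools import product

def flatten_jsonstat(path):
    with open(path) as f:
        ds = json.load(f)
    dims = ds["id"]                          # dimension names in storage order
    labels = []
    for dim in dims:
        cat = ds["dimension"][dim]["category"]
        index = cat["index"]                 # a list, or a {code: position} mapping
        codes = sorted(index, key=index.get) if isinstance(index, dict) else index
        label = cat.get("label", {})
        labels.append([label.get(c, c) for c in codes])
    values = ds["value"]                     # dense list, or sparse dict keyed by position
    rows = []
    for pos, combo in enumerate(product(*labels)):
        v = values.get(str(pos)) if isinstance(values, dict) else values[pos]
        rows.append((*combo, v))
    return rows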

When I set up an In-DB connection, I need a way to select only the tables I want to see: basically, a way to favorite the most frequently used tables, plus the ability to add a description of the information that resides in each table. Use case: because we have so many irrelevant tables and no data dictionary in some of our databases, this would save a ton of time by narrowing down the selectable tables right off the bat.

The new Insight tool offers some great charting abilities, but it does not integrate with the other reporting tools. It doesn't support pictures, tables, or any way to pull text from the data in the workflow. This really prevents it from being a solution to any of my reporting needs.

When using the Select tool, many columns are often deselected, making it difficult to locate the remaining selected columns. It would save time if deselected columns moved to the bottom of the Select tool configuration after leaving the tool. Both selected and deselected columns should retain their incoming field order within their group.

(See attached suggestion.png.)

One of those small annoyances that adds extra development time: when you browse for a file (with either the Input tool or the Output tool), the dialog always defaults to the most recent location where you picked up or output a file.

Many times I have existing Input or Output tools that I simply need to repoint (they already have a file location mapped to read from or write to). For these, it would be great if, when the user clicks File Browse, the initial folder displayed were the folder the current file is mapped to.

Displaying the most recent folder in the file browse dialog is perhaps best suited to the case where no file has been mapped yet.

After developing complicated workflows (over 200 tools, and over 30 inputs and outputs) in my DEV or QA environment, I need to switch over to Production to deploy, and it's incredibly annoying to change 30 data inputs individually from QA to Prod, DEV to QA, and so on. If I need to go back to QA to change something and re-test, I have to do it all over again.

I need a way to change large numbers of data sources at once, or at least a much more streamlined process, to make this bearable. Otherwise it is incredibly difficult to work across multiple environments.
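
As a stopgap, .yxmd workflows are plain XML, so source strings can be swapped with a short script; a hedged sketch (the file names, UNC paths, and DSNs below are hypothetical):

def repoint(src, dst, mapping):
    # Copy a workflow, replacing each QA source string with its PROD value.
    with open(src, encoding="utf-8") as f:
        xml = f.read()
    for old, new in mapping.items():
        xml = xml.replace(old, new)
    with open(dst, "w", encoding="utf-8") as f:
        f.write(xml)

repoint("flow_qa.yxmd", "flow_prod.yxmd", {
    r"\\server\qa\data": r"\\server\prod\data",   # hypothetical UNC paths
    "DSN=QA_WAREHOUSE": "DSN=PROD_WAREHOUSE",     # hypothetical DSNs
})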

The cURL tool currently doesn't support secure protocols; see the attached screenshot (curl.png). We are currently using Alteryx 11.7.6.


Could Alteryx take this as a feature request and add the secure libraries to the existing cURL tool so that it supports the secure SFTP protocol?
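
In the meantime, one workaround is the Python tool with the third-party paramiko library; a sketch with placeholder host, credentials, and paths:

import paramiko  # third-party: pip install paramiko

transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="user", password="secret")
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.get("/remote/data.csv", "data.csv")   # download one file over SFTP
sftp.close()
transport.close()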


The Alteryx SharePoint List tool fails to authenticate when connecting to a SharePoint list that is protected by ADFS. There are SharePoint sites outside our company's firewall that use ADFS for authentication, and we would like to connect to those sites via the SharePoint List tool.

I have a user who has a batch macro set up for processing customer records as they relate to stores. The macro loops based on the storeID. In some cases, a store has no customers, and the user would like an "Exit" option so that the downstream tools don't run when no records are present and the batch macro automatically continues with the next store.
Can you add an exception handler to the Tile tool? The tool crashed on a large dataset with "Error: Tile (1): No values found before GetMean()". I had selected the Smart Tile option on the 'unique_zips_count' field, grouped by 'ID'.
To track the problem down, I had to use the Sample tool to grab N records at a time and see whether they would run through the Tile tool, skipping and selecting the first N records until I narrowed the problem down to 20 records. As it turned out, all values were 0 in one specific group. I found a workaround by pulling out all records in groups whose values were all 0 and bypassing the Tile tool for them. Instead of doing that, could you add an exception handler that reports which record number it crashed on?

Could you also add the option to use 1, 2, or 3 standard deviations in addition to Smart Tile? That way all my groups would be uniform.
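
For illustration, a sketch of the requested fixed standard-deviation tiling, with a guard for the zero-variance group that crashed the Tile tool here (pandas; the function name is hypothetical):

import numpy as np
import pandas as pd

def stddev_tiles(s: pd.Series, k: int = 1) -> pd.Series:
    # Label each value by which k-standard-deviation band it falls in.
    mu, sigma = s.mean(), s.std()
    if sigma == 0:                 # all values equal, e.g. an all-zero group
        return pd.Series(0, index=s.index)
    return np.floor((s - mu) / (k * sigma)).astype(int)

Applied per group, e.g. df.groupby("ID")["unique_zips_count"].apply(stddev_tiles), the all-zero group simply lands in band 0 instead of erroring.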
I think there should be a tool that produces grand totals for any numeric fields you choose. In the tool, you would check off the fields you wish to total at the bottom. I prefer this over having to use the Summarize tool and then a Union tool to append totals to the bottom of my output.
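
For illustration, what the requested tool would emit, sketched in pandas (the columns are hypothetical):

import pandas as pd

df = pd.DataFrame({"region": ["East", "West"], "sales": [100, 250], "units": [3, 7]})
totals = df[["sales", "units"]].sum()      # only the fields checked for totaling
totals["region"] = "Grand Total"
out = pd.concat([df, totals.to_frame().T], ignore_index=True)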

My peers at work and I were thinking that it might be good to have the Join tools expanded, both for desktop and in-database joins.

As for the desktop Join: the left and right outputs show only the records that are exclusive to that side. Would it be possible to also have options that include the records in common?

As for the in-DB Join: it acts like a classic join (left with matching, right with matching). Would it be possible to also get only-left and only-right join options?
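
For what it's worth, all four requested outputs fall out of a single outer join with a match indicator; a pandas sketch with hypothetical column names:

import pandas as pd

left = pd.DataFrame({"key": [1, 2, 3], "l": ["a", "b", "c"]})
right = pd.DataFrame({"key": [2, 3, 4], "r": ["x", "y", "z"]})

m = left.merge(right, on="key", how="outer", indicator=True)
left_plus_common = m[m["_merge"] != "right_only"]   # desktop: left plus matching
left_only = m[m["_merge"] == "left_only"]           # in-DB: only-left
right_only = m[m["_merge"] == "right_only"]         # in-DB: only-right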


Hi, I've seen some requests lately where users are asking for maps in EPS (Encapsulated PostScript) format, which is an Adobe Illustrator file type. If this could be added as a Report Render output type, along with BMP, it would make the tool even more useful. Thanks!

Please have the Calgary tools put the file names in the annotation automatically, like all other input/output tools do.

NOTE: There are other Idea posts about improving the Browse tool's profiling functionality, but I did not find anything specific to this, and I feel these ideas should be kept separate anyway.

I just discovered that the plot in the Browse tool's profiling pane behaves differently when plotting numeric values, depending on the number of unique values.

According to the documentation, "Once more than 10,000 unique values are profiled, binning is applied to increase performance and to represent data in a more meaningful way."

What this means for numeric data is that a scatterplot is shown if there are fewer than 10,000 unique values, and a frequency plot (bar chart) is shown if there are more, with an indication that "Only the top 20 unique values are shown."

I can see how, in some situations (e.g., an integer field), a frequency plot that shows the predominant values would be a good thing to see.

However, I would argue that a frequency plot of numeric data that is essentially a "double" type can be pretty meaningless: out of 10,001 values, you might have 10,001 UNIQUE values, so you end up with a frequency plot of little value (whereas the scatterplot would still let a user see the dispersion of the ENTIRE data set).
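
A quick way to see why, assuming continuous data:

import numpy as np

x = np.random.randn(10_001)        # continuous ("double") data
print(len(np.unique(x)))           # almost surely 10001 distinct values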


I’ve attached an example to easily show this.


It would be great if users could choose which plot they want for a specific set of data, similar to the choices offered when a date field is present in the data.

I came across the Find Replace tool when I needed to find values from a column in one table within a column of another table. My first instinct was to solve the problem with a batch macro: a Contains() function in a Formula tool followed by a not-null Filter (see attachment). This worked perfectly except that it was slow. Then I got excited when I discovered that the Find Replace tool accomplishes the same thing WAY faster, but I was wrong.

What I would love is the equivalent of an SQL query like this:

SELECT
    A.1,
    B.1
FROM A
    INNER JOIN B
        ON A.1 LIKE "%" || B.2 || "%"


which is a legal query in SQLite and is equivalent to the output of the attached macro. This is what I wish the Find Replace tool (or a different tool) could do, but it only finds one instance per "Find Within Field" value. The tool's decision-making doesn't line up with what I need; for example, it doesn't return the longest value found, but the one whose key appears first in the field. One way I've found to configure it better is to string a number of these tools together, which gives a better result but still won't find every instance, and it uses 90 or so tools when I feel I should only need 1-3 to accomplish the same thing.


Instead of an inner join, Find Replace behaves more like a left outer join followed by a Unique on A.1. Is there a way to accomplish this out-of-database in Alteryx?
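
One out-of-database route is a cross join filtered on containment, which returns every match rather than only the first; a pandas sketch with hypothetical column names:

import pandas as pd

A = pd.DataFrame({"a1": ["red apple pie", "banana split"]})
B = pd.DataFrame({"b1": [1, 2], "b2": ["apple", "banana"]})

joined = A.merge(B, how="cross")   # every (A, B) pair; needs pandas 1.2+
joined = joined[joined.apply(lambda r: r["b2"] in r["a1"], axis=1)]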

I'm using the .sv file format to compress large files and use them in Alteryx. .sv is called an Alteryx Spatial Zip file, and it seems to offer the highest level of compression of all the Alteryx file types. Is it suitable for text as well? And is it usable in real time, as opposed to uncompressing first and then using it in a workflow? If not, I think a compressed real-time file format would be a nice addition.

I love the new Basic Filter option: it makes writing filters easy and self-explanatory for new users and helps with functions like IsNull(). However, I'm almost always writing what are now called Custom Filters, and it takes two clicks to get there.

From a user-experience perspective, it would be great to be able to click in the expression box to write the filter rather than having to check the custom filter box first, or to start with a basic filter and then extend it in the custom filter box without having to tick the box.

Following on from this, and something that is probably a lot harder to implement: with the new basic filter, it would be great to have a list of available values populate the filter options, similar to what you get in Calgary. E.g., Country == and then the drop-down list would contain whatever is available in the data (Spain, UK, USA, etc.).