
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

You can select all fields at once, but it would be nice to select a contiguous chunk of fields using Ctrl+Shift+Click.

Request is to add a parameter to control the sampling level in the Google Analytics API connector. I'm getting very different results pulling the same report from the API and the GA UI. The API data has significantly more variability, which is evidence of sampling. We have a premium 360 account and are still getting sampled results. I believe it's just a matter of adding the following parameter to the outgoing request:

 

samplingLevel= DEFAULT, FASTER, or HIGHER_PRECISION.

 

https://developers.google.com/analytics/devguides/reporting/core/v3/reference#sampling 

https://developers.google.com/analytics/devguides/reporting/core/v3/reference#samplingLevel
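As a rough sketch (the parameter name comes from the v3 reference above; the function name and query structure are illustrative, not the connector's actual internals), the outgoing request would carry the parameter like this:

```python
# Sketch: adding samplingLevel to a Core Reporting API v3 query string.
# The view id, dates and metrics below are placeholder values.
from urllib.parse import urlencode

def build_report_query(view_id, start, end, metrics,
                       sampling_level="HIGHER_PRECISION"):
    """Build a v3 report query string with an explicit samplingLevel."""
    params = {
        "ids": f"ga:{view_id}",
        "start-date": start,
        "end-date": end,
        "metrics": metrics,
        "samplingLevel": sampling_level,  # DEFAULT, FASTER, or HIGHER_PRECISION
    }
    return urlencode(params)

query = build_report_query("12345", "2022-01-01", "2022-01-31", "ga:sessions")
```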

 

- Zach

Using the Download tool is great and easy. However, if a request hits a connection problem, the workflow errors out. Having an option to not error out, the ability to skip failed records, and retrying of records that failed would be A LIFE CHANGER. Currently I have been using a Python tool to create multi-threaded requests, which has proven to be time-consuming.
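As a sketch of the behavior being requested (the fetch function here is a stand-in, not the Download tool's API): failed requests are retried a few times, and records that still fail are routed to a separate output instead of killing the run:

```python
import time

def run_with_retry(records, fetch, max_retries=3, delay=0.0):
    """Try fetch(record) for each record; retry failures, then skip them.

    Returns (successes, failures) instead of raising, mirroring the
    requested 'skip failed records' behavior.
    """
    successes, failures = [], []
    for record in records:
        for attempt in range(max_retries):
            try:
                successes.append((record, fetch(record)))
                break
            except Exception as err:
                if attempt == max_retries - 1:
                    failures.append((record, str(err)))
                else:
                    time.sleep(delay)
    return successes, failures

# Demo with a flaky stand-in that fails once for one record
# and permanently for another.
_calls = {}
def flaky_fetch(record):
    _calls[record] = _calls.get(record, 0) + 1
    if record == "bad-once" and _calls[record] == 1:
        raise ConnectionError("transient failure")
    if record == "always-bad":
        raise ConnectionError("permanent failure")
    return f"body-of-{record}"

ok, failed = run_with_retry(["good", "bad-once", "always-bad"], flaky_fetch)
```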

I'd like to see the DateTimeLastOfMonth and DateTimeFirstOfMonth functions be more flexible than just returning the first or last date of the current month. It would be great if you could point them at a date field and get the first or last date of that date's month, e.g. DateTimeLastOfMonth([randomdate]) with [randomdate] = December 3rd, 1981 would return 1981-12-31.
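The requested behavior is straightforward to express; here is the equivalent logic in Python (function names chosen to mirror the Designer functions, not an Alteryx API):

```python
import calendar
from datetime import date

def last_of_month(d):
    """Last calendar day of the month containing d."""
    _, days_in_month = calendar.monthrange(d.year, d.month)
    return d.replace(day=days_in_month)

def first_of_month(d):
    """First calendar day of the month containing d."""
    return d.replace(day=1)
```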

Hi, currently we have 3 clustering methods: K-means, K-medians and Neural Gas. Can we also include DBSCAN (density-based spatial clustering) as one of the methods? I can collaborate with Alteryx product owners to make this happen. Please let me know.
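For reference, the algorithm itself is compact; this is a minimal pure-Python illustration of DBSCAN over 2-D points (a sketch of the technique, not how Alteryx would implement it):

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN; returns one cluster label per point (-1 = noise)."""
    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]

    labels = [None] * len(points)   # None = unvisited, -1 = noise
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # not a core point: noise (for now)
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reachable from a core point
            if labels[j] is not None:
                continue             # already assigned; don't expand again
            labels[j] = cluster
            j_seeds = neighbors(j)
            if len(j_seeds) >= min_pts:
                queue.extend(j_seeds)  # j is a core point: keep expanding
    return labels
```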

For an email event, there are several defaults that are shown in the body (e.g., %User%, %ComputerName%, %WorkingDir%).

It would be nice to add a date in there, maybe %Date% or %DateTime% or something like that which would display the computer system time.

It would save an extra Select tool when parsing text files to get dates and times into the correct format.

 

 

The Image Input and Image Template tools in the Computer Vision category do not accept relative path names.  This makes it difficult to write a workflow in a test environment and then run it in the operating environment.  (Someone in the operating environment has to diddle with the Image Input path name, and the Image Template seems to require a complete re-training to spot the areas needed for scanning.)

 

ALSO, the drop-down choices for "the most accurate option for what product component this enhancement would effect (sic)" do not offer the correct option of "Computer Vision".

The current SharePoint API pull tool does not support the pull of managed metadata columns. It would be great if Alteryx would update the SharePoint List tools to be able to read in managed metadata columns.  

I'm plotting ships' routes on a map by using the Poly-Build tool to create a Sequence Polyline; the source field is a daily snapshot of a ship's position and the sequence is the date. This works well until the ship goes off the edge of the map, e.g. heading west across the North Pacific to Japan. Rather than wrapping the line around the world, it draws an ugly line across the map:

 

[screenshot: spainn_0-1648051649989.png]

 

I've seen some super clever WFs on here where people have manipulated coordinates to stop this happening, but rather than me having to do that (and probably getting it wrong), could you make the Poly-Build tool map-aware?
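One common fix, a sketch of the coordinate manipulation mentioned above (not anything Poly-Build exposes), is to "unwrap" longitudes so consecutive points never jump more than 180 degrees:

```python
def unwrap_longitudes(lons):
    """Shift each longitude by +/-360 so consecutive deltas stay within 180.

    A track crossing the antimeridian (170 -> -170) becomes 170 -> 190,
    so a plotting tool draws the short segment instead of a line across
    the whole map.
    """
    if not lons:
        return []
    out = [lons[0]]
    for lon in lons[1:]:
        prev = out[-1]
        while lon - prev > 180:
            lon -= 360
        while lon - prev < -180:
            lon += 360
        out.append(lon)
    return out
```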

 

 

Similar to the post from @MarqueeCrew here: https://community.alteryx.com/t5/Alteryx-Product-Ideas/In-Database-Update-and-or-Delete/idc-p/72744#..., there is a need to increase the ETL functionality of Alteryx so that it can serve the needs of an enterprise BI audience.

 

Specifically: 

- Bulk file sync: similar to SSIS, the ability to very quickly bring a file in a staging area up to date with the latest in the source.

- Dimension update: built-in macros to make dimension updates (especially for slowly changing dimensions) easier. These would take care of the various time dimensions and checking for surrogate keys, and also add in translation tables.

- Central registry: register a central list of shared dimensions, shared fact tables, etc.

- Semantic layer: where several teams use different identifiers for a particular concept, such as customer, marking a particular field as "Customer" would let the Alteryx engine make more intelligent decisions about how to normalise these to a conforming dimension.

- Simpler logging of ETL errors (similar to the ETL logging recommended by Kimball).

 

A focus on large-scale BI and ETL applications like this will really help Alteryx bridge from point solutions to a broader spectrum of opportunities in large-scale enterprise BI.
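The slowly-changing-dimension update in the list above can be sketched as a minimal Type 2 pass in plain Python (column names and row shape here are illustrative, not an Alteryx macro):

```python
def scd2_update(dim_rows, incoming, today, next_key):
    """Apply a Type 2 slowly-changing-dimension update.

    dim_rows: current dimension rows as dicts with business_key, attrs,
              surrogate_key, valid_from, valid_to (None = current version).
    incoming: {business_key: attrs} snapshot from the source.
    Changed rows are expired (valid_to = today) and re-inserted with a
    fresh surrogate key; unchanged rows are left alone.
    """
    rows = list(dim_rows)
    current = {r["business_key"]: r for r in rows if r["valid_to"] is None}
    for bk, attrs in incoming.items():
        row = current.get(bk)
        if row is not None and row["attrs"] == attrs:
            continue                  # unchanged: nothing to do
        if row is not None:
            row["valid_to"] = today   # expire the old version
        rows.append({"business_key": bk, "attrs": attrs,
                     "surrogate_key": next_key,
                     "valid_from": today, "valid_to": None})
        next_key += 1
    return rows
```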

 

 

Hello,
Recently I have been working with the Directory tool to pick up specific files based on where the workflow is running from.

As an example to illustrate my idea/fix: I have a folder on my desktop called 'Idea', containing a simple workflow and a couple of csv files:

[screenshot: TheOC_0-1647965537747.png]



I have my directory tool setup pointing at this folder - to pick up all three csv files:

[screenshot: TheOC_1-1647965572216.png]

 

 

Now I want to make the path relative. The workflow is saved within this folder, so to pick up all files in this location I would expect the directory to change to "." (for reference, https://desktop.arcgis.com/en/arcmap/latest/tools/supplement/pathnames-explained-absolute-relative-u... ).

To make the paths relative, I use Options --> Advanced Options --> Workflow Dependencies

[screenshot: TheOC_2-1647965758754.png]

I can then see my paths/dependencies:

[screenshot: TheOC_3-1647965802353.png]



And hit 'all relative':

[screenshot: TheOC_4-1647965813130.png]



It has changed to '.' as required, great. However, when I hit OK and try to run the workflow:

[screenshot: TheOC_5-1647965845398.png]



In order to get this to work, I have to add a "/" to the ".", making it "./".

 

This may seem like a small pain, but if a user does not understand relative paths, the difference between "./" and "." is not at all clear.

 

My suggestion would be either:
1) Directory tools with relative paths to the workflow's own location are set to "./", or

2) the MakeCleanPath() function (the backend of the tool) is changed to accept "." as a relative path to the same location.
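A guess at why the two spellings diverge (illustrative Python only, not MakeCleanPath's actual implementation): if the directory is joined to a file name by plain string concatenation, "." produces a broken path while "./" happens to work, even though a normalising join treats the two identically:

```python
import os

def naive_join(directory, filename):
    """Plain string concatenation, as a buggy path resolver might do it."""
    return directory + filename

def clean_join(directory, filename):
    """Normalised join: '.' and './' behave the same."""
    return os.path.normpath(os.path.join(directory, filename))
```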

 

Cheers,
TheOC

Hello,

I would like to allow my Gallery users to select the fields in my in-database workflow, just like we can do in-memory. As of today, that's just not possible.


Best regards,

Simon

Hi!

 

Can you please add a tool that stops the workflow? And I don't mean the "Cancel running workflow on error" option.

Today you can stop the workflow on error with the Message tool, but not everything you want to stop for is an error.

 

E.g. I can use the Block Until Done tool to check whether there are files in a specific folder, and if not, I want to stop the workflow from reading any more inputs.

 

Today I have to build workarounds; an easy stop-if tool would be much appreciated!

This could be an option on the Message tool (a check box, like the Transient option).

 

Cheers,

EJ

How about a web scraping tool, or web scraping tab?

 

There are so many great web sites out there with all kinds of data: spot pricing for currency exchange, weather data, ship locations, product pricing; no way to list them all here.

 

The idea is to have a tool or tab focused on scraping web data and performing analysis, or, better still, scraping web data and combining it with your own data. For instance, wouldn't it be great to compare sales data with weather data and see whether temperature or precipitation affects sales?
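Pending such a tool, the core of the idea is fetch-then-parse; here's a minimal standard-library sketch (a canned HTML snippet stands in for a live fetch, which would normally come from urllib.request.urlopen):

```python
from html.parser import HTMLParser

class CellCollector(HTMLParser):
    """Collect the text of every <td> cell on a page."""
    def __init__(self):
        super().__init__()
        self._in_td = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_td = False

    def handle_data(self, data):
        if self._in_td and data.strip():
            self.cells.append(data.strip())

# A canned spot-pricing snippet keeps the sketch self-contained.
html = "<table><tr><td>USD/EUR</td><td>0.91</td></tr></table>"
parser = CellCollector()
parser.feed(html)
```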

I am trying to schedule something to run on the 2nd workday of the month (which will not be the 2nd day of every month: if the first day of the month lands on a Friday, the second workday is the 4th). Is there a way to implement this in a schedule?

 

For a few months we have used a custom schedule, since we know the specific dates the runs land on, but it doesn't seem like the most efficient approach.
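For reference, the target date itself is easy to compute (a Python sketch that treats Saturday and Sunday as non-workdays and ignores holidays):

```python
from datetime import date, timedelta

def nth_workday(year, month, n):
    """Date of the n-th weekday (Mon-Fri) of the given month; ignores holidays."""
    d = date(year, month, 1)
    count = 0
    while True:
        if d.weekday() < 5:          # 0-4 = Monday-Friday
            count += 1
            if count == n:
                return d
        d += timedelta(days=1)
```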

 

Thanks for your help.

 

 

Think of a pivot table on steroids. In my industry, "strats" are commonly used to summarize pools of investment assets. You may have several commonly used columns that are a mix of sums and weighted averages, capable of having filtering applied to each column. So you may see an output like this:

 

Loan Status | Total Balance | % of Balance | % of Balance (in Southwest Region) | Loan to Value Ratio (WA) | Curr Rate (WA) | FICO (WA) | Mths Delinquent (WA)
Current     | $9,000,000    | 90           | 80                                 | 85                       | 4.5           | 720       | 0
Delinquent  | $1,000,000    | 10           | 100                                | 95                       | 5.5           | 620       | 4
Total       | $10,000,000   | 100          | 90                                 | 86                       | 4.6           | 710       | 0.4

 

Right now, to create the several sums and weighted averages, it's just too inefficient to create all the different modules, link them all together and run them through a Transpose and/or Cross Tab. And to create a summary report where I may have 15 different categories besides Loan Status, I'd have to replicate that process with those modules 15 times.

 

Currently, I have a different piece of software where I can simply write out sum and WA calcs for each column, save that column list (with accompanying calcs) and then simply plug in a new leftmost category for each piece of data I'm looking at. And I get the Total row auto-calculated as well.
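A sketch of the requested calculation in plain Python (group once, then apply a reusable list of sum and weighted-average column specs; row and column names are illustrative, with values from the table above):

```python
def strat(rows, group_col, weight_col, sum_cols, wa_cols):
    """Summarize rows into sums and weighted averages per group plus a Total.

    rows: list of dicts; columns in wa_cols are weighted by weight_col.
    """
    groups = {}
    for row in rows:
        groups.setdefault(row[group_col], []).append(row)
    groups["Total"] = rows          # the auto-calculated Total row

    out = []
    for name, members in groups.items():
        w = sum(r[weight_col] for r in members)
        rec = {group_col: name}
        for c in sum_cols:
            rec[c] = sum(r[c] for r in members)
        for c in wa_cols:
            rec[c] = sum(r[c] * r[weight_col] for r in members) / w
        out.append(rec)
    return out

loans = [
    {"status": "Current", "balance": 9_000_000, "rate": 4.5, "fico": 720},
    {"status": "Delinquent", "balance": 1_000_000, "rate": 5.5, "fico": 620},
]
table = strat(loans, "status", "balance", ["balance"], ["rate", "fico"])
```

Swapping in a different leftmost category is then just a change of `group_col`, with the same column spec reused.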

 

I'm sure no one wants to double dip on huge data sets, even in-DB...

So can we have left and right joins in the In-DB Join tool as well, to further develop workflows from these two additional outputs?

PS: the idea originally belongs to another Alteryx client who mentioned this on IT Central Station: https://www.itcentralstation.com/product_reviews/alteryx-review-38876-by-prometheus-tito-amoguis-ii
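The in-memory Join tool's three outputs (J, L, R) are the behavior being asked for in-DB; sketched here over plain dicts for illustration:

```python
def join_outputs(left, right, key):
    """Split a join into J (matched), L (left-only) and R (right-only) outputs."""
    right_by_key = {}
    for r in right:
        right_by_key.setdefault(r[key], []).append(r)
    matched_keys = set()

    j_out, l_out = [], []
    for row in left:
        matches = right_by_key.get(row[key], [])
        if matches:
            matched_keys.add(row[key])
            for r in matches:
                j_out.append({**row, **r})
        else:
            l_out.append(row)
    r_out = [r for r in right if r[key] not in matched_keys]
    return j_out, l_out, r_out
```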

 

[screenshot: Picture1.png]

Best

I am very new to Alteryx, just beginning to learn the ropes! I would like to provide feedback because my experience right now may provide an opportunity to make suggestions regarding user experience!

 

I just have one (minor) suggestion for the multi-row formula tool's interface:

 

[screenshot: gillburling_2-1610038858322.png]

 

The dropdown field (circled above) sits directly below the "create new field" radio button. This led me to assume that it applied to the "create new field" option rather than the "update existing field" button I had actually selected.

 

Yes, there were a couple of clues that may have helped me see this had I been paying very close attention. Unfortunately, in the chaos that is learning a new technology (especially one as expansive and powerful as Alteryx), I did not catch those clues and instead spent a few hours trying to figure out why my calculation wasn't doing what I wanted it to do.

 

I imagine the dropdown would be better served by being separated from the radio buttons (by space or a line), but there are any number of ways this design could be improved.

 

This might seem nitpicky to some, but I think it's important to strive for technology that is as "human friendly" as possible! Thanks!

The wildcard read option (Sales_*.csv) works for reading files of the same structure from a standard Windows network/local file share. It would be good to have the same feature for reading from HDFS via the HDFS connection option in the Input tool.
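The local behavior being requested for HDFS amounts to pattern matching against a directory listing, e.g. with fnmatch (a sketch; an HDFS client would supply the listing):

```python
import fnmatch

def match_wildcard(filenames, pattern):
    """Filter a directory listing the way the wildcard input option does."""
    return [f for f in filenames if fnmatch.fnmatch(f, pattern)]

listing = ["Sales_Jan.csv", "Sales_Feb.csv", "Returns_Jan.csv"]
```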

 

 

 

Regards,

Sandeep.

 
