
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

With the continued growth of graph databases, it would be nice for Alteryx to create a new tool set with input/output connectors for graph databases like Neo4j, which tools such as Pentaho and Talend already offer.

 

Keith. 
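In the meantime, a possible workaround sketch: the Python tool can reach Neo4j through the official neo4j driver. The connection details and Cypher query below are hypothetical placeholders, not a proposal for how a native connector would work.

from neo4j import GraphDatabase
import pandas as pd

# Hypothetical connection details and query - placeholders only.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    result = session.run(
        "MATCH (p:Person)-[:KNOWS]->(f:Person) "
        "RETURN p.name AS person, f.name AS friend"
    )
    df = pd.DataFrame([record.data() for record in result])  # pass df downstream

driver.close()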

I have a process that joins 3 data sets to identify a specific group of data and apply certain rules. From this created file, I need to extract the data (not the headings) from specific columns and insert it into an already existing template. The template has formatting that needs to remain in place in order for it to function.

 

Is this possible? 
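One possible route today (a sketch, not the only answer): from the Python tool, write just the values into the existing template with openpyxl, which leaves the template's formatting and formulas alone. The file name, sheet name, and cell positions below are hypothetical.

from openpyxl import load_workbook

# "template.xlsx", the sheet name, and the start row/columns are placeholders.
wb = load_workbook("template.xlsx")                  # existing formatting is preserved
ws = wb["Report"]

records = [("Acme", 1200.50), ("Globex", 980.00)]    # the joined data, minus headings
start_row = 5                                        # first data row in the template

for offset, (customer, amount) in enumerate(records):
    ws.cell(row=start_row + offset, column=1, value=customer)
    ws.cell(row=start_row + offset, column=2, value=amount)

wb.save("report_filled.xlsx")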

Would be nice if, in Designer, customers could upload and reference a DATA DICTIONARY / METADATA REPOSITORY file when working with various input sources to transform data.

Organizations that are mature in their data governance strategy implement special software (such as ERWIN) that extracts, manages, and provides access to a data dictionary of the data assets in multiple databases, to maintain schemas for the enterprise.

With access to a METADATA REPOSITORY file held in DESIGNER, a customer could easily select a list of columns, fields, or attributes from that file to manipulate, and have all the relevant information required to massage the data.

Possible attributes that might appear in a data dictionary file:

Table Name
Column Name
Data Type
Foreign Table
Source
Table Description
Sensitive Data
Required Field
Values
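As an illustration of how such a file could drive a workflow, here is a small pandas sketch that uses a hypothetical data dictionary CSV (with the columns listed above) to decide which incoming fields to keep; the file names and flag values are assumptions.

import pandas as pd

# "data_dictionary.csv" and its column names mirror the attribute list above; all hypothetical.
dictionary = pd.read_csv("data_dictionary.csv")
incoming = pd.read_csv("source_extract.csv")

# e.g. keep only required, non-sensitive columns as defined by the dictionary
keep = dictionary.loc[
    (dictionary["Required Field"] == "Y") & (dictionary["Sensitive Data"] == "N"),
    "Column Name",
]
selected = incoming[[c for c in incoming.columns if c in set(keep)]]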

While In-DB tools are very helpful and cut down the time needed to write complex SQL, there are some steps that are faster when writing SQL directly, like window functions: OVER (PARTITION BY ...). In Alteryx, we need to create multiple joins and summaries to perform a window function. It would be immensely helpful if there were a SQL editor tool for In-DB workflows where we could edit the SQL code at any point in the workflow, or even better, an "edit" option on every In-DB tool where we could customize the generated SQL code and then send it to the next tool.

 

This will cut down the time immensely and streamline the workflow to make Alteryx a true contender for the ETL solution space.
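For context, this is the kind of computation meant: a windowed aggregate that SQL expresses in a single OVER (PARTITION BY ...) clause but that currently takes a Summarize plus a Join in-DB. A rough pandas equivalent with made-up column names:

import pandas as pd

# Made-up columns; this one transform() call corresponds to SUM(sales) OVER (PARTITION BY region),
# which in-DB today takes a Summarize (group by region) joined back to the detail rows.
df = pd.DataFrame({"region": ["East", "East", "West"], "sales": [100, 150, 200]})
df["region_total"] = df.groupby("region")["sales"].transform("sum")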

Hello Community,

 

 

I was wondering if there is a tool that could de-duplicate records after serializing (i.e. after using the Transpose tool), with a given priority for each field across one of the keys. For example:

 

ID  Origin  Field Name  Value
1   A       NAME        JACK
1   B       NAME        PETER
1   B       ZIP CODE    15024
1   C       ZIP CODE    15024
1   D       TYPE        MID
1   H       TYPE        PKL

 

Assuming for the field name NAME, the priority should be [ A, B ]

ZIP CODE -> [ C, B ]

TYPE -> [ H, D ]

The expected outcome for Id 1 should be -> JACK, 15024, PKL

Record discarded -> PETER, 15024, MID

In this case I'm using ID and Origin as keys in the Transpose Tool.
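For what it's worth, here is a small pandas sketch of that priority logic (essentially what a Python tool solution would do); the frame below mirrors the sample table and priority lists above.

import pandas as pd

# Priority-based de-duplication on the transposed (long) data shown above.
rows = pd.DataFrame({
    "ID":         [1, 1, 1, 1, 1, 1],
    "Origin":     ["A", "B", "B", "C", "D", "H"],
    "Field Name": ["NAME", "NAME", "ZIP CODE", "ZIP CODE", "TYPE", "TYPE"],
    "Value":      ["JACK", "PETER", "15024", "15024", "MID", "PKL"],
})

priority = {"NAME": ["A", "B"], "ZIP CODE": ["C", "B"], "TYPE": ["H", "D"]}

# rank each record by where its Origin sits in that field's priority list (0 = best)
rows["rank"] = rows.apply(lambda r: priority[r["Field Name"]].index(r["Origin"]), axis=1)

# keep the best-ranked record per ID / Field Name, then pivot back to one row per ID
best = rows.sort_values("rank").groupby(["ID", "Field Name"], as_index=False).first()
wide = best.pivot(index="ID", columns="Field Name", values="Value")
# wide -> NAME = JACK, ZIP CODE = 15024, TYPE = PKL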

 

I just want to make sure there is no other route than the Python Tool.

 

Thank you

 

Luis

Was thinking with my peers at work that it might be good to have the Join tools expanded, both for desktop and in-database joins.

 

As for the desktop Join: the left and right outputs show only those records that are exclusive to that side of the operation. Would it be possible to also include the data that is in common?

As for the In-DB Join: it acts like a classic join (left with matching, right with matching data). Would it be possible to also get only-left and only-right join outputs?

 

 

Hi, I have searched through the community and wasn't able to find a duplicate of this idea; if in fact there is one, I apologize and please point me to that post. I think it would be a good idea to have date options in the Summarize tool that allow grouping at higher levels of a date. I often have a date field that is specific to the day (e.g. 2018-01-01), and I just want to group by the year or month. Currently, in order to do this, I have to create a formula before the Summarize tool that formats the date according to how I want to group it, and then I am able to group on that field in the Summarize tool. It would be nice if, in the Summarize tool, I could select the date field and then have the option to group it at year, month, week, etc.
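To illustrate the behaviour being asked for, here is the equivalent roll-up in pandas (made-up data), where a daily date can be grouped at month or year level without a separate formatting step:

import pandas as pd

# Made-up data; daily dates rolled straight up to month and year.
df = pd.DataFrame({
    "date":  pd.to_datetime(["2018-01-01", "2018-01-15", "2018-02-03"]),
    "sales": [10, 20, 30],
})
by_month = df.groupby(df["date"].dt.to_period("M"))["sales"].sum()
by_year  = df.groupby(df["date"].dt.year)["sales"].sum()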

Please create the ability to Concat a field in the In-DB Summarize Tool similar to the regular Summarize tool. This would enable much faster processing on concatenating fields using the database's processing power vs. the local machine.

Hi,

This feature isn't a must - but would definitely be a nice to have.

Similar to Excel having a tab with key figures like average, count and sum,

it would be a really good idea to do something similar within Alteryx, just to have a quick glance at key figures/functions (example attached - apologise for the bad paint job, but it would definitely look good with the Alteryx colour scheme).


Thanks


These tools seem to be volatile: if you click on them before you run the workflow, they lose their configuration. This is infuriating. Can we change this to be like every other tool, where you can copy, paste, or click into it at any time and it remembers its config?

 

Nick

PLEASE add a count function to Formula/Multi-Row Formula/Multi-Field Formula!

 

I have searched for alternatives but am just confused about how to store the total number of rows from Summarize or Count Records in a variable that can then be used within a Formula tool. It should not be that difficult to add equivalents of R's nrow() and ncol().

Improve the Help documentation or in-tool options for handling null values in statistical tools like Weighted Average or Linear Regression. For instance, a checkbox to remove null-value records, or at least a warning to users.

 

In the process of learning to perform linear regression in RStudio and Alteryx, I came across differing outputs depending on how null values were addressed. Take the Weighted Average tool, for example.

 

In R, the weighted.mean function can treat null values in the variable of interest as if they were not there (with na.rm = TRUE). If the user does not specify how null values should be handled, the result is NA. If any null values exist in the weight field, the result is NA.

 

Since I am more familiar with Alteryx, I originally did the data preparation—including calculating the weighted means—in Alteryx. When comparing these weighted means with those generated in R, I found that Alteryx treats the null values as zeros (i.e. includes them in the calculation). The user would have to know this is incorrect and first filter out the null values. See screenshot examples.

 

 

 

This is also the case within the Linear Regression tool. If null values are not omitted prior to regression, the results are wildly different. Perhaps this is known by more experienced users/statisticians, but this incorrect usage would have gone on unbeknownst to me had I not cross-checked with RStudio.

 

[Screenshots: Weighted Average in Alteryx; Weighted Mean in R]
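To make the difference concrete, here is a small numpy sketch (made-up numbers) contrasting the two treatments described above: a null counted as zero versus the null record dropped before taking the weighted mean.

import numpy as np

# How a single null changes a weighted mean depending on its treatment.
values  = np.array([10.0, 20.0, np.nan, 40.0])
weights = np.array([1.0,  1.0,  1.0,    1.0])

# Null treated as zero (the behaviour the post reports for Alteryx)
as_zero = np.nansum(np.nan_to_num(values) * weights) / weights.sum()      # 17.5

# Null record dropped first (matching R's weighted.mean(..., na.rm = TRUE))
mask    = ~np.isnan(values)
dropped = np.average(values[mask], weights=weights[mask])                 # ~23.33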

Another seemingly minor one that would just make life a little easier and clean up workflows.  I often find myself renaming the Name/Value fields from a Transpose to be more descriptive.  Currently this requires a Select tool after each transpose, and it would be nice to put a couple of text boxes at the bottom of the transpose to rename Name/Value directly in the tool itself.

Is there a way we can turn any tool in the workflow on or off? That way, when a certain tool is marked off, it is not executed when we run the workflow. We could then test the workflow and check different outputs without deleting the existing tools; we would just turn them on or off.

Hello Alteryx Devs - 

 

When I go to write some scripting in the Formula tool, my data stream's properties should be the first to be suggested once a user starts typing a letter, not the last.

 

uppercase(Ad -> gives me:

 

DateTimeAdd

FileAddPaths

PadLeft

PadRight

ReadRegistryString

[Address]

 

I think we would need a dedicated R macro to ascertain the chances that anyone is going to need [ReadRegistryString] before they need a column of their own data that starts with [Ad...].

 

Easy fix.  Makes a big difference.  

 

Thanks.

I am on a forecasting project where we convert one vector of forecasts into another vector of forecasts by multiplying by a conversion matrix. This is very clumsy and fragile to do in Alteryx, meaning we have to drop out to Excel. The ability to do very simple matrix multiplication in Alteryx would be very useful here and in other use cases. I realise you can probably exit to R and do the job, but for something so basic that shouldn't be required.

 

The relational representation of an m×p matrix is a three-column table of cardinality m×p with columns { I, J, A }, where I labels the first index set with index i, J labels the second index set with index j, and A labels the numeric values with value a(i,j). Given a second p×n matrix { J, K, B } in relational form, we should be able to multiply them to get an m×n matrix { I, K, C } in relational form, where of course c(i,k) = sum over j in J of a(i,j)*b(j,k).

 

Vectors can of course be represented as 1×n and n×1 matrices. If you really wanted to go to town, this could be generalised to array processing à la APL2.
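For illustration, a pandas sketch of exactly that relational product (the join/multiply/summarize pattern the Alteryx tools would otherwise have to reproduce); the small matrices are made up.

import pandas as pd

# Made-up matrices in relational {I, J, A} / {J, K, B} form.
A = pd.DataFrame({"I": [1, 1, 2], "J": [1, 2, 1], "A": [2.0, 3.0, 4.0]})
B = pd.DataFrame({"J": [1, 2], "K": [1, 1], "B": [5.0, 7.0]})

# c(i,k) = sum over j of a(i,j) * b(j,k): join on J, multiply, then group by I and K
C = (A.merge(B, on="J")
       .assign(C=lambda d: d["A"] * d["B"])
       .groupby(["I", "K"], as_index=False)["C"].sum())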

Access to only MD5 hashes via MD5_ASCII(String) and MD5_UNICODE(String) found under string functions is limiting.  Is there a way to access other hashing algorithms, ideally via the crypto algorithms from OpenSSL or the .NET framework? 

 

  - https://msdn.microsoft.com/en-us/library/system.security.cryptography.hashalgorithm(v=vs.110).aspx
  - https://wiki.openssl.org/index.php/Command_Line_Utilities#Signing_.2F_Digest 

 

Hashing functions are a very useful tool to have. There are many different types of hashes and each one has tradeoffs for different uses. This can range from error checking, privacy shielding, password protection, forensic analysis, message authentication (HMAC) and much more. See: http://stackoverflow.com/questions/800685/which-cryptographic-hash-function-should-i-choose 

 

- For workflows with data containing existing hashes, being able to consistently create hashes from non-hashed data for comparison is useful.
- Hashes are also useful because they are the same outside the Alteryx environment. They can be used to confirm correct operation of a production system or a third party's external process.

 

Access to only MD5 hashes via MD5_ASCII(String) and MD5_UNICODE(String) found under string functions in the formula tool is a start, but quite limiting. 

 

Further, the ability to use non-cryptographic hashes and checksums would be useful, such as MurmurHash or CRC.  https://en.wikipedia.org/wiki/List_of_hash_functions

Having the implementation benefit from hardware acceleration (AES-NI / CUDA) would be a great plus for high volume applications. 
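As a stopgap today, the Python tool can already reach the standard library's hashlib and zlib implementations; a minimal sketch (the field value is a placeholder):

import hashlib
import zlib

value = "customer@example.com"                       # placeholder field value

sha256 = hashlib.new("sha256", value.encode("utf-8")).hexdigest()
sha1   = hashlib.sha1(value.encode("utf-8")).hexdigest()
crc32  = zlib.crc32(value.encode("utf-8"))            # non-cryptographic checksum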

 

For reference, these are some hash algorithms that could be useful in workflows:

SHA-1
SHA-256
Whirlpool
xxHash
MurmurHash
SpookyHash
CityHash
Checksum
CRC-16
CRC-32
CRC-32 MPEG-2
CRC-64
BLAKE-256
BLAKE-512
BLAKE2s
BLAKE2b
ECOH
FSB
GOST
Grøstl
HAS-160
HAVAL
JH
MD2
MD4
MD6
RadioGatún
RIPEMD
RIPEMD-128
RIPEMD-160
RIPEMD-320
SHA-224
SHA-384
SHA-512
SHA-3 (originally known as Keccak)
Skein
Snefru
Spectral Hash
Streebog
SWIFFT
Tiger


Recently loaded the new v11 and am getting used to it. One immediate gripe is that the new version of the Formula tool no longer supports multiple-field actions. In the prior version I could change data types on many fields at once. I could move multiple fields in a block at once. There were a few other things, but these are the ones I am sorely missing on my first use of v11. I created about 20 fields in quick succession, just getting the names down, and then went back and put in formulas which were variations on a theme. When done, I noticed the default data type was V_WString and I wanted integer. In the past it was no big deal, because I could select the block or interspersed fields and then right-click to change the data type for all of them at once. It was very handy and now appears to be gone. Please bring these things back.

Ever since Alteryx 11 came out, the way Dates and DateTimes are handled and computed has changed from v10. Formulas that I had working before no longer work. The single biggest culprit I tend to see for this problem is that Alteryx 11 no longer seems able to intelligently compare Date and DateTime formats. This is kind of annoying because it forces me to run a DateTime function on all my Date fields before doing comparisons.


For example, I have a formula that I use to calculate if a date is the beginning of the month.  That formula is:

 

IF DateTimeTrim([Snapshot Date],"month") = [Snapshot Date]
THEN 1
ELSE 0
ENDIF

Where in the above, Snapshot Date is a date field with data incoming in a format like "2017-01-01".

 

In Alteryx 10, this formula returned true, as expected. However, in Alteryx 11, it returns false. When I dove into this a bit more, I noted that DateTimeTrim will always return a DateTime format, so the formula is attempting to compare "2017-01-01 00:00:00" to "2017-01-01". For some reason, Alteryx no longer considers this comparison to be true.

 

To address this, I now have to do:

IF DateTimeTrim([Snapshot Date],"month") = DateTimeTrim([Snapshot Date], "day")
THEN 1
ELSE 0
ENDIF

My suggestion: Let comparisons between Date and DateTime formats work with the assumption that any Date field is as of midnight that day.  In the example above, Alteryx would implicitly assume that "2017-01-01" is "2017-01-01 00:00:00" for any comparisons to DateTime, like it did in the past.

As my Alteryx workflows become more complex and involve integrating and conforming more and more data sources, it is becoming increasingly important to be able to communicate what the output fields mean and how they were created (i.e. the transformation rules) as output for end-user consumption, particularly for the target-state file output.

 

It would be great if Alteryx could do the following: 

1. Produce a simple data dictionary from the Select tool and the Output tool. The Select tool more or less contains everything that is important to the business user; it would be awesome to have a way to export this along with the actual data produced by the Output tool (hopefully this is something I've overlooked and it is already offered).

Examples:

  • Using Excel: produce the output data set in one sheet and the data dictionary for all of its attributes in a second sheet (see the sketch after this list).
  • For an ODBC output: load the data set to the database and offer the option to create the data dictionary as either a database table or a CSV file (you'd also want to offer the ability to append that data to an existing dictionary file or table).
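As a sketch of the Excel variant (the field names and descriptions below are made up), the output data and its field dictionary could land in one workbook on two sheets:

import pandas as pd

# Hypothetical output data and a dictionary describing its fields.
data = pd.DataFrame({"customer_id": [1, 2], "revenue": [100.0, 250.0]})
dictionary = pd.DataFrame({
    "Column Name": ["customer_id", "revenue"],
    "Data Type":   ["Int64", "Double"],
    "Description": ["Surrogate key from CRM", "Net revenue in USD"],
})

# One workbook: the data on one sheet, its data dictionary on a second.
with pd.ExcelWriter("target_output.xlsx") as writer:
    data.to_excel(writer, sheet_name="Data", index=False)
    dictionary.to_excel(writer, sheet_name="Data Dictionary", index=False)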

 

2. This one is more complex, but would be awesome. Export the workflow into a spreadsheet Source-to-Target (S2T) format along with supporting metadata / data dictionary for every step of the ETL process. This is necessary when I need to communicate my ETL processes to someone who cannot afford to purchase an Alteryx licence but is required to review and approve the ETL process that I have built. I'd be happy to provide examples of how someone would likely want to see that formatted.
