
Alteryx Analytics Cloud Product Ideas

Share your Alteryx Analytics Cloud product ideas, including Designer Cloud, Intelligence Suite and more - we're listening!

Currently, when a recipe is copied, any data quality rules within the original recipe are not duplicated in the copy. To implement a systematic data quality program, the rules must be recreated manually for every single recipe, which takes a lot of time. It would be great if data quality rules persisted when a recipe is copied.

Being able to Publish outputs directly to Google Sheets would be a major benefit for Sheets users.
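Until this exists, a minimal sketch of the manual workaround, assuming a Google service account and the gspread library (the file names and sheet title are placeholders):

```python
# Push a job's exported CSV output into a Google Sheet with gspread.
import gspread
import pandas as pd

df = pd.read_csv("job_output.csv")  # output exported from a flow

gc = gspread.service_account(filename="service_account.json")
ws = gc.open("Monthly Report").sheet1

# Overwrite the sheet with the header row plus the data rows.
ws.clear()
ws.update([df.columns.tolist()] + df.values.tolist())
```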

We need the ability to create folders underneath plans. We can create folders underneath flows, but not underneath plans. Additionally, we need the ability to create sub-folders inside these parent flow and plan folders. It is hard to organize flows and plans without being able to put them into categories (folders) and subcategories (sub-folders) once you approach hundreds of plans and flows.

I often receive datasets that have rows above the column headers which I don't need. When importing a dataset, there is a dropdown on the edit menu to "make the first row a column header". I would like this dropdown to also include an option such as "make row 20 the column header and delete all preceding rows", so the data is imported with the correct column headers already in place. When dealing with one dataset, I can always choose any row to become the column headers, but when you have to join 20 similar datasets, it is not practical to do the same. It seems like something that could be easily incorporated into the tool. Thanks!
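For comparison, here is a minimal pandas sketch of the requested behavior; the file paths are placeholders:

```python
# What the requested import option would do, sketched in pandas:
# treat row 20 (1-based) as the header and drop everything above it.
import glob

import pandas as pd

# header is 0-based, so "row 20" in the UI is header=19 here; all
# preceding rows are skipped automatically.
df = pd.read_csv("dataset.csv", header=19)

# The same setting applied across 20 similar files before a join/union:
frames = [pd.read_csv(path, header=19) for path in glob.glob("datasets/*.csv")]
combined = pd.concat(frames, ignore_index=True)
```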

Hello,

I need to store many variable regex patterns in a column and use them in the MATCHES function (for example).

But Dataprep doesn't currently support using a column as input such that the pattern inside it is read as an actual regular expression.

I think this could be a great feature!

Thanks.

More information about this case: https://community.trifacta.com/s/question/0D53j00007kB5UmCAK/matches-function-using-pattern-regex-st...
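A minimal Python sketch of the desired semantics, with illustrative column names and patterns: the pattern for each row is read from a column and applied as a real regular expression, which is what MATCHES cannot currently do with a column input.

```python
import re

import pandas as pd

df = pd.DataFrame({
    "value":   ["AB-123", "XY-999", "hello"],
    "pattern": [r"AB-\d+", r"XY-\d+", r"\d+"],
})

# Row-wise: read the pattern stored in the row, then test the value.
df["matches"] = df.apply(
    lambda row: bool(re.fullmatch(row["pattern"], row["value"])), axis=1
)
print(df)  # "hello" does not fully match r"\d+", so its flag is False
```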


Please redirect the user back to the page where the session expired, instead of redirecting to the home page once the user re-authenticates.

In the current scenario, the user is redirected to the home page instead of the page they were on when the session expired after the time set in the config (30 minutes in my case). Sometimes a user goes to a meeting and forgets about the page they were working on, and after re-authenticating they have to re-open everything from the home page.

Case: 00027615 - we created a case for this issue but learned that the functionality is not present.
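A minimal sketch of the requested behavior (Flask, illustrative only, not Trifacta's actual implementation): remember the page the user was on, then return there after re-authentication instead of landing on the home page.

```python
from flask import Flask, redirect, request, session, url_for

app = Flask(__name__)
app.secret_key = "dev-only"  # placeholder

@app.before_request
def require_login():
    if "user" not in session and request.endpoint != "login":
        # Remember where the user was before sending them to login.
        session["next_url"] = request.url
        return redirect(url_for("login"))

@app.route("/login", methods=["GET", "POST"])
def login():
    # ...authenticate against the IdP here...
    session["user"] = "demo"
    # Redirect back to the remembered page, falling back to home.
    return redirect(session.pop("next_url", url_for("home")))

@app.route("/")
def home():
    return "home"

@app.route("/flows/<flow_id>")
def flow(flow_id):
    return f"flow {flow_id}"
```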

We had an OAuth login issue when trying to set up Snowflake, as we use Okta as our IdP for Snowflake.

We want our users to create their own Snowflake connections using their personal credentials through the IdP, which will enforce their role in Snowflake so they can see only the schemas they are allowed to see.

We cannot create a generic connection because it would provide more data access than users need, and the data involves PII, so we want to utilize their Snowflake functional roles to restrict access.

It's a really good use case for anyone using Snowflake with an IdP and RBAC set up in Snowflake.
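For reference, per-user SSO is how the Snowflake Python connector keeps RBAC intact; a sketch with placeholder account and user values:

```python
import snowflake.connector

# authenticator="externalbrowser" sends the user through the
# configured IdP (Okta here), so the session runs under their own
# functional role rather than a shared service account.
conn = snowflake.connector.connect(
    account="myorg-myaccount",    # placeholder
    user="jane.doe@example.com",  # placeholder
    authenticator="externalbrowser",
)

# Queries see only the schemas granted to the user's current role.
cur = conn.cursor()
cur.execute("SELECT CURRENT_ROLE()")
print(cur.fetchone())
```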

Currently, when pivoting a specific field into multiple columns, all other fields you want present in the resulting table must be individually added to "row labels".

First off - it is very time-consuming when you have a lot of columns to add.

Secondly - When new columns are added in the source data, these new fields are not automatically included. When this happens we need to:

  1. re-sample the data,
  2. make sure the new column is present,
  3. manually add it to the list in the row labels.


When using an automation tool such as Trifacta, I would expect my flow to handle new columns being added without my having to fix the flow every time. Adding an "All other fields" option, or the ability to select the fields to exclude, would make this process much smoother and ensure that our flows are future-proof.
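A pandas sketch of the "All other fields" behavior, with illustrative column names: the row labels are computed from the schema at run time, so a new source column is picked up without editing the step.

```python
import pandas as pd

df = pd.read_csv("source.csv")  # placeholder path

pivot_key, pivot_value = "metric", "amount"
# Everything that is not being pivoted becomes a row label, so new
# source columns flow through automatically on the next run.
row_labels = [c for c in df.columns if c not in (pivot_key, pivot_value)]

wide = df.pivot_table(index=row_labels, columns=pivot_key,
                      values=pivot_value, aggfunc="sum").reset_index()
```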

Currently, if data needs to be brought in from tables residing in different databases of the same cluster, we have to create n connections for n databases.

But since they are in the same cluster, one should be able to access the different databases with a single connection; otherwise the connection list gets long and messy.
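What a single cluster-level connection would allow, sketched with pyodbc and three-part names (the DSN, databases, and tables are placeholders):

```python
import pyodbc

conn = pyodbc.connect("DSN=my_cluster")  # one connection to the cluster

cur = conn.cursor()
# Fully qualified database.schema.table names reach different
# databases over the same connection; no second connection is needed.
orders = cur.execute("SELECT * FROM sales_db.dbo.orders").fetchall()
invoices = cur.execute("SELECT * FROM finance_db.dbo.invoices").fetchall()
```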

At the moment, long formulas are very difficult to read because they cannot be "beautified". Instead of allowing for multi-line text and indentation, our formulas are in a single-line, wrapped textbox. It would be very helpful if Trifacta supported "beautification-enabled" textboxes for formulas so that we can write formulas that are easy to read and understand.

Standardize is an amazing function! ... if you know that you won't have any more values added to a column later. With standardize, it's impossible to account for future source values.

It would be super helpful if there were a way to add additional Source Values (and, accordingly, New Values for those source values) to account for values that might appear in the future (but aren't in your data right now).

I realize there are already a number of ways to account for "future" values. Some examples include if...then...else, condition column > case on single column, condition column > case on custom column. However, these transforms are not friendly for those unfamiliar with coding in low-code tools, and this no-code upgrade to Standardize could help these users.
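The no-code upgrade described above, sketched as a simple lookup with a fallback (values are illustrative): known source values map to new values, and anything that appears later falls through to a default instead of breaking.

```python
import pandas as pd

df = pd.DataFrame({"state": ["CA", "Calif.", "California", "NV", "Texas"]})

standardize = {"CA": "California", "Calif.": "California",
               "California": "California", "NV": "Nevada"}

# Unmapped ("future") values keep their original text rather than
# becoming nulls -- the behavior the idea asks Standardize to support.
df["state_std"] = df["state"].map(standardize).fillna(df["state"])
```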

Please extend the current "Folder" feature by allowing us to share folders with other users, move a flow that has been shared with us from the root folder to a sub-folder, etc.

Someone has already submitted an idea for multiple levels of sub-folders which was another request that we had. Thanks.

It would be nice for Trifacta to be able to export files in CDM (Common Data Model) format to ADLS Gen2 so that they can be fed automatically into Power BI for reporting purposes.

Hi,

I use import by folder for GCS files to import many files at once (both files present now and future files dropped into the same folder). Sometimes the data schema is not exactly the same for every file, but the column names are always the same! I would like to use "union by name" for the initial union of the files in the imported folder. With this option, the import would keep working even if the data schema changes in the future.

We could have a screen like the "recipe union screen" for this "import with union" (for imports by folder), to select the columns to import and the type of matching, for example (see the sketch after this post)...

This is a real issue for me, because when the data schema of one file changes, the scheduled runs fail...

Sorry for my bad English, I'm French :-)

Thanks!
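A pandas sketch of what "union by name" means here (paths are placeholders): frames are aligned on column names, so files whose columns arrive in a different order, or gain a new column, still union cleanly.

```python
import glob

import pandas as pd

frames = [pd.read_csv(path) for path in glob.glob("gcs_folder/*.csv")]

# Columns are matched by name, not by position; a column missing from
# one file is filled with NaN instead of failing the scheduled run.
combined = pd.concat(frames, ignore_index=True, sort=False)
```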

At the moment, the only edit history visible to Trifacta users is within each recipe. Some actions are performed within flows rather than recipes, e.g. recipe creation, deletion, taking a union, etc. Such actions are not covered by the edit history, but for compliance and troubleshooting purposes it is important for users to know when these actions were taken and by whom. Would it be possible to add an edit history on flows as well as within recipes?

As far as I know, the current error logs for a failed Trifacta job do not tell the user which recipe, which recipe step, or what data caused the error.

This lack of basic information at the Trifacta level makes it hard for a normal user to debug Trifacta jobs. Typically, I have to work backwards through the flow, attaching and running an output for each recipe until I find the culprit recipe, and then disable steps one by one until I find the step that causes the recipe to fail. This is time- and resource-consuming.

As for the offending data triggering the problem, I still don't know how to get that, and that's actually crucially important for an ongoing issue we're having with Spark execution.

Therefore, I suggest that improved and simplified error logging would be very helpful in fixing problems in the future. Thank you for your consideration.

Users onboarded to Trifacta cannot be deleted from the GUI, only via the API. In the GUI, users can only be disabled, but they still count toward the licensed user total. Please allow users to be deleted from the GUI.
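The API-only path admins currently have to use, sketched with requests. The endpoint shown follows the Trifacta v4 REST API naming, but treat the exact path, host, and token as assumptions to verify against your deployment's API documentation.

```python
import requests

BASE = "https://trifacta.example.com"          # placeholder host
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder token

user_id = 42  # placeholder user id
# Assumed endpoint shape: DELETE /v4/people/{id}
resp = requests.delete(f"{BASE}/v4/people/{user_id}", headers=HEADERS)
resp.raise_for_status()
```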

We would like to be able to split users into different user groups within the same workspace. Permissions to view and edit outputs, job runs and flows (including for administrators) would be allocated on a user group level (e.g. so one user group can edit flows, the associated job runs and outputs while other groups can only view them). Administrators would be allocated to a particular user group and their admin rights would apply to their own group only.

Publishing to Parquet has been more performant than publishing to CSV, and we would love to have this feature implemented.

We heavily use Tibco Data Virtualization server views and web services in our organization.

We need an official connector supported by Trifacta to connect to them and fetch data.