Alteryx Analytics Cloud Product Ideas

Share your Alteryx Analytics Cloud product ideas, including Designer Cloud, Intelligence Suite and more - we're listening!

Publishing to Parquet has been more performant than publishing to CSV, and we would love to have this feature implemented.

Currently, scheduling is only allowed at the instance level.

The request is to be able to enable scheduling for a particular user, instead of for all users on the instance.

A client's organization separates GCP project administration from tool (Dataprep) administration. Currently, usage charts are only visible in the admin console, which is only accessible to project admins. We would like an additional IAM role, or an ability to see usage charts without the ability to make project changes.

Currently, when pivoting a specific field into multiple columns, all other fields you want present in the resulting table must be individually added to "row labels".

First off, it is very time-consuming when you have a lot of columns to add.

Secondly, when new columns are added to the source data, these new fields are not automatically included. When this happens we need to:

  1. re-sample the data,
  2. make sure the new column is present,
  3. manually add it to the list in the row labels.


When using an automation tool such as Trifacta, I would expect my flow to handle new columns being added without my having to fix the flow every time. Adding an option to add "All other fields", or the ability to select the fields to exclude, would make this process much smoother and would ensure that our flow is future-proof.
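To make the "All other fields" idea concrete, here is a rough pandas sketch (the column names are made up): the row labels are computed as every column except the ones being pivoted, so a column that appears later in the source is picked up automatically.

```python
import pandas as pd

df = pd.DataFrame({
    "store": ["A", "A", "B"],
    "region": ["EU", "EU", "US"],   # imagine this column was added to the source later
    "category": ["food", "toys", "food"],
    "value": [10, 20, 30],
})

# Row labels = every column except the pivot column and the value column,
# so a newly added source column is included without editing the recipe.
index_cols = [c for c in df.columns if c not in ("category", "value")]
wide = df.pivot_table(index=index_cols, columns="category",
                      values="value", aggfunc="sum").reset_index()
print(wide)
```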

Right now there is no single place where team members can collectively create and share flows. If we had the option to share folders among different members, just as we do for flows, it would be a lot easier. For example: if there is a folder with 4 different flows and I share the folder with my teammates, they can edit and create new flows there and can see all 4 flows already present. Today, if I share only 2 of the 4 flows with someone, they see the folder but they don't see the flows that were not shared with them.

We would like to connect Trifacta to a SharePoint file in a folder.

As far as I know, the current error logs for a failed Trifacta job do not tell the user which recipe, which recipe step, or what data the error was thrown on.

This lack of basic information at the Trifacta level makes it hard for a normal user to debug Trifacta jobs. Typically, I have to work backwards through the flow, attaching and running an output for each recipe until I find the culprit recipe causing the issue, and then disable steps one by one until I find the step that makes the recipe fail. This is time- and resource-consuming.

As for the offending data triggering the problem, I still don't know how to get that, and that's actually crucially important for an ongoing issue we're having with Spark execution.

Therefore, I suggest that improved and simplified error logging would be very helpful in fixing problems in the future. Thank you for your consideration.

My use case: when looking at the data in a target table, I need a column that indicates which flow loaded the data into that table. This is useful for bug fixing and for tracing data issues back to a flow.

Right now we hardcode this value as a new column in a recipe step, but if a developer changes the flow name, he or she has to manually update the recipe step to reflect it. If instead we could reference a dynamic flow name, in the same way that $Filepath gives us the file path, it would be very useful.

When I export a flow that contains a reference dataset, the name of the JSON file downloaded doesn't match the name of the flow that was exported. Instead it matches the name of the reference dataset inside the flow. I would like to change this so that a flow always keeps its original name when exported from Trifacta.

We need the full steps, for both GCP Dataprep and GCP, to run scheduled jobs as a true service account (not a user account) without requiring authentication from the owning user account (which times out overnight due to the 16-hour session policy we have for users).

So when we schedule a job, we should be able to choose a true technical account to "run the job as".

We have an issue because our AD users are synchronised from on-premise and a 16-hour timeout policy is applied to each user, so any job scheduled under a user account will fail after 16 hours and the job will be disabled. There is no way for our company to sync AD users to GCP IAM without this on-premise policy, so we need to be able to run with a service account.

Please redirect the user back to the page where the session expired, instead of redirecting to the home page once the user re-authenticates.

In the current scenario, the user is redirected to the home page instead of the page they were on when the session expired after the time set in the config (in my case, 30 minutes). This matters because sometimes a user goes to a meeting, forgets about the page they were working on, and then has to re-open everything from the home page after re-authenticating.

Add an option so that scheduled jobs that fail are automatically restarted after X minutes. Most of the time when I have a job failure and rerun it, it completes fine.

Allow a connection to a geocoding system, like USPS or Google, that lets you join and run a demographics dataset through it to have longitude and latitude added to the output for mapping. I can see a lot of uses for this, especially in the marketing and advertising sector.
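For illustration only, this is roughly the lookup such a connection would perform, sketched in Python against the public Google Geocoding API as one possible backing service (the address and the `YOUR_API_KEY` placeholder are hypothetical):

```python
import requests

def geocode(address: str, api_key: str):
    """Look up latitude/longitude for a postal address via the Google Geocoding API."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"address": address, "key": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])
    if not results:
        return None  # address could not be geocoded
    location = results[0]["geometry"]["location"]
    return location["lat"], location["lng"]

# Example (hypothetical key):
# print(geocode("1600 Amphitheatre Parkway, Mountain View, CA", "YOUR_API_KEY"))
```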

We often use hashing functions like fingerprint in SQL (BigQuery) to mark or identify rows that match on specific attributes, or to generate UUIDs. I know it's possible to do this by adding UDFs, but it would be more convenient to have a native function.
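To show the behaviour being asked for, here is a rough Python sketch, assuming the intent is something along the lines of BigQuery's FARM_FINGERPRINT and GENERATE_UUID (the hash used here is illustrative, not the actual FarmHash algorithm):

```python
import hashlib
import uuid

def row_fingerprint(*values) -> int:
    """Deterministic 64-bit fingerprint of a row's key attributes,
    loosely analogous to FARM_FINGERPRINT over a concatenated key."""
    key = "|".join("" if v is None else str(v) for v in values)
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big", signed=True)

def row_uuid() -> str:
    """Random identifier, analogous to GENERATE_UUID()."""
    return str(uuid.uuid4())

print(row_fingerprint("ACME", "2024-01-31", 42.5))
print(row_uuid())
```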

When using a SQL statement with a WITH query expression, I get the following error: "No select statement found". I was told that WITH statements are not currently supported.

Why this should be changed:

  1. WITH statements are very important for structuring long and complex SQL scripts and for reducing heavily nested (unreadable) SQL.
  2. We have a lot of scripts that we want to migrate, but we are stuck, as it would take too much time and effort to rewrite them. The same goes for moving the logic into Dataprep recipes.
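To make the request concrete, here is a minimal sketch (hypothetical project, dataset, and table names) of the CTE form that the custom SQL import currently rejects, next to the inline-subquery rewrite that serves as the workaround today:

```python
# The CTE form that the custom SQL importer currently rejects with
# "No select statement found" (hypothetical table names).
CTE_QUERY = """
WITH daily_totals AS (
  SELECT order_date, SUM(amount) AS total
  FROM `my_project.sales.orders`
  GROUP BY order_date
)
SELECT order_date, total
FROM daily_totals
WHERE total > 1000
"""

# Today's workaround: inline the CTE as a nested subquery so the statement
# starts with SELECT, which is exactly the kind of nesting WITH is meant to avoid.
WORKAROUND_QUERY = """
SELECT order_date, total
FROM (
  SELECT order_date, SUM(amount) AS total
  FROM `my_project.sales.orders`
  GROUP BY order_date
) AS daily_totals
WHERE total > 1000
"""
```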


Best regards

Marcel



Details about the syntax:

https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax

Related customer questions

  • https://community.trifacta.com/s/question/0D53j00006OatIdCAJ/is-it-possible-to-use-cte-common-table-expressions-when-importing-a-data-set-using-custom-sql-i-am-specifically-trying-to-use-with-statements


Hi,

I use import by folder for GCS files to import many files at once (both current and future files dropped into the same folder). Sometimes the data schema is not exactly the same for all the files, but the column names are always the same! I would like to use "union by name" for the initial union of the files in the imported folder. With this option, even if the schema changes in the future, my import will still be fine!

We could have a screen like the recipe union screen for the "import with union" (for imports by folder), to select the columns to import and the type of matching, for example...

This is a real issue for me, because when the data schema of one file has changed, the scheduled runs break...
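The behaviour I'm asking for is roughly what this pandas sketch does locally (the folder path is made up): each file is read and the union is aligned by column name rather than by position, so a file whose schema differs in column order, or that adds a new column, still combines cleanly.

```python
import glob
import pandas as pd

# Read every CSV in the (hypothetical) folder and union them by column NAME:
# pd.concat aligns on column labels, so a different column order or an extra
# column still lines up, with missing values filled as NaN.
frames = [pd.read_csv(path) for path in sorted(glob.glob("landing_folder/*.csv"))]
combined = pd.concat(frames, ignore_index=True, sort=False)
print(combined.columns.tolist())
```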

Sorry for my bad English, I'm French :-)

Thanks!

When you select columns to apply a function or transformation to, the available selection methods are:

  • Multiple
  • All
  • Range
  • Advanced

But it is not possible to select columns with a "RegEx match" method on the column names.

It would be much easier!
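For reference, this is the kind of selection I mean, shown as a small pandas sketch with made-up column names: a regular expression picks out every matching column, and the transformation is applied to just those.

```python
import pandas as pd

df = pd.DataFrame({
    "sales_2021": [100, 200],
    "sales_2022": [300, 400],
    "region": ["EU", "US"],
})

# Select every column whose name matches a regular expression, then apply
# the same transformation to all of them in one step.
matching = df.filter(regex=r"^sales_\d{4}$").columns
df[matching] = df[matching] * 1.1  # e.g. a 10% uplift on all sales_* columns
print(df)
```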

Why must I open every recipe to reload each sample?

For example:

I build flows with many recipes (between 60 and 100; this is a real case for me).

On Monday, I make a lot of modifications to the "data cleaning" steps at the start of the data wrangling chain.

On Tuesday, when I try to open other recipes, I get a warning message: "your sample needs to be updated"!

=> If I had a button to "update all samples of the flow", I would run it on Monday before going to sleep, and on Tuesday I could work with a smile!


PS: Sorry for my bad English, I'm a French user :-)

Read data from Google Drive.

I often receive datasets that have rows above the column headers which I don't need. When importing the dataset, there is a dropdown in the edit menu to "make the first row a column header". However, I would like this dropdown to include an option to, for example, "make row 20 the column header and delete all preceding rows". This would allow me to import the data with column headers already in place. When dealing with a single dataset, I can always choose any row to make it the column header, but when you have to join 20 similar datasets, it is not possible to do the same. Not sure if my idea is clear (lol), but it seems like something that could be easily incorporated into the tool. Thanks!
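As a point of reference, here is the requested behaviour expressed as a small pandas sketch (the file name and row number 20 are just examples): skip the unwanted leading rows and treat the next row as the header.

```python
import pandas as pd

# Skip the 19 unwanted rows above the header, then use row 20 of the raw
# file as the column names; everything before it is effectively dropped.
df = pd.read_csv("report.csv", skiprows=19, header=0)
print(df.columns.tolist())
```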