
Alteryx Analytics Cloud Product Ideas

Share your Alteryx Analytics Cloud product ideas, including Designer Cloud, Intelligence Suite and more - we're listening!

In order to monitor the status of a plan that runs several different flows (around 300 in my case), I send an HTTP request to Datadog to display the failed and successful results on a dashboard. The problem is that Datadog only understands epoch timestamps, not datetime values, and right now we cannot convert the timestamp into epoch. I was thinking of approaching this problem in the following ways:

1) Having a pre-request script (the kind of conversion it would need is sketched after this list)

2) Creating dynamic parameters in Dataprep, instead of using a fixed value, that can then be used in the HTTP request body

3) This is just a workaround: creating a table that stores the flow name and timestamp, and using that table in the plan every time we run a flow. It would work, but it is not the right way and wastes time, since we would end up creating a separate table like this for each flow.
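To make option 1 concrete, here is a minimal Python sketch of the datetime-to-epoch conversion such a pre-request step would have to perform. The timestamp string, its format, and the Datadog event fields are assumptions for illustration only:

```python
from datetime import datetime, timezone

# Hypothetical job-finish value as the plan would hand it to the HTTP task.
finished_at = "2024-03-11 02:15:37"

# Convert the datetime string to a Unix epoch value (seconds), which is
# what Datadog expects instead of a formatted datetime.
dt = datetime.strptime(finished_at, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
epoch_seconds = int(dt.timestamp())

# Example request body for a Datadog event; field names follow the public
# events API, but adjust them to whatever endpoint the dashboard uses.
payload = {
    "title": "Dataprep plan finished",
    "text": "Flow batch completed",
    "date_happened": epoch_seconds,
}
print(payload)
```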

My use case: when looking at the target table data, I need a column that indicates which flow loaded the data into that table. This is useful for bug fixes and for tracing data issues back to a flow.

Right now we hardcode this value as a new column in a recipe step, but if a developer changes the flow name, he or she has to manually update the recipe step to reflect the new name. If we instead had a dynamic flow-name variable, similar to $Filepath for the file path, it would be very useful.

We need the full steps on GCP Dataprep and GCP to allow us to run scheduled jobs as a true service account (not a user account), without requiring authentication of the owning user account (which times out overnight due to the 16-hour session policy we apply to users).

So when we schedule a job, we should be able to choose a true technical account to "run the job as".

We have an issue because our AD users are synchronised from on-premises and a 16-hour timeout policy is applied to each user, so any job scheduled under a user will fail after 16 hours and the schedule will be disabled. There is no way for us to sync AD users to GCP IAM without this policy, so we need to be able to run with a service account.
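To illustrate the direction we have in mind, here is a minimal sketch using the Cloud Scheduler Python client, where the trigger runs under a dedicated service account via an OIDC token, so no user session is involved. The project, service account, and job-run URL are placeholders; whether Dataprep's scheduling layer would accept such a trigger is exactly the gap this idea asks to close:

```python
from google.cloud import scheduler_v1

# Placeholders: project/region, a dedicated technical service account, and a
# hypothetical HTTP endpoint that would start the Dataprep job.
parent = "projects/my-project/locations/europe-west1"
service_account = "dataprep-runner@my-project.iam.gserviceaccount.com"
run_url = "https://example.com/hypothetical/dataprep/job-run"

client = scheduler_v1.CloudSchedulerClient()

job = scheduler_v1.Job(
    name=f"{parent}/jobs/nightly-dataprep-run",
    schedule="0 2 * * *",          # run at 02:00 every night
    time_zone="Europe/Amsterdam",
    http_target=scheduler_v1.HttpTarget(
        uri=run_url,
        http_method=scheduler_v1.HttpMethod.POST,
        # The scheduler authenticates as the service account, not as a user,
        # so the 16-hour user session policy never applies.
        oidc_token=scheduler_v1.OidcToken(service_account_email=service_account),
    ),
)

client.create_job(parent=parent, job=job)
```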

Have an option, when scheduling jobs, to restart them after X minutes if they fail. Most of the time when I have a job failure and rerun it, it completes fine.
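For illustration, this is roughly the behaviour we script around jobs ourselves today; run_job() is a hypothetical placeholder for whatever starts the job and reports success:

```python
import time

RETRY_DELAY_MINUTES = 15
MAX_ATTEMPTS = 3

def run_job() -> bool:
    """Hypothetical placeholder that starts the job and returns True on success."""
    ...

for attempt in range(1, MAX_ATTEMPTS + 1):
    if run_job():
        print(f"Job succeeded on attempt {attempt}")
        break
    if attempt < MAX_ATTEMPTS:
        # Transient failures usually clear up, so wait X minutes and rerun.
        time.sleep(RETRY_DELAY_MINUTES * 60)
else:
    print("Job still failing after all retries")
```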

When I export a flow that contains a reference dataset, the name of the downloaded JSON file doesn't match the name of the exported flow; instead it matches the name of the reference dataset inside the flow. I would like this changed so that a flow always keeps its original name when exported from Trifacta.

Allow a connection to a geocoding system, like USPS or Google, that lets you join a demographics dataset and run it through to have longitude and latitude added to the output for mapping. I can see a lot of uses for this, especially in the marketing and advertising sector.
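To sketch what I mean, here is a minimal example that appends latitude and longitude to a record using the public Google Geocoding API. The API key and the address field are placeholders, and a native connection would ideally do this join inside the flow:

```python
import requests

GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"
API_KEY = "YOUR_API_KEY"  # placeholder

def geocode(address: str) -> tuple[float, float] | None:
    """Return (latitude, longitude) for an address, or None if not found."""
    resp = requests.get(GEOCODE_URL, params={"address": address, "key": API_KEY}, timeout=10)
    resp.raise_for_status()
    results = resp.json().get("results", [])
    if not results:
        return None
    loc = results[0]["geometry"]["location"]
    return loc["lat"], loc["lng"]

# Enrich a demographics record with coordinates for mapping.
row = {"household_id": 123, "address": "1600 Amphitheatre Parkway, Mountain View, CA"}
coords = geocode(row["address"])
if coords:
    row["latitude"], row["longitude"] = coords
print(row)
```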

Being able to Publish outputs directly to Google Sheets would be a major benefit for Sheets users.
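For context, this is what publishing to Sheets currently requires us to script outside the product; a minimal sketch using the Google Sheets API with a service account, where the spreadsheet ID, range, and credentials file are placeholders:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SPREADSHEET_ID = "your-spreadsheet-id"   # placeholder
CREDS_FILE = "service-account.json"      # placeholder

creds = service_account.Credentials.from_service_account_file(
    CREDS_FILE, scopes=["https://www.googleapis.com/auth/spreadsheets"]
)
sheets = build("sheets", "v4", credentials=creds)

# Rows as they would come out of a published output.
rows = [
    ["order_id", "region", "revenue"],
    ["1001", "EMEA", 2450.0],
    ["1002", "APAC", 1310.5],
]

# Overwrite the target range with the published output.
sheets.spreadsheets().values().update(
    spreadsheetId=SPREADSHEET_ID,
    range="Sheet1!A1",
    valueInputOption="RAW",
    body={"values": rows},
).execute()
```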

We need the ability to create folders underneath plans. We can create folders underneath flows, but not underneath plans. Additionally, we need the ability to create subfolders inside these parent flow and plan folders. It is hard to organize flows and plans without being able to put them into categories (folders) and subcategories (subfolders) once you approach hundreds of plans and flows.

I often receive datasets that have rows above the column headers that I don't need. When importing the dataset, there is a dropdown on the edit menu to "make the first row a column header". However, I would like this dropdown to include an option such as "make row 20 the column header and delete all preceding rows". This would allow me to import the data with column headers already in place. When dealing with one dataset, I can always choose any row to be the column header, but when you have to join 20 similar datasets, doing the same for each one is not practical. Not sure if my idea is clear (lol), but it seems like something that could easily be incorporated into the tool. Thanks!
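As a point of comparison, this is a one-line option in pandas; a minimal sketch of "make row 20 the column header and drop the rows above it" (the file name is a placeholder):

```python
import pandas as pd

# header=19 makes the 20th physical row the column header and discards
# the 19 rows above it (pandas row indices are 0-based).
df = pd.read_csv("incoming_extract.csv", header=19)
print(df.columns.tolist())
```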

When using a SQL statement with a WITH query expression, I get the following error: "No select statement found." I was told that WITH statements are not currently supported.

Why this should be changed:

  1. WITH statements are very important for structuring long and complex SQL scripts and for reducing heavily nested (unreadable) SQL.
  2. We have a lot of scripts that we want to migrate, but we are stuck because it would take too much time and effort to rewrite them; the same applies to moving the logic into Dataprep recipes. (A possible interim workaround is sketched below.)
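To make the workaround in point 2 concrete: until WITH is supported in custom SQL, the interim approach would be to wrap each script in a BigQuery view and import a plain SELECT from that view instead. A minimal sketch with the BigQuery Python client; the project, dataset, and table names are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Wrap the WITH (CTE) logic in a view so the custom-SQL import only has to
# run a plain SELECT against it. Project/dataset/table names are placeholders.
ddl = """
CREATE OR REPLACE VIEW `my_project.staging.orders_enriched` AS
WITH recent_orders AS (
    SELECT order_id, customer_id, amount
    FROM `my_project.raw.orders`
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
)
SELECT o.order_id, o.amount, c.segment
FROM recent_orders AS o
JOIN `my_project.raw.customers` AS c USING (customer_id)
"""

client.query(ddl).result()  # the view can then be imported with a plain SELECT
```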


Best regards

Marcel



Details about the syntax:

https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax

Related customer questions

  • https://community.trifacta.com/s/question/0D53j00006OatIdCAJ/is-it-possible-to-use-cte-common-table-expressions-when-importing-a-data-set-using-custom-sql-i-am-specifically-trying-to-use-with-statements


Why must I open all the recipes to reload each sample?

For example:

I build flows with many recipes (between 60 and 100, which is a real case for me).

On Monday, I make a lot of modifications to the data cleaning recipes at the start of the wrangling chain.

On Tuesday, when I try to open the other recipes, I get the warning message "your sample needs to be updated"!

=> If I had an "update all samples of the flow" button, I would run it on Monday before going to sleep, and on Tuesday I could work with a smile!


PS: Sorry for my bad English, I'm a French user :-)

When you select columns to apply a function or transformation, the available selection methods are:

  • Multiple
  • All
  • Range
  • Advanced

But it is not possible to select columns with a "RegEx match" on the column names.

It would be much easier! (Something along the lines of the selection sketched below.)
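To show what a "RegEx match" selection would look like, here is a small pandas sketch that applies a transformation only to columns whose names match a pattern; the column names are made up:

```python
import pandas as pd

df = pd.DataFrame({
    "sales_2022": [1, 2],
    "sales_2023": [3, 4],
    "region": ["EMEA", "APAC"],
})

# Apply a transformation only to the columns whose names match a pattern,
# here every column whose name starts with "sales_".
sales_cols = df.filter(regex=r"^sales_").columns
df[sales_cols] = df[sales_cols] * 1.1
print(df)
```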

Case 00027615: I created a case for our issue, but came to learn that the functionality is not present.

We had an OAuth login issue when trying to set up Snowflake, as we use Okta as our IdP for Snowflake.

We want our users to create their own Snowflake connections using their personal credentials through the IdP, which enforces their Snowflake role so they can only see the schemas they are allowed to see.

We cannot create a generic connection because it would provide more data access than users need and would involve PII, so we want to use their Snowflake functional roles to restrict access.

It's a really good use case for anyone using Snowflake with an IdP and RBAC set up in Snowflake.
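For reference, this is roughly what per-user IdP authentication looks like today with the Snowflake Python connector, where authenticator="externalbrowser" hands the login to Okta and the session inherits the user's own role; the account, user, warehouse, and role values are placeholders:

```python
import snowflake.connector

# Each user authenticates through the IdP (Okta) in their browser; the
# resulting session carries their own functional role, so RBAC applies.
conn = snowflake.connector.connect(
    account="myorg-myaccount",        # placeholder
    user="jane.doe@example.com",      # placeholder
    authenticator="externalbrowser",  # delegates login to the configured IdP
    warehouse="ANALYTICS_WH",         # placeholder
    role="ANALYST_ROLE",              # placeholder
)

cur = conn.cursor()
cur.execute("SELECT CURRENT_ROLE(), CURRENT_USER()")
print(cur.fetchone())
cur.close()
conn.close()
```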

Read data from Google Drive.

As of now, once a user deletes a flow, it is no longer visible to anyone, even though it is only soft deleted in the database. Please give admins an option to see all deleted flows and recover them if required, so that if someone deletes a flow by mistake, an admin can retrieve it. This should work via checkboxes so that multiple flows can be recovered at once, and it should also apply to folders so that all the flows in a folder can be recovered together.

It would be nice for Trifacta to be able to export files in CDM (Common Data Model) format to ADLS Gen2 so that they are fed automatically into Power BI for reporting purposes.

Please allow connections to be created from Trifacta to SharePoint online using SSO authentication, just like for Azure SQL/DWH.

Allow more than one job to be deleted at a time.

The ability to apply various interpolation methods (cspline, linear, etc.) between sorted columns of integers.
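For clarity, this is the kind of operation meant, sketched with NumPy/SciPy: filling every integer position between known points using linear and cubic-spline ("cspline") interpolation:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Sorted integer key column with known values, plus the points to fill in.
x_known = np.array([1, 3, 6, 10])
y_known = np.array([2.0, 4.5, 3.0, 8.0])
x_new = np.arange(1, 11)  # every integer position 1..10

linear = np.interp(x_new, x_known, y_known)    # linear interpolation
cubic = CubicSpline(x_known, y_known)(x_new)   # cubic spline ("cspline")

for x, lin, cub in zip(x_new, linear, cubic):
    print(f"x={x:2d}  linear={lin:6.3f}  cspline={cub:6.3f}")
```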

Allow functionality in the app for customizing the support page, so that users can contact our team when there is an issue with the application and the page shows our email address rather than the Trifacta support email address.