
Alteryx Analytics Cloud Product Ideas

Share your Alteryx Analytics Cloud product ideas, including Designer Cloud, Intelligence Suite and more - we're listening!

We would strongly like the ability to edit datasets created with custom SQL that have been shared with us. We think of Trifacta in part as a shared development space, so if one user needs to make an update to a dataset but wasn't originally the owner, this slows down our workflow considerably.

The ability to apply various interpolation methods (cubic spline, linear, etc.) over sorted columns of integers.
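For illustration, a minimal sketch of the kind of interpolation meant here, assuming a pandas/SciPy environment (the column names and values are hypothetical):

```python
import pandas as pd
from scipy.interpolate import CubicSpline

# Hypothetical sorted integer x-values with gaps, plus known y-values.
df = pd.DataFrame({"x": [1, 2, 4, 7], "y": [10, 20, 35, 50]})

# Cubic-spline interpolation at the missing x positions.
spline = CubicSpline(df["x"], df["y"])
print(spline([3, 5, 6]))

# Linear interpolation over the same gaps via pandas.
full = df.set_index("x")["y"].reindex(range(1, 8))
print(full.interpolate(method="index"))
```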

Use linked datasets created by GCP Analytics Hub as a data source in Dataprep. Detailed information in the link below:

Can I use linked dataset (created by Analytic Hub in GCP) to build flows in DataPrep? (trifacta.com)

Case 00027615 - we created a case for our issue but learned that the functionality is not present.

We had an OAuth login issue when trying to set up Snowflake, as we use Okta as our IdP for Snowflake.

We want our users to create their own Snowflake connections using their personal credentials through the IdP, which will enforce their roles in Snowflake so they can see only the schemas they are allowed to see.

We cannot create a generic connector because it would provide more data access than a user needs, and the data involves PII, so we want to utilize their Snowflake functional roles to restrict it.

It's a really good use case for anyone using Snowflake with an IdP and RBAC set up in Snowflake.
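For reference, the per-user SSO pattern being asked for can be sketched with the snowflake-connector-python library; the account and user values below are hypothetical:

```python
import snowflake.connector

# 'externalbrowser' delegates the login to the IdP configured for the
# account (Okta here), so Snowflake enforces the user's own roles.
conn = snowflake.connector.connect(
    account="myorg-myaccount",        # hypothetical account identifier
    user="jane.doe@example.com",      # hypothetical user
    authenticator="externalbrowser",
)

# Queries now run under the roles granted to this user, so only the
# schemas they are entitled to see are visible.
```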

Allow in-app functionality for customizing the support page, so users can contact our team when there is an issue with the application, and so the page shows our email address rather than the Trifacta support email address.

We need a custom viewer role so that a user is only able to use connections shared with them, but not re-share those connections with others. In our case, an admin will set up the connections for users and they will just use them. Users should not be able to create or share connections. This will improve connection security and access to data.

It would be nice for Trifacta to be able to export files in CDM (Common Data Model) format to ADLS Gen2 so that they are fed automatically into Power BI for reporting purposes.

Please allow connections to be created from Trifacta to SharePoint Online using SSO authentication, just as for Azure SQL/DWH.

Being able to publish outputs directly to Google Sheets would be a major benefit for Sheets users.

It would be great if you could expand the metadata selection beyond the current two elements (row number and file path) to also include a date timestamp (e.g. $datecreated) that can be used in recipes.

We need the ability to create folders underneath plans. We can create folders underneath flows, but not underneath plans. Additionally, we need the ability to create subfolders inside these parent flow and plan folders. It is hard to organize flows and plans without the ability to put them into categories (folders) and subcategories (subfolders) when you approach hundreds of plans and flows.

In order to monitor the status of a plan that runs several different flows (in my case around 300), I send an HTTP request to Datadog to display failed and successful results on a dashboard. The problem is that Datadog understands only epoch timestamps, not datetime values. Right now we cannot convert the timestamp to epoch. I was thinking of approaching this problem in the following ways:

1) Having a pre-request script (see the sketch after this list)

2) Creating dynamic parameters in Dataprep, instead of using a fixed value, that can be used further in the HTTP request body

3) A workaround: creating a table that stores the flow name and timestamp, and using this table in a plan every time we run a flow. This will work, but it is not the right way; it wastes time because we would end up creating a separate table like this for each flow.
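To show what a pre-request script would need to do, here is a minimal sketch of the datetime-to-epoch conversion, assuming Python is available; the payload field names are hypothetical:

```python
from datetime import datetime, timezone

def to_epoch(ts: str, fmt: str = "%Y-%m-%d %H:%M:%S") -> int:
    """Convert a datetime string (assumed UTC) to epoch seconds."""
    return int(datetime.strptime(ts, fmt).replace(tzinfo=timezone.utc).timestamp())

# Hypothetical HTTP request body carrying the epoch value Datadog expects.
payload = {
    "title": "flow_status",
    "text": "plan run finished",
    "date_happened": to_epoch("2024-01-15 04:30:00"),
}
print(payload)
```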

I'm looking for a way to discover which datasets, recipes, or outputs are taking up the most time and resources.

It would also be nice to view this over time.

An example would be something like the Unity3D profiler.

https://docs.unity3d.com/uploads/Main/profiler-window-layout.png

This is for a video game engine, but I hope the system could be similar.

In this profiler you can see which resource (RAM, CPU, GPU) is being used and by which character/object in your video game.

Similarly, it would be nice to see which database is being used by which flow in Trifacta.

The current syntax for the WORKDAY function is workday(date1, numDays, [array_holiday]), and array_holiday can't be a column in a table. For example, when there are unpredictable non-trading days (such as typhoon weather), we always need to go in and change the public holidays in the recipe. We would prefer if the holidays could come from a column in a table, so that we can just import and update the table when needed.
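As an illustration of the requested behavior, here is a sketch using NumPy's business-day utilities, where the holidays come from a column of an imported table rather than being hard-coded in the recipe (the table and column names are hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical holiday table, maintained separately and re-imported as needed.
holidays_df = pd.DataFrame({"holiday": ["2024-07-24", "2024-07-25"]})

# Equivalent of workday(date1, numDays, array_holiday) with the holidays
# sourced from the table's column.
result = np.busday_offset(
    "2024-07-22",
    3,
    roll="forward",
    holidays=holidays_df["holiday"].to_numpy(dtype="datetime64[D]"),
)
print(result)  # skips the two holidays and lands on 2024-07-29
```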

Allow for more than one job to be deleted at a time.

The current NIST/NSA standard is SHA-2.

As a data wrangler, I would like to be able to hash a column's data using the SHA-256 hashing algorithm.
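For illustration, a minimal sketch of the requested transform, assuming pandas is available (the column name and values are hypothetical):

```python
import hashlib
import pandas as pd

df = pd.DataFrame({"account_id": ["A-1001", "A-1002"]})

# Hash each value in the column with SHA-256, keeping the hex digest.
df["account_id_sha256"] = df["account_id"].map(
    lambda v: hashlib.sha256(v.encode("utf-8")).hexdigest()
)
print(df)
```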

I would like the ability to specify a billing project for BigQuery as part of run options. Currently, data queried from BigQuery is billed to the project from which a Dataprep flow is run, with no way to change it. Customers we work with in multi-project environments need the flexibility to align queries with specific projects for cost and usage attribution.

Additionally, for customers on flat-rate BigQuery pricing, a selectable billing project would allow users to move queries to projects under different reservations for workload balancing and/or performance tuning.
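For reference, this is the pattern the run options would need to expose, sketched with the google-cloud-bigquery client library (the project names are hypothetical): the client's project is billed for the query even when the data lives in another project.

```python
from google.cloud import bigquery

# The client's project pays for the query (billing/attribution project).
client = bigquery.Client(project="analytics-billing-prj")

# The data itself lives in a different project, referenced fully qualified.
sql = "SELECT COUNT(*) AS n FROM `data-lake-prj.sales.orders`"

for row in client.query(sql).result():
    print(row["n"])
```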

We can migrate flows from one environment to another using the Trifacta APIs (see the sketch after this list):

Export and import the flow from source to target.

Rename the flow.

Share the flow with the appropriate users according to the environment.

Change the inputs and outputs of the flow.
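A minimal sketch of that sequence, assuming Python with requests; the endpoint paths, payloads, and response fields below are assumptions modeled on a typical export/import REST pattern, not confirmed Trifacta routes:

```python
import requests

SRC = "https://source.example.com"   # hypothetical source environment
DST = "https://target.example.com"   # hypothetical target environment
HEADERS = {"Authorization": "Bearer <token>"}

# 1) Export the flow package from the source environment.
pkg = requests.get(f"{SRC}/v4/flows/123/package", headers=HEADERS).content

# 2) Import the package into the target environment.
resp = requests.post(
    f"{DST}/v4/flows/package",
    headers=HEADERS,
    files={"data": ("flow.zip", pkg)},
)
flow_id = resp.json()["flowId"]      # hypothetical response field

# 3) Rename the flow in the target environment.
requests.patch(f"{DST}/v4/flows/{flow_id}", headers=HEADERS,
               json={"name": "sales_flow_prod"})

# Sharing the flow and rewiring its inputs/outputs would follow the same
# request pattern against the relevant endpoints.
```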

Create a connector to Mavenlink.

We at Grupo Boticário, who currently have 13k Dataprep licenses and are close to the official internal launch, have noticed a recurring request for a translation of the tool. Bearing in mind that this would enable more users to adopt it in their day-to-day work, we would like to formalize and reinforce the importance of our request for translation into Brazilian Portuguese, as well as ask for a forecast of this improvement.