
Alteryx Analytics Cloud Product Ideas

Share your Alteryx Analytics Cloud product ideas, including Designer Cloud, Intelligence Suite and more - we're listening!

In a large enterprise, one of the perennial challenges is managing changes to data over time. The industry is moving towards a shift-left philosophy to handle this, starting to think in terms of data contracts and contract testing.

 

What does this mean for Alteryx?

Well - if you look at the strategy that dbt is following, each workflow has a defined entry data contract and a defined exit data contract, so that if anything changes you can immediately tell whether you are going to break someone else's work. This also sets you up for lineage in an important way: if you build in field- and dataset-level lineage from day zero, then you can look across an enterprise and answer questions like:

- If I remove this field, who will I impact downstream (e.g. other Alteryx users, Tableau dashboards, etc.)?

- Who is using my data, so that I can talk to them about their data needs? Who do I need to tell if my workflow fails?

- Has someone upstream of me changed their flow in a way that breaks me?

- Where does this field come from, looking across a series of flows and transformations? (Critical for regulatory requirements.)

 

This kind of thinking is hard to add in afterwards, so it would be good to build it into the product in these early days so that it becomes a key foundational piece.
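To make this concrete, here's a minimal sketch of what an entry-contract check could look like, assuming a pandas DataFrame as the dataset (the field names and dtypes are illustrative, not anything Alteryx ships today):

```python
# A minimal sketch of an entry data contract check; column names and
# dtypes here are illustrative assumptions, not an Alteryx API.
import pandas as pd

# The "contract": expected columns and dtypes at the workflow boundary.
ENTRY_CONTRACT = {
    "customer_id": "int64",
    "order_date": "datetime64[ns]",
    "amount": "float64",
}

def check_contract(df: pd.DataFrame, contract: dict) -> list[str]:
    """Return a list of contract violations (empty list == contract honoured)."""
    violations = []
    for col, dtype in contract.items():
        if col not in df.columns:
            violations.append(f"missing field: {col}")
        elif str(df[col].dtype) != dtype:
            violations.append(f"{col}: expected {dtype}, got {df[col].dtype}")
    return violations

df = pd.DataFrame({
    "customer_id": [1, 2],
    "order_date": pd.to_datetime(["2024-01-01", "2024-01-02"]),
    "amount": [9.99, 20.00],
})
assert check_contract(df, ENTRY_CONTRACT) == []  # contract honoured
```

Run the same check against a workflow's output and you have the exit contract too; a broken downstream consumer shows up as a non-empty violations list instead of a surprise in production.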

 

I often receive data sets that have rows above the column headers that I don't need. When importing the data set, there is a dropdown on the edit menu to "make the first row a column header". However, I would like this dropdown to include an option to, for example, "make row 20 the column header and delete all preceding rows". This would allow me to import the data with column headers already in place. When dealing with one dataset, I can always choose any row to make it the column headers, but when you have to join 20 similar datasets, it is not possible to do the same for each. Not sure if my idea is clear (lol), but it seems like something that could be easily incorporated into the tool. Thanks!
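For comparison, pandas handles this exact pattern with a single import parameter; a minimal sketch (the file name is made up):

```python
# header=19 makes row 20 (0-indexed 19) the column header and
# discards everything above it; file name is illustrative.
import pandas as pd

df = pd.read_csv("report_with_preamble.csv", header=19)
```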

We would strongly like the ability to edit datasets created with custom SQL that have been shared with us. We think of Trifacta in part as a shared development space, so if one user needs to make an update to a dataset but wasn't originally the owner, this slows down our workflow considerably.

The ability to apply various interpolation methods (cspline, linear, etc.) between sorted columns of integers.
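For illustration, SciPy already exposes both of the named methods; a minimal sketch with made-up sample values:

```python
# A sketch of the requested interpolation methods using SciPy;
# the sample values are made up.
import numpy as np
from scipy.interpolate import CubicSpline, interp1d

x = np.array([0, 1, 2, 4, 8])          # sorted integer column
y = np.array([0.0, 2.0, 3.5, 5.0, 6.0])

linear = interp1d(x, y)                # linear interpolation
cspline = CubicSpline(x, y)            # cubic spline interpolation

print(linear(3), cspline(3))           # interpolated values at x = 3
```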

Use linked datasets created by GCP Analytics Hub as a data source in Dataprep. Detailed information in the link below:

Can I use linked dataset (created by Analytic Hub in GCP) to build flows in DataPrep? (trifacta.com)

Case 00027615 - we created a case for our issue but learned that this functionality is not present.

We had an OAuth login issue when trying to set up Snowflake, as we use Okta as our IdP for Snowflake.

We want our users to create their own Snowflake connector using their personal credentials through the IdP, which will enforce their role in Snowflake so they can see only the schemas they are allowed to see.

We cannot create a generic connector because it would provide more data access than a user needs, and the data involves PII, so we want to use their Snowflake functional roles to restrict it.

It's a really good use case for anyone using Snowflake with an IdP and RBAC set up in Snowflake.
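To illustrate the pattern we're after, here's a sketch using the Snowflake Python connector as an analogy: authenticator="externalbrowser" delegates login to the configured IdP, so the session inherits the user's own role and sees only the schemas that role allows (account and user values are placeholders):

```python
# Per-user SSO pattern: authentication is pushed through the IdP (Okta),
# so RBAC is enforced by the user's own Snowflake role, not a shared
# service account. Account/user values are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",             # placeholder
    user="jane.doe@example.com",      # the individual user, not a service account
    authenticator="externalbrowser",  # delegates auth to the configured IdP
)
```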

Allow functionality in the app for customizing the support page, so users can contact our team when there is an issue with the application. The page should show our email address, not the Trifacta support email address.

We need a custom viewer role so that users are only able to use connections shared with them, not re-share those connections with others. In our case, an admin will set up the connections for users and they will just use them. Users should not be able to create or share connections. This would improve connection security and access to data.

It would be nice for Trifacta to be able to export files in CDM (Common Data Model) format to ADLS Gen2 so that they can be fed automatically into Power BI for reporting purposes.
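For reference, a CDM folder is essentially the data files plus a model.json describing the entities. A loose sketch in Python follows; the structure is recalled from Microsoft's CDM folder documentation, so treat the field names as assumptions to verify against the spec:

```python
# A loose sketch of the model.json metadata a CDM folder on ADLS Gen2
# carries alongside its CSV partitions; field names recalled from
# Microsoft's CDM folder docs and should be verified.
import json

model = {
    "name": "SalesExport",
    "version": "1.0",
    "entities": [{
        "$type": "LocalEntity",
        "name": "Orders",
        "attributes": [
            {"name": "order_id", "dataType": "int64"},
            {"name": "order_date", "dataType": "dateTime"},
        ],
        "partitions": [{"name": "Orders-1", "location": "Orders/part-0001.csv"}],
    }],
}

with open("model.json", "w") as f:
    json.dump(model, f, indent=2)
```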

Please allow connections to be created from Trifacta to SharePoint Online using SSO authentication, just like for Azure SQL/DWH.

It would be great if you could expand the metadata selection beyond the current 2 elements (row number and file path) and potentially add a date timestamp (e.g. $datecreated) to be used in recipes.
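As an analogy for what $datecreated could expose, the timestamp is already available at the filesystem level; a minimal sketch (the path is illustrative):

```python
# The file timestamp the proposed $datecreated metadata could surface;
# st_mtime is modification time (true creation time is platform-dependent).
from pathlib import Path
from datetime import datetime, timezone

p = Path("input/sales_2024.csv")   # illustrative path
stamp = datetime.fromtimestamp(p.stat().st_mtime, tz=timezone.utc)
print(stamp.isoformat())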

We need the ability to create folders underneath plans. We can create folders underneath flows, but not underneath plans. Additionally, the ability to create subfolders inside these parent flow and plan folders is needed. It is hard to organize flows and plans without the ability to put them in categories (folders) and subcategories (subfolders) once you approach hundreds of plans and flows.

In order to monitor the status of a plan that runs several different flows inside it (in my case around 300), I send an HTTP request to Datadog to display the failed and successful results on a dashboard. The problem is that Datadog understands only epoch timestamps, not datetime values. Right now we cannot convert the timestamp into epoch. I was thinking of approaching this problem in the following ways (a conversion sketch follows the list):

1) Having a pre-request script

2) Creating dynamic parameters in Dataprep instead of using a fixed value, which could then be used in the HTTP request body

3) This is just a workaround - creating a table that stores the flow name and timestamp, and using that table in a plan every time we run a flow. It would work, but it is not the right way; it wastes effort because we would end up creating a separate table like this for each flow.
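The conversion itself is tiny once a datetime value is in hand; a minimal sketch of what we need, assuming a UTC timestamp string (the value is illustrative):

```python
# Convert a job's datetime value to the epoch seconds Datadog expects;
# the timestamp string is illustrative and assumed to be UTC.
from datetime import datetime, timezone

finished_at = "2024-03-15 10:42:07"
dt = datetime.strptime(finished_at, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
epoch = int(dt.timestamp())   # 1710499327, ready for the Datadog payload
print(epoch)
```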

The current syntax for the WORKDAY function is workday(date1, numDays, [array_holiday]), and array_holiday can't be a column in a table. When there are unpredictable non-trading days, such as typhoon weather, we always need to go and change the public holidays in the recipe. We would prefer that the holidays could come from a column in a table, so we could just import and update the table when needed.
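For illustration, NumPy's business-day functions already work this way: the holidays are just an array that could be loaded from an imported table. A sketch with made-up dates:

```python
# WORKDAY-style arithmetic with holidays supplied as data rather than
# hard-coded in the recipe; dates are made up.
import numpy as np

holidays = np.array(["2024-07-24"], dtype="datetime64[D]")  # imported holiday column
start = np.datetime64("2024-07-22")                          # a Monday

# 3 working days after start, skipping weekends and the imported holidays
print(np.busday_offset(start, 3, roll="forward", holidays=holidays))
# -> 2024-07-26 (the holiday on the 24th is skipped)
```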

Allow more than one job to be deleted at a time.

The current NIST/NSA standard is SHA-2.

As a data wrangler, I would like to be able to hash a column's data using the SHA-256 hashing algorithm.
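A minimal sketch of the requested transform, using Python's standard hashlib (the sample values are made up):

```python
# SHA-256 over each value in a column; hashlib is the standard-library
# implementation of the SHA-2 family.
import hashlib

column = ["alice@example.com", "bob@example.com"]   # sample values
hashed = [hashlib.sha256(v.encode("utf-8")).hexdigest() for v in column]
print(hashed[0])   # a 64-character hex digest per value
```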

Create a function in the Formula tool to pull Customer Managed Telemetry data such as User ID, Job ID, and start and end runtime while the workflow is running, instead of navigating to the Jobs tab separately and having to match the Job ID.

I'm happy to see there is now a DateTimeNowPrecise function, but you still have to convert its output to a datetime with a DateTime tool, then take the min and max of that field to determine runtime.

I started migrating some processes from Desktop to Cloud, but I miss the customizable email sending functionality in Cloud. Is it possible?

 

I would like to send a customized email after a successful execution, but I couldn't find an option to customize the email or attach a file.

 

If it is not possible, are there plans to implement this soon?

Hi all,

I'm working through cloud quest 1 using the Alteryx Cloud Native experience - and it seems that there's an opportunity here to get rid of the need to know historical ways of working.

 

The example in this case is that there are 2 dates:

- 16-Jun-01

- 25-Dec-01

 

Now it seems that most people used either a RegEx or a DateTimeParse (which still uses the same specifiers as Alteryx 11 - https://help.alteryx.com/current/en/designer/functions/datetime-functions.html#idp376621). Additionally, there doesn't seem to be a DateTimeParse tool in the Cloud Native version of the platform yet.

 

However - given that we're living in the future now - it seems that it should be trivial to use a little AI to recognize that '16-Jun-01' is a date, and give the user a right-click option to convert it to a date.

 

Please consider providing a simpler method of cleaning up dates than forcing the user to remember the classic Alteryx date specifiers, since this is SUCH a common need.
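To show how little the user should have to know, here's a sketch using python-dateutil (an assumption for illustration, not something the product ships) - the format is inferred with no specifiers at all:

```python
# Format-free date recognition: dateutil infers the layout of each string.
from dateutil import parser

for raw in ["16-Jun-01", "25-Dec-01"]:
    print(parser.parse(raw).date())
# -> 2001-06-16 and 2001-12-25, with no format specifiers needed
```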

Using Challenge 438 as an example, the Cross Tab in Designer Cloud needs to pick up new column names dynamically to ensure parity with Desktop. Unless I'm missing something, you would need to click on the Cross Tab and manually reset it/add new fields any time the data changes, whereas Desktop would automatically pick up new fields.
