We need the ability to create folders underneath plans. We can create folders underneath flows, but not underneath plans. Additionally, we need the ability to create subfolders inside these parent flow and plan folders. It is hard to organize flows and plans without being able to put them into categories (folders) and subcategories (subfolders) once you approach hundreds of plans and flows.
I often receive data sets that have rows above the column headers that I don't need. When importing the data set, there is a dropdown on the edit menu to "make the first row a column header". I would like this dropdown to also include an option such as "make row 20 the column header and delete all preceding rows", so the data is imported with the column headers already in place. When dealing with one dataset, I can always choose any row to make it the column header, but when I have to join 20 similar datasets, that is not possible. Not sure if my idea is clear (lol), but it seems like something that could be easily incorporated into the tool. Thanks!
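A minimal sketch of the requested behaviour, using pandas rather than Dataprep (the file pattern and row number are just examples): header=19 treats the 20th physical row as the column header and discards everything above it, and the same setting can be applied to each of the 20 similar files before unioning them.

import glob
import pandas as pd

# header=19 promotes the 20th row of each file to column headers and drops the rows above it.
frames = [pd.read_csv(path, header=19) for path in glob.glob("extracts/*.csv")]
combined = pd.concat(frames, ignore_index=True)  # union of the 20 similar datasets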
Case 00027615: we created a case for our issue but learned that the functionality is not present.
We had an OAuth login issue when trying to set up a connection with Snowflake, since we use Okta as our IdP for Snowflake.
We want our users to create their own Snowflake connectors using their personal credentials through the IdP, which will enforce their role in Snowflake so they can see only the schemas they are allowed to see.
We cannot create a generic connector because it would provide more data access than users need, and PII is involved, so we want to utilize their Snowflake functional roles to restrict access.
It is a really good use case for anyone using Snowflake with an IdP and RBAC set up in Snowflake.
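For reference, a sketch of the per-user pattern we have in mind, using the Snowflake Python connector (account, warehouse, and role names are placeholders): the externalbrowser authenticator delegates login to the configured IdP (Okta), so the session carries only that user's Snowflake role.

import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",             # placeholder account locator
    user="jane.doe@example.com",      # the individual user's identity
    authenticator="externalbrowser",  # SSO through the IdP (Okta) in the browser
    warehouse="ANALYTICS_WH",
    role="FINANCE_READONLY",          # functional role restricts which schemas are visible
)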
It would be nice for Trifacta to be able to export files in CDM (Common Data Model) format to ADLS Gen2 so that they are fed automatically into Power BI for reporting purposes.
Please allow connections to be created from Trifacta to SharePoint Online using SSO authentication, just like for Azure SQL/DWH.
Allow more than one job to be deleted at a time.
The ability to apply various interpolation methods (cspline, linear, etc.) between sorted columns of integers.
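A small illustration of what this could look like outside the tool (NumPy/SciPy; the sample values are made up): linear and cubic-spline interpolation between a sorted integer key column and a value column.

import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([1, 2, 4, 7, 10])    # sorted integer column
y = np.array([5, 9, 20, 38, 60])  # values to interpolate between

linear_at_3 = np.interp(3, x, y)       # linear interpolation at x = 3
cspline_at_3 = CubicSpline(x, y)(3.0)  # cubic-spline interpolation at x = 3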
Allow functionality in the app for customizing the support page, so users can contact our team when there is an issue with the application. The page should show our email address, not the Trifacta support email address.
We need a custom viewer role so that a user can only use connections shared with them, but cannot re-share those connections with others. In our case, an admin will set up the connections for users and they will just use them. Users should not be able to create or share connections. This would improve connection security and access to data.
It would be great if you could expand the metadata selection so it is not limited to 2 elements (row number and file path) but could also include a date timestamp (e.g. $datecreated) to be used in recipes.
As of now, once a user deletes a flow, the flow is not visible to anyone; it is only soft-deleted in the database. Could you enable an option for admins to see all deleted flows and recover them if required, so that if someone deletes a flow by mistake, an admin can retrieve it with a recover option? This should be a checkbox option so that, when a folder is selected, all of the flows in it can be recovered at once. The same option could also apply to folder recovery, where all the flows in the folder are recovered together.
The current syntax for the WORKDAY function is workday(date1, numDays, [array_holiday]), and array_holiday can't be a column in a table. For example, when there are unpredictable non-trading days such as typhoon weather, we always need to go in and change the public holidays in the recipe. We would prefer that the holidays could come from a column in a table, which we can simply import and update when needed.
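A sketch of the desired behaviour using NumPy's business-day functions (the table and column names are assumptions): the holiday list is read from an imported table rather than hard-coded in the recipe.

import numpy as np
import pandas as pd

# Holidays come from a column in an imported table, not from the recipe text.
non_trading = pd.read_csv("non_trading_days.csv")  # assumed table of non-trading dates
holidays = pd.to_datetime(non_trading["holiday_date"]).values.astype("datetime64[D]")

# Equivalent of WORKDAY(date1, numDays, array_holiday) with the holidays supplied as data.
result = np.busday_offset(np.datetime64("2023-09-01"), 5, roll="forward", holidays=holidays)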
Current NIST/NSA standard is SHA-2.
As a data wrangler, I would like to be able to hash a column's data using the SHA-256 hashing algorithm.
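A minimal sketch of the requested transform in Python (the column name and values are examples), hashing each value of a column with SHA-256:

import hashlib
import pandas as pd

df = pd.DataFrame({"ssn": ["123-45-6789", "987-65-4321"]})
df["ssn_hash"] = df["ssn"].apply(lambda v: hashlib.sha256(v.encode("utf-8")).hexdigest())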
Currently there is support for parameterizing variables in custom SQL datasets in Dataprep. However, it requires that the tables using this feature have the same table structure. This request is to allow the same functionality with tables that have different structures.
Example:
Table A (dev.animals.dogs): name | height | weight
Table B (dev.animals.cats): name | isFriendly
We would like to have a single custom SQL dataset whose query is simply:
SELECT * FROM dev.animals.[typeOfAnimal]
where typeOfAnimal is the parameterized variable, with a default of dogs.
Hi team,
We need a page where a user can manage all the email notifications they receive from all of their flows (success and failure).
Thank you
Create a connector to Mavenlink.
If a flow is shared between multiple editors and someone makes changes to it, there should be a way to see all the changes made to that flow by different users, for example a trigger that notifies users about changes as soon as the recipe changes, or a way to extract that information about the flow or the job. I have attached a snippet of the data that would be useful to us.
We at Grupo Boticário currently have 13k Dataprep licenses and are close to the official internal launch, and we have noticed a recurring request for a translation of the tool. Bearing in mind that it would enable more users to adopt it in their day-to-day work, I would like to formalize and reinforce the importance of our request for a Brazilian Portuguese translation, as well as a forecast for this improvement.
We want to migrate flows from one environment to another using the Trifacta APIs (a rough sketch of the export/import step follows below):
Export and import the flow from source to target.
Rename the flow.
Share the flow with the appropriate users according to the environment.
Change the inputs and outputs of the flow.
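A rough sketch of the export/import step, assuming flow-package endpoints in the Trifacta REST API; the base URLs, endpoint paths, and tokens below are placeholders and should be verified against your platform's API documentation.

import requests

SRC = "https://trifacta-dev.example.com"   # placeholder source environment
DST = "https://trifacta-prod.example.com"  # placeholder target environment
SRC_AUTH = {"Authorization": "Bearer <dev-token>"}
DST_AUTH = {"Authorization": "Bearer <prod-token>"}
FLOW_ID = 123                              # example flow id to migrate

# Export the flow as a package from the source environment (endpoint path is an assumption).
pkg = requests.get(f"{SRC}/v4/flows/{FLOW_ID}/package", headers=SRC_AUTH)
pkg.raise_for_status()

# Import the package into the target environment, then rename/share/rewire it via further API calls.
resp = requests.post(f"{DST}/v4/flows/package", headers=DST_AUTH,
                     files={"data": ("flow.zip", pkg.content, "application/zip")})
resp.raise_for_status()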
Currently, if data needs to be brought in from tables residing in different databases of the same cluster, we have to create n connections for n databases. Since the databases are in the same cluster, one should be able to access all of them with a single connection; otherwise the connection list gets long and messy.
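An illustration of the pattern we are after (pyodbc against a SQL Server-style cluster is only an assumption; server, database, and table names are made up): a single connection to the cluster, with individual databases addressed by fully qualified names.

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=cluster.example.com;UID=svc_user;PWD=secret;"
)
# One connection, two databases on the same cluster.
rows = conn.cursor().execute(
    "SELECT o.order_id, c.segment "
    "FROM sales_db.dbo.orders AS o "
    "JOIN crm_db.dbo.customers AS c ON c.customer_id = o.customer_id"
).fetchall()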