Hello,
After using the new "Image Recognition Tool" for a few days, I think you could improve it:
> by adding the dimensional constraints in front of each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that each label gets an equivalent number of images),
> lastly, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it requires RGB images).
Question: will you allow the user to choose between CPU and GPU usage in the future?
In any case, thank you again for this new tool; it is certainly perfectible, but very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again
Kévin VANCAPPEL (France ;-))
In the tools that embed the "Rename" option (Select, Append Fields, Join, Join Multiple), copying the new name copies all of the field's configuration information: tick/untick, original field name, type, size, new name, and description.
Renaming the field "Rename_Field"
In my opinion, it should copy only the new name. This would be especially useful because when you rename a field, the change isn't automatically propagated to subsequent tools, so copying the new name and pasting it into those tools is faster than retyping it every time.
Hello all,
We all know that != is the Alteryx operator for inequality. However, I suggest implementing <> as an additional inequality operator. Why?
<> is a very common operator in most languages/tools, such as SQL, Qlik, or Tableau. It is far more intuitive than != and would help interoperability and the copy/paste of expressions between tools, or between in-database and in-memory mode.
Best regards,
Simon
The Join tool tells you which records did not match (Left and Right outputs) but not which fields they failed to match on. Knowing this would quickly help the analyst determine which fields to look into when investigating unmatched records. When joining on 5+ fields, it becomes difficult and time-consuming to determine why some records did not match without manually inspecting each record. The column title could be "Unmatched Field(s)", with the values concatenated and separated by commas.
I would like a way to disable all containers within a workflow with a single click. It could be a simple disable/enable-all button, or a series of check boxes, one per container, where you can disable/enable all or a chosen selection.
In large workflows, with many containers, if you want to run a single container while testing it can take a while to scroll up and down the workflow disabling each container in turn.
I am working with complex workflows which use multiple files as input, located on network drives. Input tools are Input Data, Directory, Wildcard Input, Wildcard XLSX Input (from CReW macros).
Regularly, I experience a very slow Designer when working on these workflows, and slow progress when running the tools mentioned above, especially when working from home. Switching off Auto Configure did not really help, because the column list sometimes does not converge even after pressing F5 multiple times, and when actively working on workflows I have to press F5 all the time.
In order to speed up both editing and running the workflows, I would like to propose a "Cache all File Inputs" function that loads and caches all file inputs at once. To achieve this state today, I have to use Cache and Run Workflow once for every file input.
The idea is to have a Run option where the workflow runs everything up to the selected tool (like the Cache functionality does).
You select the tool, hit "Run Up", and the workflow executes everything "before" the selected tool.
That would make development much easier, especially when dealing with big workflows and constantly changing data.
Hello,
Often, when you have a dataset, you want to know whether there is a group of fields that work together. That can help you normalize (like de-joining) your data model for dataviz, fix performance issues, or simplify your analysis.
Example:
order_id | item_id | label | model_id | length | color | amount
1 | 1 | A | 10 | 15 | Blue | 101
2 | 1 | A | 10 | 15 | Blue | 101
3 | 2 | B | 10 | 15 | Blue | 101
4 | 2 | B | 10 | 15 | Blue | 101
5 | 2 | B | 10 | 15 | Blue | 101
6 | 3 | C | 20 | 25 | Red | 101
7 | 3 | C | 20 | 25 | Red | 101
8 | 3 | C | 20 | 25 | Red | 101
9 | 4 | D | 20 | 25 | Red | 101
10 | 4 | D | 20 | 25 | Red | 101
11 | 4 | D | 20 | 25 | Red | 101
Here, we could split the table into three:
- order
order_id | item_id | model_id | amount
1 | 1 | 10 | 101.2
2 | 1 | 10 | 103
3 | 2 | 10 | 104.8
4 | 2 | 10 | 106.6
5 | 2 | 10 | 108.4
6 | 3 | 20 | 110.2
7 | 3 | 20 | 112
8 | 3 | 20 | 113.8
9 | 4 | 20 | 115.6
10 | 4 | 20 | 117.4
11 | 4 | 20 | 119.2
- model
model_id | length | color
10 | 15 | Blue
20 | 25 | Red
- item
item_id | label
1 | A
2 | B
3 | C
4 | D
The tool would take:
- entry: a dataframe
- configuration: the ability to select fields
- output: a table with a recap of the groups
field group | field | remaining field
1 | item_id | False
1 | label | False
2 | model_id | False
2 | color | False
3 | order_id | True
3 | link to group 1 | True
3 | link to group 2 | True
3 | amount | True
Very important: the non-selected fields (here, amount) also appear in the result, but all in the "remaining" group.
Algo steps:
1/ Pre-groups: take the distinct count of each field. Goal: optimize the algorithm by avoiding the computation of all pairs.
Fields whose distinct count equals the number of rows are automatically excluded and sent to the remaining group.
Fields that have the same distinct count are placed in the same pre-group.
2/ For each pre-group, for each pair of fields, take the distinct values of the pair, like here:
item_id | label
1 | A
2 | B
3 | C
4 | D
If, in this table, the distinct count of each field is equal to the number of rows, it's a "pair-group".
Here, for the model, you will have:
- model_id, length
- model_id, color
- length, color
3/ Since a field can only belong to one group, the pairs are merged: model_id, length, and color form the first group, then item_id and label form the second.
If a field does not belong to any group, it goes to the "remaining" group at the end.
In the remaining group, you can add a link to each of the other groups, since you don't know which field is the key.
field group | field | remaining field
1 | item_id | False
1 | label | False
2 | model_id | False
2 | length | False
2 | color | False
3 | order_id | True
3 | link to group 1 | True
3 | link to group 2 | True
3 | amount | True
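To make the steps concrete, here is a minimal Python sketch of the grouping algorithm, assuming pandas; the function and variable names (find_field_groups, df, selected) are illustrative, not an existing Alteryx API:

```python
import pandas as pd
from itertools import combinations

def find_field_groups(df: pd.DataFrame, selected: list[str]) -> dict:
    """Sketch of steps 1-3: pre-group by distinct count, pair-test, merge."""
    n_rows = len(df)
    # Step 1: fields whose distinct count equals the row count go straight
    # to the remaining group; the others are pre-grouped by distinct count.
    counts = {f: df[f].nunique() for f in selected}
    remaining = [f for f in selected if counts[f] == n_rows]
    pre_groups: dict[int, list[str]] = {}
    for f in selected:
        if counts[f] < n_rows:
            pre_groups.setdefault(counts[f], []).append(f)

    groups = []
    for fields in pre_groups.values():
        # Step 2: a pair is a "pair-group" when the distinct pairs are as
        # numerous as the distinct values of each field taken alone.
        member_of = {f: {f} for f in fields}
        for a, b in combinations(fields, 2):
            if len(df[[a, b]].drop_duplicates()) == counts[a] == counts[b]:
                # Step 3: merge pairs that share a field into one group.
                union = member_of[a] | member_of[b]
                for f in union:
                    member_of[f] = union
        for s in {frozenset(s) for s in member_of.values()}:
            if len(s) > 1:
                groups.append(sorted(s))
            else:
                remaining.extend(s)  # no partner found: remaining group
    return {"groups": groups, "remaining": remaining}
```

On the example table above, this yields the groups (model_id, length, color) and (item_id, label), with order_id in the remaining group (the non-selected amount would join it per the rule above).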
Best regards,
Simon
PS: I have in mind an evolution with links between non-remaining tables (e.g., here, the model could optionally be linked to the item).
Hello all,
As of today, you can only (officially) connect to PostgreSQL through ODBC with the Simba driver.
help page :
https://help.alteryx.com/current/en/designer/data-sources/postgresql.html#postgresql
You have to download the driver from your license page
However, there is a perfectly fine official ODBC driver for PostgreSQL here: https://www.postgresql.org/ftp/odbc/releases/
I would like Alteryx to support it, for several obvious reasons:
1/ I don't want several drivers for the same database.
2/ The Simba driver does not support the latest releases of PostgreSQL.
3/ The Simba driver is somewhat less robust than the official driver.
4/ Well... it's the official driver, and the current situation leads to unnecessary friction between Alteryx admins/users and PostgreSQL DB admins.
Best regards,
Simon
When I build a workflow, the font size in the Results window is no problem.
But when we show the contents of the Results window in a presentation or online meeting, the font size is too small.
I would like a function to enlarge the font size. The important point is that the current font size is fine while building the workflow; the larger font size is only needed when showing the results to other people in a presentation or online meeting.
One more point: it would be helpful to be able to change the font size with Ctrl + mouse wheel.
Push the zoom button:
Hello all,
As of today, you can populate the Drop Down tool (in the Interface category) with a query launched from an in-memory connection. I would really appreciate the ability to use an in-db connection instead.
Why?
Otherwise it means managing two connections instead of one, finding ways to manage both of them on Server, etc. Simplicity is key.
Best regards,
Simon
Hello all,
When using in-database tools, all you have in the Select or Formula tools are the Alteryx field types (V_String, etc.).
However, since you're ultimately writing to the database, there is a conversion from Alteryx field types to real SQL field types (like varchar). But how is it done? As of today, it's a total black box. Some documentation would be appreciated.
Best regards,
Simon
Hi all,
When preparing formatted reports for my stakeholders, they want them sent straight to SharePoint, which can be achieved via OneDrive shortcuts on a laptop. However, when submitting the workflow for full automation, the server's C drive is not set up with the appropriate shortcuts, and our admin team does not allow it.
So my request is to upgrade the SharePoint output tool so that it can push formatted files to SharePoint.
Thank you!
My organization uses the SharePoint Files Input and SharePoint Files Output tools (v2.1.0) and connects with the Client ID, Client Secret, and Tenant ID. After a workflow is saved and scheduled on the server, every 90 days users receive the error "Failed to connect to SharePoint AADSTS700082: The refresh token has expired due to inactivity". My organization is not able to extend the 90-day limit or create non-expiring tokens.
It would be great if the SharePoint connectors could automatically refresh the token when it expires, so users don't have to open the workflow and do it manually.
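For illustration only, here is a hedged Python sketch of what an automatic refresh could look like with the client-credentials flow these connectors already use (Microsoft's msal library; the placeholder values and the Graph scope are assumptions, not the connector's actual internals):

```python
import msal

# Placeholder values: a real connector would read these from its saved
# configuration, not from constants.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"

app = msal.ConfidentialClientApplication(
    client_id=CLIENT_ID,
    client_credential=CLIENT_SECRET,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
)

# With the client secret at hand, a fresh access token can be requested
# at run time, instead of relying on a long-lived refresh token that
# expires after 90 days of inactivity (AADSTS700082).
result = app.acquire_token_for_client(
    scopes=["https://graph.microsoft.com/.default"],
)
access_token = result.get("access_token")
```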
Hello all,
As of today, Alteryx offers the Intelligence Suite, with amazing tools rarely seen in a data tool: OCR, image analysis, etc. https://www.alteryx.com/fr/products/intelligence-suite
But... these wonderful tools are part of a paid add-on, and that is what is problematic:
- Alteryx is already an expensive tool. Huge value, but honestly expensive.
- The tools in the Intelligence Suite are not common in data tools because you won't use them often, and paying for tools you use once or twice a month is not easy to justify.
So I suggest incorporating the Intelligence Suite into the core product. The benefit to Alteryx users is evident, so let's look at the benefits to Alteryx:
- more user satisfaction
- a simpler catalog
- a lot of added value for Designer, with the ability to communicate widely on the topic
- almost no cost: most customers won't buy the Intelligence Suite anyway.
Best regards,
Simon
Hey all,
I don't know about you, but I have always had trouble hovering the mouse over the Results window pane trying to get the resize icon to appear. It seems like you need surgeon-level precision to find the icon! 😷
I love Designer and want to see it be the best it can possibly be. I feel like increasing the clickable/hovering area for this resize would be amazingly helpful!
Just wanted to see if we could get some community momentum going in order to get some developer eyes on this issue. 🙂
Please help by bumping/upvoting this thread!
-K
Migrated this from another thread. Some folks tagged from the original post :)
@cpatrickwk @caltang @afellows @MRod @alexnajm @ericsmalley @MilindG @Prometheus @innovate20
When loading multiple sheets from an Excel file with either the Input Data tool or the Dynamic Input tool, I usually want a field identifying which sheet the data came from. Currently I have to import the Full Path and then remove everything except the sheet name.
It would be great if there was an option to output the sheet name as a field.
Hello,
Unless you're lucky, your input dataset can have fields with the wrong types. That can lead to several issues, such as:
- performance (a string is way slower than, say, a boolean)
- compliance with master data management
- functional understanding (e.g., if I have a field called "modified" typed as string, I don't know whether it contains the modification date, information about the modification, etc., while if it's typed as date, I already know it's a date)
- ability to do type-specific operations (you can't multiply a string or extract a week from a string)
Right now, the existing tools are focused on strings, but I think we can do better.
Here is a proposition:
Entry: a dataframe.
Configuration:
- selection of fields, or
- selection of field types
- ability to run on a sample (optional)
Algo:
Engine | From type | To type | Condition | Conversion
Alteryx | Byte | bool | only 2 values: 0 and 1 | to be done
Alteryx | Int16 | bool | only 2 values: 0 and 1 | to be done
Alteryx | Int16 | Byte | min >= 0, max <= 255 | to be done
Alteryx | Int32 | bool | only 2 values: 0 and 1 | to be done
Alteryx | Int32 | Byte | min >= 0, max <= 255 | to be done
Alteryx | Int32 | Int16 | min >= -32,768, max <= 32,767 | to be done
Alteryx | Int64 | bool | only 2 values: 0 and 1 | to be done
Alteryx | Int64 | Byte | min >= 0, max <= 255 | to be done
Alteryx | Int64 | Int16 | min >= -32,768, max <= 32,767 | to be done
Alteryx | Int64 | Int32 | min >= -2,147,483,648, max <= 2,147,483,647 | to be done
Alteryx | Fixed Decimal | bool | only 2 values: 0 and 1 | to be done
Alteryx | Fixed Decimal | Byte | no decimal part, min >= 0, max <= 255 | to be done
Alteryx | Fixed Decimal | Int16 | no decimal part, min >= -32,768, max <= 32,767 | to be done
Alteryx | Fixed Decimal | Int32 | no decimal part, min >= -2,147,483,648, max <= 2,147,483,647 | to be done
Alteryx | Fixed Decimal | Int64 | no decimal part, min >= -9,223,372,036,854,775,808, max <= 9,223,372,036,854,775,807 | to be done
Alteryx | Float | bool | only 2 values: 0 and 1, or 0 and -1 | to be done
Alteryx | Float | Byte | no decimal part, min >= 0, max <= 255 | to be done
Alteryx | Float | Int16 | no decimal part, min >= -32,768, max <= 32,767 | to be done
Alteryx | Float | Int32 | no decimal part, min >= -2,147,483,648, max <= 2,147,483,647 | to be done
Alteryx | Float | Int64 | no decimal part, min >= -9,223,372,036,854,775,808, max <= 9,223,372,036,854,775,807 | to be done
Alteryx | Float | Fixed Decimal | to be done | to be done
Alteryx | Double | bool | only 2 values: 0 and 1, or 0 and -1 | to be done
Alteryx | Double | Byte | no decimal part, min >= 0, max <= 255 | to be done
Alteryx | Double | Int16 | no decimal part, min >= -32,768, max <= 32,767 | to be done
Alteryx | Double | Int32 | no decimal part, min >= -2,147,483,648, max <= 2,147,483,647 | to be done
Alteryx | Double | Int64 | no decimal part, min >= -9,223,372,036,854,775,808, max <= 9,223,372,036,854,775,807 | to be done
Alteryx | Double | Fixed Decimal | to be done | to be done
Alteryx | Double | Float | when no need for double precision | to be done
Alteryx | DateTime | Date | no hours, minutes, seconds | to be done
Alteryx | String | bool | only 2 values: 0/1, 0/-1, True/False, TRUE/FALSE, or equivalents in other languages such as VRAI/FAUX, Vrai/Faux | to be done
Alteryx | String | Byte | no decimal part, min >= 0, max <= 255 | to be done
Alteryx | String | Int16 | no decimal part, min >= -32,768, max <= 32,767 | to be done
Alteryx | String | Int32 | no decimal part, min >= -2,147,483,648, max <= 2,147,483,647 | to be done
Alteryx | String | Int64 | no decimal part, min >= -9,223,372,036,854,775,808, max <= 9,223,372,036,854,775,807 | to be done
Alteryx | String | Fixed Decimal | to be done | to be done
Alteryx | String | Float | when no need for double precision | to be done
Alteryx | String | Double | when need for double precision | to be done
Alteryx | String | Date | test on several date formats | to be done
Alteryx | String | Time | test on several time formats | to be done
Alteryx | String | DateTime | test on several datetime formats | to be done
Alteryx | WString | bool | only 2 values: 0/1, 0/-1, True/False, TRUE/FALSE, or equivalents in other languages such as VRAI/FAUX, Vrai/Faux | to be done
Alteryx | WString | Byte | no decimal part, min >= 0, max <= 255 | to be done
Alteryx | WString | Int16 | no decimal part, min >= -32,768, max <= 32,767 | to be done
Alteryx | WString | Int32 | no decimal part, min >= -2,147,483,648, max <= 2,147,483,647 | to be done
Alteryx | WString | Int64 | no decimal part, min >= -9,223,372,036,854,775,808, max <= 9,223,372,036,854,775,807 | to be done
Alteryx | WString | Fixed Decimal | to be done | to be done
Alteryx | WString | Float | when no need for double precision | to be done
Alteryx | WString | Double | when need for double precision | to be done
Alteryx | WString | String | Latin-1 characters only | to be done
Alteryx | WString | Date | test on several date formats | to be done
Alteryx | WString | Time | test on several time formats | to be done
Alteryx | WString | DateTime | test on several datetime formats | to be done
Alteryx | V_String | bool | only 2 values: 0/1, 0/-1, True/False, TRUE/FALSE, or equivalents in other languages such as VRAI/FAUX, Vrai/Faux | to be done
Alteryx | V_String | Byte | no decimal part, min >= 0, max <= 255 | to be done
Alteryx | V_String | Int16 | no decimal part, min >= -32,768, max <= 32,767 | to be done
Alteryx | V_String | Int32 | no decimal part, min >= -2,147,483,648, max <= 2,147,483,647 | to be done
Alteryx | V_String | Int64 | no decimal part, min >= -9,223,372,036,854,775,808, max <= 9,223,372,036,854,775,807 | to be done
Alteryx | V_String | Fixed Decimal | to be done | to be done
Alteryx | V_String | Float | when no need for double precision | to be done
Alteryx | V_String | Double | when need for double precision | to be done
Alteryx | V_String | String | same length | to be done
Alteryx | V_String | Date | test on several date formats | to be done
Alteryx | V_String | Time | test on several time formats | to be done
Alteryx | V_String | DateTime | test on several datetime formats | to be done
Alteryx | V_WString | bool | only 2 values: 0/1, 0/-1, True/False, TRUE/FALSE, or equivalents in other languages such as VRAI/FAUX, Vrai/Faux | to be done
Alteryx | V_WString | Byte | no decimal part, min >= 0, max <= 255 | to be done
Alteryx | V_WString | Int16 | no decimal part, min >= -32,768, max <= 32,767 | to be done
Alteryx | V_WString | Int32 | no decimal part, min >= -2,147,483,648, max <= 2,147,483,647 | to be done
Alteryx | V_WString | Int64 | no decimal part, min >= -9,223,372,036,854,775,808, max <= 9,223,372,036,854,775,807 | to be done
Alteryx | V_WString | Fixed Decimal | to be done | to be done
Alteryx | V_WString | Float | when no need for double precision | to be done
Alteryx | V_WString | Double | when need for double precision | to be done
Alteryx | V_WString | String | same length, Latin-1 characters only | to be done
Alteryx | V_WString | WString | same length | to be done
Alteryx | V_WString | V_String | Latin-1 characters only | to be done
Alteryx | V_WString | Date | test on several date formats | to be done
Alteryx | V_WString | Time | test on several time formats | to be done
Alteryx | V_WString | DateTime | test on several datetime formats | to be done
The output would be something like this:
Field | Input type | Proposition | Conversion
toto | float | int | formula (with example) / native tool / datetime conversion tool…
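As an illustration, here is a minimal Python sketch of the detection logic for a subset of the matrix above, assuming pandas; the function name is illustrative, and the scope is simplified (no Fixed Decimal, Time, True/False, or Latin-1 checks):

```python
import pandas as pd

def propose_narrower_type(s: pd.Series) -> str | None:
    """Return the narrowest candidate type the column fits, else None."""
    numeric = pd.to_numeric(s, errors="coerce")
    if numeric.notna().all():
        lo, hi = numeric.min(), numeric.max()
        if (numeric % 1 == 0).all():  # no decimal part
            if set(numeric.unique()) <= {0, 1}:
                return "bool"         # only 2 values: 0 and 1
            if lo >= 0 and hi <= 255:
                return "Byte"
            if lo >= -32768 and hi <= 32767:
                return "Int16"
            if lo >= -2147483648 and hi <= 2147483647:
                return "Int32"
            return "Int64"
        return "Double"
    # String-like columns: "test on several date formats"
    parsed = pd.to_datetime(s, errors="coerce")
    if parsed.notna().all():
        # DateTime collapses to Date when there is no time component
        return "Date" if (parsed.dt.normalize() == parsed).all() else "DateTime"
    return None

# Example: a V_String column that actually holds small integers
print(propose_narrower_type(pd.Series(["12", "200", "0"])))  # -> Byte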
Best regards,
Simon
Hello,
This is a feature I haven't seen in any data preparation/ETL tool. The core feature is to detect the unique key in a dataframe. More often than not, you have to deal with a dataset without knowing what makes a row unique. This can lead to misinterpreting the data, Cartesian products at joins, and other funny stuff.
How do I imagine that?
A specific tool in the Data Investigation category.
Entry: one dataframe; the ability to select fields or check all; the ability to specify a maximum number of fields per combination (empty or 0 = no max).
Algo: it tests the distinct count of every combination of fields against the count of rows.
Result: one row per field combination that works. If there is no result: "no field combination is unique; check for duplicates or the need for aggregation upstream".
Example:
order_id | line_id | amount | customer | site
1 | 1 | 100 | A | U_250
1 | 2 | 12 | A | U_250
1 | 3 | 45 | A | U_250
2 | 1 | 75 | A | U_250
2 | 2 | 12 | A | U_250
3 | 1 | 15 | B | U_250
4 | 1 | 45 | B | U_251
The user will select every field except Amount (knowing that Amount would make no sense in a key).
The algo will test the following keys against the number of rows (7):
- each individual field
- each combination of two fields
- each combination of three fields
- each combination of four fields
And it gives something like this:
choice | number of fields | field combination
very good | 2 | order_id, line_id
average | 3 | order_id, line_id, customer
average | 3 | order_id, line_id, site
bad | 4 | order_id, line_id, site, customer
… | … | …
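Here is a minimal Python sketch of the algorithm, assuming pandas; the names (find_unique_keys, max_fields) are illustrative, and the "choice" ranking reduces to a simple rule (fewer fields = better, since results come out shortest first):

```python
import pandas as pd
from itertools import combinations

def find_unique_keys(df: pd.DataFrame, fields: list[str], max_fields: int = 0):
    """Test every field combination's distinct count against the row count."""
    n_rows = len(df)
    limit = max_fields or len(fields)
    results = []
    for size in range(1, limit + 1):
        for combo in combinations(fields, size):
            # A combination is a candidate key when its distinct count
            # equals the number of rows.
            if len(df[list(combo)].drop_duplicates()) == n_rows:
                results.append(combo)
    if not results:
        print("no field combination is unique; check for duplicates "
              "or the need for aggregation upstream")
    return results  # shorter combinations first, i.e. better choices

df = pd.DataFrame({
    "order_id": [1, 1, 1, 2, 2, 3, 4],
    "line_id":  [1, 2, 3, 1, 2, 1, 1],
    "customer": ["A", "A", "A", "A", "A", "B", "B"],
    "site": ["U_250"] * 6 + ["U_251"],
})
# Amount is excluded by the user, as in the example above
print(find_unique_keys(df, ["order_id", "line_id", "customer", "site"]))
# -> [('order_id', 'line_id'), ('order_id', 'line_id', 'customer'), ...]
```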
Best regards,
Simon