Hello,
After using the new "Image Recognition Tool" for a few days, I think you could improve it:
> by listing the image dimension constraints in front of each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that there is an equivalent number of images for each of the labels),
> at the very least, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me that it strictly requires RGB images).
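In the meantime, here is a minimal sketch of the workaround I use for the RGB requirement, assuming Pillow is available and the MNIST digits have been exported as single-channel PNG files (the file names are hypothetical): the grayscale channel is simply replicated into three channels before feeding the images to the tool.

```python
from PIL import Image

# Hypothetical file name; any single-channel (black & white) image works the same way.
gray = Image.open("mnist_digit_0.png").convert("L")

# Replicate the single grayscale channel into R, G and B so the
# Image Recognition Tool receives the 3-channel input it currently insists on.
rgb = gray.convert("RGB")
rgb.save("mnist_digit_0_rgb.png")
```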
Question: do you plan to allow the user to choose between CPU and GPU usage in the future?
In any case, thank you again for this new tool. It can certainly still be improved, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again
Kévin VANCAPPEL (France ;-))
The current Azure Data Lake tool appears to lack the ability to connect if fine-grained access is enabled within ADLS using Access Control Lists (ACLs).
It does work when Role Based Access Control (RBAC) is used, but ACLs provide more fine-grained control over the environment.
For example, using any of the current auth types (End-User Basic, End-User (Advanced), or Service-to-Service), the connector works if the user has RBAC access to ADLS.
In that scenario, though, the user would be granted access to an entire container, which isn't ideal.
The ideal authentication would be at the directory level, to best control access and enable self-service data analytics teams to use Alteryx.
The existing tool appears to be limited: if the user does not have access at the container level but only at the directory level, the tool cannot complete the authentication request. Supporting this would require the tool's input to let the user select, from the drop-down, a path that includes both the container (aka file system name) and the directory.
Access control model for Azure Data Lake Storage Gen2 | Microsoft Docs
Example A
Example B
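To illustrate the directory-scoped access pattern being requested, here is a minimal sketch (assuming the azure-identity and azure-storage-file-datalake Python packages, and hypothetical account, container, and directory names) of a service principal that only has ACL permissions on a single directory listing and reading files there:

```python
from azure.identity import ClientSecretCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Service principal details (hypothetical values); this principal has ACLs on
# the "projects/team-a" directory only, with no RBAC role on the container.
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<client-id>",
    client_secret="<client-secret>",
)

service = DataLakeServiceClient(
    account_url="https://mystorageaccount.dfs.core.windows.net",
    credential=credential,
)

# No container-level rights are needed; every call is scoped to the directory.
file_system = service.get_file_system_client("mycontainer")
for path in file_system.get_paths(path="projects/team-a"):
    print(path.name)

data = (
    file_system.get_directory_client("projects/team-a")
    .get_file_client("input.csv")
    .download_file()
    .readall()
)
```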
It is great to see the ability to stage data for bulk loading into Databricks in S3 and ADLS. Previously this only appeared to allow staging in Databricks DBFS.
However, the current connector included in Designer 2022.1 has a key gap in functionality with ADLS Gen2:
The only authentication method provided for the ADLS storage is through a shared key.
Shared keys provide access that is scoped to the entire storage account rather than to a specific directory.
We do not provide users the shared key for the ADLS storage, thus our users cannot take advantage of this new feature.
The preferred methods of authentication to ADLS would be Azure AD based: either RBAC or ACLs.
Either of these options can be provided through a service principal, with a tenant ID, client ID, and client secret as inputs to the bulk load tool.
This request would specifically be to allow the ACL authentication. ACLs would help empower our self-service data analysts and data scientists, who could have access to a specific container.
For example
storageAccount/Container/directory
The ACL access in this tool would allow the Alteryx tool to follow the same access patterns, where fine-grained access is granted at the directory level and not at the storage account or container level. This would allow self-service analysts and data scientists to use Alteryx as they need within their directory, without granting higher-level access.
Access control model for Azure Data Lake Storage Gen2 | Microsoft Docs
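As a sketch of how those three inputs could be used (assuming the same azure-identity and azure-storage-file-datalake packages and hypothetical names as above), staging a file into the directory-scoped location for the bulk load might look like this:

```python
from azure.identity import ClientSecretCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Tenant ID, client ID and client secret would be the three inputs to the bulk
# load tool (hypothetical values); no shared key is involved.
credential = ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>")
service = DataLakeServiceClient(
    account_url="https://storageAccount.dfs.core.windows.net",
    credential=credential,
)

# Stage the extract under storageAccount/Container/directory, the only path the
# service principal's ACLs allow it to write to.
directory = service.get_file_system_client("Container").get_directory_client("directory")
with open("extract_to_load.csv", "rb") as f:
    directory.create_file("extract_to_load.csv").upload_data(f, overwrite=True)
```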
After multiple years of using Alteryx, I found that the tabbed document feature was left out of 2022.1. This feature allows for a much cleaner canvas when exploring workflow and output data. I view it as a basic function of Alteryx, and I was surprised to find out that the development team intentionally omitted it. I really don't want to revert to older versions, but that may be the only way to keep Alteryx comfortable to work in.
At the moment, at least for Postgres and ODBC connections, DCM only supports a named DSN that must be installed on each machine running Designer or Server. However, the ODBC admin function is admin-only within my company, which makes DCM more trouble than it is worth.
Connection strings work well in the workflows, have been implemented on the gallery before, and do not require access to the ODBC admin to implement. Could DCM please be improved to support native connection strings?
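For illustration, this is the kind of DSN-less connection string meant by "native connection strings" (a minimal sketch using pyodbc, with hypothetical server, database, and credential values); it needs no entry in the ODBC Administrator, unlike a named DSN:

```python
import pyodbc

# DSN-less Postgres ODBC connection string (hypothetical values). Compare with a
# named DSN, which would require "DSN=MyNamedDsn" to exist in the ODBC Administrator.
conn_str = (
    "Driver={PostgreSQL Unicode};"
    "Server=dbhost.example.com;Port=5432;"
    "Database=analytics;Uid=svc_alteryx;Pwd=<secret>;"
)
conn = pyodbc.connect(conn_str)
print(conn.cursor().execute("SELECT version();").fetchone())
```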
The order of the join fields affects the output ordering.
For more complex joins it would be nice to have up and down arrows, much like the Summarize tool:
Often I need to add filters or other tools early on, after the workflow has already been mostly built. If a tool connects to just one other tool, I can drag the Filter over the connecting line and insert it seamlessly. However, in large workflows there is often this situation:
The Filter will only connect to one of the lines I'm hovering over. It would be awesome if I could connect to all the lines simultaneously and drop the tool into the connections to achieve this:
I use the Dynamic Input tool a lot. When reading a list of sheets, I will simply put a.xlsx|||Sheet1 (as a placeholder file name) in the template.
However, when the workflow runs, the tool verifies the existing template file (a.xlsx) instead and stops with an error (old file not found).
Suggestion:
1. verify the new path (file) rather than the old path (file), or
2. provide an option to ignore the error
Currently the Filter tool does not support a multi-variable value check like a = b = c. It would be nice if the Filter tool supported a multi-variable check in one go.
I've been using the Intelligence Suite to automate building models with assisted modelling, and it works great. However, if I then output the model to the Python tool, I run into a few issues. The Intelligence Suite relies on the EvalML library, so I wanted to use the Python tool to apply some of EvalML's other features, such as prediction explanations, to the model created.
What I found was that the Intelligence Suite for 2021.3.5 uses EvalML version 0.13.2, which is from two years ago, meaning a lot of the extra features are not available.
It would be great if the EvalML package were updated with each new Alteryx Intelligence Suite release.
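As a rough illustration of what I'd like to do in the Python tool (a sketch only: it assumes a recent EvalML release where `explain_predictions` is available, the exact module path and signature vary between EvalML versions, and `pipeline`, `X`, and `y` stand in for the assisted-modelling output and training data):

```python
import evalml
from evalml.model_understanding.prediction_explanations import explain_predictions

# Check which EvalML the Python tool is actually using; with the bundled 0.13.2
# the prediction-explanation features below are not available.
print(evalml.__version__)

# `pipeline`, `X`, and `y` are assumed to come from the assisted-modelling output.
report = explain_predictions(
    pipeline=pipeline,
    input_features=X.head(5),
    y=y.head(5),
    indices_to_explain=list(range(5)),
    top_k_features=3,
)
print(report)
```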
I learnt Alteryx for the first time nearly 5 years ago, and I guess I've been spoilt with implicit sorts after tools like joins: if I want to find the top 10 after joining two datasets, I know that the data coming out of the join will be sorted. However, with how the AMP engine works, this implicit sort cannot be relied upon. The current solution is to turn on compatibility mode, however...
1) It's a hidden option in the runtime settings, and it can't be turned on by default as it's set only at the workflow level.
2) I imagine that compatibility mode runs a bit slower, and I don't need an implicit sort after every join, cross tab, etc.
So could the affected tools (Engine Compatibility Mode | Alteryx Help) have a tick box within the tool, so the user can decide at the tool level rather than the canvas level what behaviour they want? And maybe change the name from "compatibility mode" to "sort my data".
When a user connects the Input tool to a database, the Tables view lists the tables the user can see. Please add a search feature and an option to export the list.
This is purely aesthetic, but it would be great to have a button to auto-format tool positions on the canvas. The idea would be similar to a feature many IDEs come with to auto-format code, so that the indentation is corrected and duplicate return characters are removed.
We currently have the Align and Distribute capabilities, which are great. This could be expanded to the entire workflow, so that the canvas could look at which tools connect to which and structure their positioning around that. I think it would be a great clean-up feature after finishing a workflow.
I recently began using the SharePoint Files v2.0.1 tools to read and write data. The SharePoint Files Output tool allows you to take a sheet or file name from a column, but that column is still included in the output. The standard Output Data tool has a "Keep Field in Output" checkbox that allows you to control whether the column stays in the XLSX or CSV file. It would be great if this same functionality could be included in the SharePoint Files Output tool.
Hello!
I found this quirk whilst working on a fairly large workflow, where I had multiple tools cached to keep things quick. I had moved one of the tools on the canvas into a pre-existing container, and it removed the caching on my whole workflow.
Steps to reproduce:
1) Set up a super basic workflow (or any workflow):
2) Cache part of the workflow:
3) Drag one of the tools (in this case the Formula) into the container:
As you can see, the workflow is no longer cached and I have to re-cache it.
This would be a welcome change, as that behaviour is unexpected to me, and I imagine to others too. A workflow no longer being cached can cost the developer a lot of time (and potentially resources, if hitting a Snowflake instance, for example).
Thanks,
TheOC
Once I've built a workflow, I often have to go through the process of removing and combining tools, such as Select and Formula tools, which could be simplified to just one tool. It would be great to have an automated feature that could detect groups of tools which could be simplified and then automatically combine them into one step, improving/simplifying my workflow.
If the workflow configuration had a run for 'x' number of iterations option, it would make debugging macros a lot easier. My current method consists of copying results, changing inputs, and repeating until I find my problem, which feels very manual.
Hello,
I see no reason why Insight is deactivated by default... it would be much smarter to make it active by default. That's all, five minutes of a developer's time.
Best regards,
Simon
When I am working with 2 different versions of Alteryx (e.g., a current version and a beta version), I set a different background color for each version through user settings. This is great because I don't want to accidentally modify a current workflow when beta testing; the canvas color is a clear but subtle indicator of which version I'm working in.
Similarly, I'd like the option to set a custom canvas color for each workflow. Use case: I have two versions of a workflow, e.g., one in production and one in development, both in the same version of Alteryx. I don't want to accidentally modify the production workflow instead of the dev workflow. My current workarounds are to open the workflows in two different windows on separate monitors, or to add an obtrusive comment box marking the dev version as in development. Neither is a great option. If I could set the canvases of the two workflows to different colors, that would reduce the possibility of making this mistake.
My idea is to expand the custom canvas-coloring functionality to allow users to set a custom canvas color for each workflow.
This is not exactly a new feature but I didn't know where else to send it.
I just received an email from Alteryx and I noticed that the footer is an image and not dynamic.
And there you can see that the year is still 2021. A good idea would be to insert code that grabs the year automatically from the current date.
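For instance, a minimal sketch of generating the footer text dynamically (the footer wording here is hypothetical; the point is only that the year comes from the current date rather than being baked into a static image):

```python
from datetime import date

# Build the copyright line from the current date instead of hard-coding "2021".
footer = f"© {date.today().year} Alteryx, Inc. All rights reserved."
print(footer)
```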
Hello all,
As of today, Alteryx offers the Intelligence Suite, with amazing tools rarely seen in a data tool, including OCR, image analysis, etc.: https://www.alteryx.com/fr/products/intelligence-suite
But... these wonderful tools are part of a paid add-on. And this is what is problematic:
-Alteryx is already an expensive tool. It has huge value, but it is honestly expensive.
-The tools in the Intelligence Suite are not common in data tools because you won't use them often. And paying for tools you use only once or twice a month is not easy to justify.
So I suggest incorporating the Intelligence Suite into the core product. The benefit to Alteryx users is evident, so let's look at the benefits for Alteryx:
-more user satisfaction
-a simpler catalog
-adding a lot of value to Designer, with the ability to communicate widely on the topic.
-almost no cost: most customers won't buy the Intelligence Suite anyway.
Best regards,
Simon