Hello,
After using the new "Image Recognition Tool" for a few days, I think it could be improved:
> by displaying the dimensional constraints next to each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that each label has an equivalent number of images),
> lastly, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it requires RGB images); a rough conversion workaround is sketched below.
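In the meantime, here is a minimal workaround sketch for the black & white limitation, assuming Pillow is available outside the tool; the folder paths are hypothetical:

```python
# Minimal workaround sketch: convert grayscale images (e.g. MNIST exported as PNGs)
# to 3-channel RGB so a tool that insists on RGB input will accept them.
# Assumes Pillow is installed; the folder paths are hypothetical examples.
from pathlib import Path
from PIL import Image

src = Path("mnist_grayscale")   # hypothetical folder of 1-channel images
dst = Path("mnist_rgb")         # hypothetical output folder
dst.mkdir(exist_ok=True)

for png in src.glob("*.png"):
    # convert("RGB") replicates the single luminance channel into R, G and B
    Image.open(png).convert("RGB").save(dst / png.name)
```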
Question: will you allow the user to choose between CPU and GPU usage in the future?
In any case, thank you again for this new tool; it is certainly perfectible, but very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again
Kévin VANCAPPEL (France ;-))
The ARIMA tool provides a ton of valuable information with just a small amount of effort. And the "I" anchor efficiently outputs test results to evaluate the effectiveness of the model.
Annoyingly, in the Browse tool off the "I" anchor, test definitions such as MPE/MAPE/MASE cannot be read when hovering without widening the Configuration window beyond the chart above. The definitions extend wider than the chart itself, so the user has to manually widen the Configuration window to read them and then narrow it again.
The idea would be for the hover tooltips to be no wider than the chart above. That would avoid the repetitive resizing of the Configuration window.
It would be nice to have 2 tier tool annotations.
Tier 1 would be akin to an H1 title and serve as a condensed descriptive title the user creates.
Tier 2 would be analogous to the current annotation option and provide a mechanism for recording specific details about the operation a particular tool performs (relative to the workflow).
The default might be to show Tier 1 and hide Tier 2. A user would have the option to show all Tier 2 annotations or choose to show select Tier 2 annotations (much like the current annotation show/hide options). An alternative option might be to depict Tier 2 as a hover-over tooltip.
Thoughts?
On the canvas, underneath the Run button, there are zoom out and zoom in buttons. It would be lovely if a number box sat between them indicating the current zoom level of the canvas. This would operate much like my web browser, which typically shows 100% unless I have zoomed in or out. Bonus points if the number box is clickable to reset to the default zoom level.
Hi Team,
Since the Formula tool's design lets you stack multiple formulas in one tool, a few more things should come with it:
1. An error icon on any formula that has an error.
Can you find which row has the error in seconds? We had to count to find which formula had the error! And how about now? At the very least, add an icon or some other obvious visual marker that lets us find it in seconds, without counting.
2. Buttons to expand all / collapse all.
It's normal to need to review formulas again in the future. So why should we have to click them one by one to view all the formulas?
Please consider making the Count Records tool configurable so that users can receive a visual read of results on the canvas. This would quickly assist in verifying whether a workflow is functioning as expected. Currently there is no in-canvas visual cue tied to the counter. If a user is expecting a certain count result (e.g. zero), the user has to click on the Count Records tool to see if the result meets expectations or not. Users may spend a lot of time checking each stage of a workflow to ensure everything is flowing appropriately. A visual cue of results would reduce that time. Outside of runtime errors, there is currently no visual cue to indicate a possible problem or unexpected exception to a filter or other macro.
Two suggestions:
1) Allow users to color code specific count results or ranges (e.g. if a user is expecting a zero count, allow them to turn the counter red for anything other than zero and green when the count is zero). Or allow them to set a color range depending on the count total (e.g. a user may need a visual tolerance indicator with a count under 100 as green, 101-200 as yellow, 201-300 as orange and 300+ as red); a rough sketch of that banding logic follows this list.
2) Show the actual count in the Count Record macro icon.
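To make the tolerance bands concrete, here is a minimal sketch of the threshold logic from suggestion 1; the bands and colors are just the example values above, not an existing Alteryx setting:

```python
# Sketch of the color-band logic from suggestion 1: map a record count to a
# status color. The bands below are only the example values from the idea.
def count_color(count: int) -> str:
    if count <= 100:
        return "green"
    elif count <= 200:
        return "yellow"
    elif count <= 300:
        return "orange"
    return "red"

print(count_color(0), count_color(150), count_color(450))  # green yellow red
```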
Thank you!
Would like to be able to connect to the Stibo STEP system/database as a Data Source. Some people have the Stibo server on-premise while others have it hosted in Amazon (AWS).
Not sure what else I could provide at this point for further details.
Hello Alteryx Gurus -
I've got some workflows that run daily, but there are times, depending on the breaks, wherein I don't get any data from one of my data sources. Which is actually fine, nobody did Job X today. But it makes Alteryx puke out and I get an error message emailed to me. Ultimately, I've got to hop into the rather voluminous log entries to determine if this was a data stream not initialized / was empty error, or something else that I actually need to care about.
That being said, in the coding realm, it is relatively simple to look for specific flavors of exceptions and then just eat them without notifying people. So, why not add something to the runtime / events panel for emailing at error time to allow for ignoring data stream not initialized errors? In this way, I could get notified when there is a real error I need to pay attention to, and not get notified when there is no new data, which isn't really that big a deal.
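Until something like that exists natively, the idea can be sketched in plain Python: scan the run's log output and only alert on messages that are not on an "ignore" list. The log path and the exact wording of the benign message are assumptions for illustration, not official Alteryx strings:

```python
# Sketch: classify logged errors and only alert on the ones that matter.
# The log path and the "ignorable" phrases are assumptions for illustration,
# not official Alteryx message strings.
IGNORABLE = ("data stream not initialized", "is empty")

def real_errors(log_path: str) -> list[str]:
    alerts = []
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if "Error" in line and not any(p in line.lower() for p in IGNORABLE):
                alerts.append(line.strip())
    return alerts

# Only send the notification e-mail when real_errors("workflow.log") returns anything.
```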
Thank you for attending my TED talk on enhanced error reporting and exception classification capabilities.
Hello
I have searched the community but haven't found any obvious solutions to this.
When using a Cross Tab I often find that there shouldn't be any aggregated values, and if there are, it means there is an issue with my data or workflow.
Therefore I think a useful feature would be an option for the Cross Tab tool to return an error if it tries to aggregate any values.
I have a workaround using a Summarize tool to count the non-unique records and then a Test tool to check for duplicates, but I think this could be a useful addition to the tool.
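For reference, the workaround logic (count records per key combination and error out if anything would be aggregated) looks roughly like this in pandas; the column names and data are made up for the example:

```python
# Sketch of the described workaround: fail loudly if a cross tab would have to
# aggregate, i.e. if any (group, header) combination occurs more than once.
# The column names and data are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group":  ["A", "A", "B"],
    "header": ["x", "x", "y"],   # ("A", "x") appears twice -> would aggregate
    "value":  [1, 2, 3],
})

dupes = df.groupby(["group", "header"]).size()
if (dupes > 1).any():
    raise ValueError(f"Cross tab would aggregate these keys:\n{dupes[dupes > 1]}")
```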
Thanks
Hi,
Can we get this list updated?
https://community.alteryx.com/t5/Videos/Video-Training-Index/td-p/45161
I think it is one of the most interesting sources of knowledge on the Alteryx Community, but unfortunately it hasn't been updated in the last year.
Is it possible that new sessions will be added there?
I would like to lasso or select multiple tools and have a count of selected tools. Perhaps this count could be in a tool tip or somewhere else.
I would like to propose an idea for the evolution of the Input and Output tools in relation to their compatibility with the Data Connections configured in Alteryx Settings.
Today it is possible to create a Data Connection of the SQL Bulk Loader (SSVB) type and to use that Data Connection in an Input tool. The configuration works (choosing the table, the query, ...), and when you run the workflow it succeeds and you get the data.
On the other hand, when you click on the Input tool again, there is an error message and you cannot retrieve the stored query because the File Format is unknown.
After analysis with Support, this is a compatibility problem: SSVB is not supported by the Input tool in Alteryx, only by the Output tool.
My proposal would therefore be to validate, during the configuration of the Input and Output tools, that the chosen Data Connection is compatible with the type of tool being used.
Thanks for your feedback.
Regards,
Psyrio
I suggest that the current column filter feature cover the entire data set, not just the partial results. This would be especially useful when, after running a complicated workflow, you just want to check the data at particular nodes on the canvas.
Today the Auto Field tool converts fields to Byte by default when it considers the content suitable, even when we expect text in them; the field may simply not be populated in the current context but could be later.
The idea would be to let us choose which default type is applied to text or empty fields, rather than the default Byte, because a Byte field is not recognized in a formula using IN, for example, which can produce errors in downstream workflows.
Hello,
My idea concerns the current Download tool, which does not handle errors: it continues on its way even if, for example, it does not find a file at the submitted URL, and if it does not find the hostname it crashes.
For a user with several URLs in a row, this is penalizing.
When downloading files to disk, it still writes a file (thus overwriting the existing one), but one that cannot be opened and is not in the correct format (a blocked file!), which then causes problems in the workflows that read these files.
The idea would be to add a second output to this tool for all the URLs where there was a problem (non-existent hostname, file not found, HTTP error) and keep one for the URLs where the expected content was received, so as not to disrupt the processing and to allow better management of error cases.
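As a rough illustration of the proposed behaviour (two outputs, one for successful URLs and one for failed ones), here is a sketch using the Python requests package; the URLs are made-up examples:

```python
# Sketch of the requested behaviour: route each URL to a "success" or "error"
# output instead of letting one bad URL break the run or write a broken file.
# Assumes the requests package; the URLs are made-up examples.
import requests

urls = ["https://example.com/ok.csv", "https://no-such-host.invalid/file.csv"]
succeeded, failed = [], []

for url in urls:
    try:
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()               # HTTP 4xx/5xx goes to the error output
        succeeded.append({"url": url, "content": resp.content})
    except requests.RequestException as exc:  # bad hostname, timeout, HTTP error
        failed.append({"url": url, "error": str(exc)})
```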
Regards,
Bruno
When using Dynamic Input with databases, the Database may be returning errors or other information that the tool cannot parse into a dataset.
It would be great if we could see the 'raw' response from the database somehow, as this might provide insight into why we are not getting the expected results.
If the tool could output an optional error column that has the unparsed response from the database server, it might allow us to debug the problem ourselves.
If the returned data is actually a string response from the database, but one that is flawed in some way, we might still be able to parse data out of it to 'ride over' the error.
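To illustrate the last point, if the raw response were exposed, even a flawed but mostly tabular reply could sometimes still be salvaged; a hypothetical sketch (the response text is invented):

```python
# Hypothetical sketch: salvage rows from a raw database response that carries
# a non-tabular prefix. The response text below is invented for illustration.
import csv
import io

raw = "WARNING: statistics are stale\nid,amount\n1,10.5\n2,7.0\n"

lines = [line for line in raw.splitlines() if not line.startswith("WARNING")]
rows = list(csv.DictReader(io.StringIO("\n".join(lines))))
print(rows)  # [{'id': '1', 'amount': '10.5'}, {'id': '2', 'amount': '7.0'}]
```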
Configuration window - Add a feature to zoom in or out of the Configuration window, similar to the canvas. There is a lot going on in the Configuration window, and it would be helpful (especially for those of us with eyesight challenges) to be able to zoom in and out just as we can on the canvas.
Migrate old R based charts and create new statistical charts in the interactive chart tool to provide enhanced statistical charting and visual data exploration capabilities.
This includes:
See these URLs for more examples:
For heavy workflows (e.g. reading a massive amount of data, processing it and storing large datasets through In-DB tools in Cloudera), a random error is sometimes generated: 'General error: Unexpected exception has been caught'.
This seems to be due to Kerberos ticket expiration, and the related setting may not be modifiable by the Alteryx developer (especially when managed by GPO). The suggestion is to enhance the In-DB tools so that they can automatically renew the Kerberos ticket, as other applications do.
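As a stop-gap, the renewal can be scripted around the job today, which is roughly what the In-DB tools could do themselves; a hedged sketch (the keytab path and principal are placeholders, and this assumes the MIT Kerberos kinit client is available on the machine):

```python
# Stop-gap sketch: refresh the Kerberos ticket before (or periodically during)
# a long-running job. Keytab path and principal are placeholders; assumes the
# MIT Kerberos kinit client is on the PATH.
import subprocess

subprocess.run(
    ["kinit", "-kt", "/path/to/service.keytab", "svc_alteryx@EXAMPLE.COM"],
    check=True,
)
```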
Br, Lookman
Hello,
As of today, only a few packages are embedded with the Alteryx Python tool. However:
1/ Python is becoming more and more popular; we will use this tool intensively in the coming years.
2/ Python is built on existing packages; that is the strength of the language.
3/ In Alteryx, adding a package is not that easy: you need admin rights, and if you want your colleagues to open your workflow, they have to install the package themselves too. In corporate environments, that means losing time, sometimes several days on a project.
Personally, I would like Polars, DuckDB and similar packages, which are way faster than pandas.
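For context, this is what adding an extra package currently looks like from inside the Python tool, using plain pip against the embedded interpreter; whether it succeeds depends on the admin rights and proxy settings mentioned above:

```python
# Sketch: install an extra package (e.g. polars) into the embedded Python
# environment from a Python tool cell. Whether this succeeds depends on the
# admin rights and network/proxy settings mentioned above.
import subprocess
import sys

subprocess.check_call([sys.executable, "-m", "pip", "install", "polars"])

import polars as pl
print(pl.__version__)
```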
(1) The green banner saying that the workflow has finished running should stay until dismissed
(2) The indicator on the tabs showing which workflows had run should be colour coded (still running / completed without errors / completed with errors)
Thanks!