Hello,
After using the new "Image Recognition Tool" for a few days, I think it could be improved:
> by listing the input dimension constraints in front of each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that there is an equivalent number of images for each of the labels),
> lastly, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me that it strictly requires RGB images).
Question: will you, in the future, allow the user to choose between CPU and GPU usage?
In any case, thank you again for this new tool. It is certainly perfectible, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again.
Kévin VANCAPPEL (France ;-))
Just ran into this today. I was editing a local file that is referenced in a workflow for input.
When I tried to open the workflow, Alteryx hung.
When I closed the input file, Alteryx finished loading the workflow.
If the workflow were trying to run, I could understand this behavior, but it seems odd when merely opening the workflow.
Yes, I know, it's weird to have a situation where a decision tree decides that no branches should be created, but it happened, and caused great confusion, panic, and delay among my students.
v1.1 of the Decision Tree tool does a hard stop and outputs nothing when this happens, not even the successfully-created model object, while v1.0 of the tool still creates the model ("O") and the report ("R") ... just not the "I" (interactive report). Using the v1.0 version of the tool, I traced the problem down to this call:
dt = renderTree(the.model, tooltipParams = tooltipParams)
Where `renderTree` is part of the `AlteryxRviz` library.
I dug deeper and printed a traceback.
9: stop("dim(X) must have a positive length")
8: apply(prob, 1, max) at <tmp>#5
7: getConfidence(frame)
6: eval(expr, envir, enclos)
5: eval(substitute(list(...)), `_data`, parent.frame())
4: transform.data.frame(vertices, predicted = attr(fit, "ylevels")[frame$yval], support = frame$yval2[, "nodeprob"], confidence = getConfidence(frame), probs = getProb(frame), counts = getCount(frame))
3: transform(vertices, predicted = attr(fit, "ylevels")[frame$yval], support = frame$yval2[, "nodeprob"], confidence = getConfidence(frame), probs = getProb(frame), counts = getCount(frame))
2: getVertices(fit, colpal)
1: renderTree(the.model)
The problem is that `getConfidence` pulls `prob` from the `frame` given to it, and in the case of a model with no branches, `prob` is a list. And `dim()` of a list returns NULL. Ergo, explosion.
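A minimal sketch of the failure mode (my own illustration; `prob` here is a stand-in, not the actual AlteryxRviz internals):

```r
# dim() on a plain list is NULL, so apply() stops immediately with the
# same error that tops the traceback above.
prob <- list(0.62, 0.38)  # stand-in for what getConfidence() sees for a branchless tree
dim(prob)                 # NULL
apply(prob, 1, max)       # Error: dim(X) must have a positive length
```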
Toy dataset that triggers the error: a sample from the Titanic Kaggle competition (in which my students are competing). Predict "Survived" by "Pclass".
Dear Team
When developing a heavy workflow, suppose we are in the last section of development. Every time we run the workflow, it starts running from the Input tool. Instead, we could have a checkpoint tool where the data flow is frozen up to the checkpoint, so that running the workflow starts from that specific checkpoint's input.
This would reduce my development time a lot. Please advise.
Thanks in advance.
Regards,
Gowtham Raja S
+91 9787585961
The error message is:
Error: Cross Validation (58): Tool #4: Error in tab + laplace : non-numeric argument to binary operator
This is odd, because I see that there is special code that handles naive bayes models. It seems that the model$laplace parameter is _not_ NULL by the time it hits `update`. I'm not sure yet what line is triggering the error.
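For what it's worth, here is a hedged illustration of how that message can arise, assuming `laplace` reaches the arithmetic as something non-numeric (for example, an unevaluated piece of the model call). This is a guess at the mechanism, not the confirmed cause:

```r
# Adding a non-numeric object to a table reproduces the exact error text.
tab <- table(c("yes", "no", "yes"))
laplace <- as.symbol("laplace")  # hypothetical: a language object instead of a number
tab + laplace
# Error in tab + laplace : non-numeric argument to binary operator
```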
The CrossValidation tool in Alteryx requires that if a union of models is passed in, all models to be compared must be induced on the same set of predictors. Why is that necessary? Isn't it only comparing prediction performance for the plots, while doing the predictions separately? The tool runs fine when I remove that requirement. Theoretically, model performance can be compared using nested cross-validation: a deeper level chooses a set of predictors, and an upper level assesses the model. So I don't immediately see an argument for enforcing this requirement.
This is the code in question:
if (!areIdentical(mvars1, mvars2)) {
  errorMsg <- paste("Models", modelNames[i], "and", modelNames[i + 1],
                    "were created using different predictor variables.")
  stopMsg <- "Please ensure all models were created using the same predictors."
}
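To make the nested cross-validation argument concrete, here is a minimal sketch in base R (my own illustration, with the inner predictor-selection step simplified to a training-error comparison; a full version would cross-validate there too):

```r
# Nested CV sketch: predictors are chosen per outer fold (inner step), then the
# chosen model is assessed on the held-out fold (outer step), so the models
# being compared need not share one fixed predictor set.
set.seed(1)
outer_folds <- split(sample(nrow(mtcars)), rep(1:5, length.out = nrow(mtcars)))
errs <- sapply(outer_folds, function(test_idx) {
  train <- mtcars[-test_idx, ]
  cands <- setdiff(names(train), "mpg")
  # inner step: pick the single best predictor using training data only
  inner_err <- sapply(cands, function(v)
    mean((train$mpg - predict(lm(reformulate(v, "mpg"), train), train))^2))
  best <- names(which.min(inner_err))
  # outer step: assess the chosen model on data the selection never saw
  fit <- lm(reformulate(best, "mpg"), train)
  mean((mtcars$mpg[test_idx] - predict(fit, mtcars[test_idx, ]))^2)
})
mean(errs)  # nested-CV estimate of generalization error
```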
As an aside, why does the CV tool still require Logistic Regression v1.0 instead of v1.1?
And please, please, please can we get the Model Comparison tool built into Alteryx, and upgraded to accept v1.1 logistic regression and other things that don't pass `the.formula`. It is essential for teaching predictive analytics using Alteryx.
Submitting this idea from a different category as I couldn't find an appropriate category.
I think it is important to have offline documentation in PDF or HTML format (or both) with each major release (or minor release where new features are introduced, such as Alteryx Designer 2025.1.2), at least for the on-prem products (such as Designer and Server). This would cover scenarios where internet connectivity is limited or non-existent, or where the user wants to reach part of the documentation quickly (especially in PDF, by searching from the index rather than navigating between web pages).
It would be great for the Interface tools, such as Action and Error Message, to have the modern expression editor, not only for highlighting and autocomplete but also for the "preview result for the first row" feature (the first row being the only row when you are writing an expression for an Interface tool).
I think this feature is especially necessary because of the "Update Raw XML with Formula" option, which requires you to clearly see the output of your formula, which in turn usually requires first testing the XML in a separate workflow with a Text Input tool.
While the JSON Parse tool is useful for processing data, a name corresponding to a JSON value may itself contain a period (.), and this is problematic when you are dealing with nested JSON values, because Alteryx automatically uses the period character as the delimiter in the JSON_Name column.
I would like to propose an enhancement where the user can select the delimiter used to build the JSON_Name column. The user could then pick a character not used in any of the JSON names, eliminating the need for extra steps (like finding each name with a period and/or writing specialized RegEx patterns) to deal with names that contain a period.
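A small, made-up illustration of the collision, assuming the tool's usual flattening behavior:

```
Input JSON:  {"server.name": "alpha", "server": {"name": "beta"}}

JSON_Name      JSON_ValueString
server.name    alpha    <- the period is part of the key itself
server.name    beta     <- the period was inserted as the nesting delimiter
```

With a user-selectable delimiter (say, a pipe), the second row would become server|name and the two values would no longer collide.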
You've built a workflow, but you want GIS to bring it into a systems-based report structure. Alteryx provides the blueprint, but GIS now has to take each step and translate it into code.
Rather than have a developer spend valuable time interpreting and recoding the work you have already completed, what if Alteryx had a tool or ready-made output feature that translates your entire workflow into your code of choice (e.g. SQL, Python, VBA, other) that you could just send to GIS for them to implement? Kind of like the macro recording feature in Excel that translates steps into VBA. This would provide a near-immediate solution, reducing the days, weeks or even months of additional development work and capacity for an already under-pressure GIS.
Ah, but you say there's a catch to this idea. Let's say you need an enhancement to the workflow that has been hardcoded in system. You would need to now submit a ticket to GIS for them to develop and update the code, right?
No, just adjust and iterate in workflow, rerun and submit the updated code to GIS.
We are big fans of the In-Database Tools and use them A LOT to speed up workflows that are dealing with large record counts, joins etc.
This is all fine, within the constraints of the database language, but an annoyance is that the workflow is harder to read, and looks messy and complicated.
A potential solution would be to keep the bottom half of the icon all blue as it is now, but have the top half show the originating palette colour for that tool.
i.e. Connect In-DB - Green/Dark Blue
Filter In-DB - Light Blue/Dark Blue
Join In-DB - Purple/Dark Blue
etc.
In-DB workflows would then look as cool as they are!
Thanks
dan
Hi, currently when using the Reporting tools, you need to use the Render tool to produce the output, which makes sense.
However, is there a way to render an output when using a connector tool, e.g. the SharePoint output?
Additional formatting functionality would be great to see in the Interface Designer.
First off, I want to acknowledge other submitted ideas (vote for them too!):
Both of these are great suggestions and I want to show my support for them as well!
To take it another step further than targeted placement or drag-and-drop... I would also like to see new objects included in the ADD menu. We have Groupings, but I'd like to see horizontally split groupings. Meaning, I want the ability to place two Date Inputs next to each other, or short prompts side by side instead of listed vertically.
Example: [image: two Date Inputs placed side by side]
Why this matters: If Alteryx aspires to be a bona fide contender in the Analytic Application space (which I think it is), then we need added functionality that puts a greater emphasis on the user-experience side of things. Because as we know, user acceptance, ease of use, and adoption all depend on a clean presentation of the elements users interact with.
If you agree, your "thumbs up" of support is only one click away!
This would allow for a couple of things:
Set the fiscal year for the data source to a new default.
Allow specific filters on the .tde (we use this for row-level security with our data sources).
Thanks
It doesn't seem that Alteryx is tested with data that isn't on the local hard drive. If my data is located locally, Alteryx works great. If my data is located on a shared server, OMG it takes forever to do anything. Simply clicking off a tool onto the canvas can cause a 30-60 second freeze. I literally spend about 1-3 hours per DAY waiting for Alteryx to simply load a tool view. 2024.2 is the worst so far; I have to wait for it to do anything.
It seems Alteryx is getting worse and worse at processing non-database data that isn't located locally on the hard drive. My idea is to get better at this.
When building join operations in Alteryx, it can be time-consuming to manually scroll through long lists of fields to find the right one to join on, especially when working with large datasets or unfamiliar schemas.
It would be great to have a search-as-you-type filter in the Join tool’s field selection interface. Similar to the existing field selector search, this feature would allow users to start typing a field name and instantly see a filtered list of partial matches. This would significantly speed up the process of identifying and selecting the correct join fields and reduce the risk of selecting incorrect fields due to visual clutter.
In the Table tool, is there a way to edit the bar graph's max and min values using a formula based on table values, rather than a fixed value?
For example, the automatic selection may choose bounds of 0 and 3,324,539 to include all values. Realistically, though, 100% needs to be a specific value from the table, and batch reports make this amount dynamic.
In complex Alteryx workflows, it can be hard to navigate between different Tool Containers, especially when there are dozens spread across a large canvas.
I would love a feature where users could create a 'table of contents' using clickable text or bookmarks at the top of the workflow. For example, clicking a text label named 'Output Calculation' would automatically scroll the view to the Tool Container named 'Output Calculation'.
Suggested implementation ideas:
This would massively improve usability and workflow navigation, especially for large teams or workflows that are shared across departments.
Inspiration:
Tools like Power BI and Tableau allow similar dashboard or bookmark-style navigation. Implementing this in Alteryx Designer would be a game changer.
Currently, this option is available in the SharePoint Input tool, which can output just a list of the files/items found in the specified directory. This is helpful when we need to add comparison logic to avoid re-reading a file that has already been processed (as in a data-copy type of scenario). However, this feature was not included in the other connectors (Azure Data Lake File Input, OneDrive Input, Box Input, etc.).
Additionally, an optional input anchor to feed in a list of files to read would also be extremely helpful, similar to the Dynamic Input tool, and would avoid the need to create a Batch macro to perform this operation.
Currently the Simulation Sampling tool doesn't accept model objects from the time series tools as a model input. It would be beneficial if it could, so one could run simulations from time series output (or a new tool could be built to offer this functionality).
The global constants, specifically the user-defined ones (within the Workflow configuration) are a great tool for making quick changes. I would love to be able to include the value of these constants directly within a comment: the Comment tool, the captions for Tool & Control Containers, or the Annotation of individual tools. My immediate use-case would be to clearly show what the constants are set to, directly on the canvas, though there are certainly a lot of other uses as well.