This is a QoL-request, and I love me some QoL-updates!
While I'm developing I often need the output of a workflow as input for the next phase of my development. For example: an API run returns a job location, status, and authentication IDs. I want to use these in a new workflow to start experimenting with what'll work best. Because of the experimenting part, I always do this in a new workflow rather than caching and continuing in my main flow.
Writing a temporary output file always feels like an unnecessary step, and tbh I don't want to write a file for a step that'll be gone before it reaches production. Especially if there is sensitive information in it.
I often need to create a record ID that increments automatically but is grouped by a specific field. I currently do this with the Multi-Row Formula tool using [Row-1:ID] + 1, because there is no Group By option in the Record ID tool.
Also, sometimes I need the ID to start at 0, but the Multi-Row Formula tool doesn't allow this, so I have to add a Formula tool right after it to subtract 1.
So adding a Group By option to the Record ID tool would remove the need for the Multi-Row Formula workaround and let the user start at any value they want.
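For context, here is a minimal Python/pandas sketch of the behaviour I'm after (the column and group names are made up for illustration); a Group By option on the Record ID tool would do the same thing natively:

```python
import pandas as pd

# Hypothetical data: one row per job, grouped by "customer"
df = pd.DataFrame({
    "customer": ["A", "A", "B", "A", "B"],
    "job":      ["j1", "j2", "j3", "j4", "j5"],
})

start = 0  # any starting value, e.g. 0 instead of the default 1

# Incrementing ID that restarts for each group, like Record ID + Group By would
df["ID"] = df.groupby("customer").cumcount() + start

print(df)
#   customer job  ID
# 0        A  j1   0
# 1        A  j2   1
# 2        B  j3   0
# 3        A  j4   2
# 4        B  j5   1
```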
As each version of Alteryx is rolled out, it would be much easier for our users and admin team to validate the new version if Alteryx allowed parallel installs of multiple versions of the software.
For example, our team is currently on 11.3; if we could roll out 11.5 in parallel, we could very easily let users revert to 11.3 if there are issues, or remove 11.3 after 2-3 weeks if there are none.
When creating a workflow I generally open a "TEMPLATE" first and then immediately save it as the "NEW WORKFLOW NAME". My template includes all the preferences that aren't covered by the user settings and therefore won't get reset by them either: a comment box, containers, logos, and copyright notices. It would be nice to have ready access to this as a built-in feature. Others may have standards that they want applied to all users and their workflows too.
I'm really liking the new assisted modelling capabilities released in 2020.2, but it should not error if the data contains spatial, blob, date, or datetime types.
This essentially forces the user to add a Select tool before the assisted modelling tool and then a Join after the models. I think the tool should be able to read these field types in and pass them through (especially dates) and simply not use them in any of the modelling.
An even better enhancement would be to transform dates, as part of assisted modelling, into features usable for modelling (season, month, day of week, etc.).
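For example, this is the kind of date transformation I have in mind, as a rough Python/pandas sketch (the column name is made up, and this is not a claim about the tool's actual internals):

```python
import pandas as pd

# Illustrative data with a date column the modelling step currently rejects
df = pd.DataFrame({"order_date": pd.to_datetime(["2020-01-15", "2020-07-04", "2020-11-30"])})

# Derive model-friendly features instead of erroring on the field
df["month"]       = df["order_date"].dt.month
df["day_of_week"] = df["order_date"].dt.dayofweek       # 0 = Monday
df["season"]      = df["order_date"].dt.month % 12 // 3  # 0=winter, 1=spring, 2=summer, 3=autumn

print(df)
```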
After using the new "Image Recognition Tool" for a few days, I think you could improve it:
> by listing the image-dimension constraints next to each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so there is an equivalent number of images for each label),
> at the very least, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it requires RGB images); a possible workaround is sketched below.
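As a workaround for the RGB-only limitation (and to show what I mean), this is roughly how a grayscale image such as an MNIST digit can be converted to 3-channel RGB before feeding it in; the file names are just placeholders:

```python
from PIL import Image

# Placeholder path to a grayscale image (e.g. an MNIST digit exported as PNG)
img = Image.open("digit.png").convert("L")   # ensure single-channel grayscale

# Duplicate the single channel into R, G and B so RGB-only tools accept it
rgb = Image.merge("RGB", (img, img, img))
rgb.save("digit_rgb.png")
```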
Question: will you, in the future, allow the user to choose between CPU and GPU usage?
In any case, thank you again for this new tool. It can certainly still be improved, but it is very simple to use, and I sincerely think it will help a greater number of people understand the many use cases made possible by image recognition.
Can we have a tool that optimizes another tool's configuration based on an output target? For example, optimize the fuzzy matching setup to find the tool configuration that yields the best matching score for a given data set.
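A minimal Python sketch of what I mean (the scoring function and parameter names are invented for illustration): try candidate configurations and keep the one with the best score against the target.

```python
from itertools import product

def matching_score(threshold, algorithm, data):
    """Placeholder: run the tool with this configuration and measure
    how well its output matches the known/desired result."""
    # ... run fuzzy matching on `data` and compare to known matches ...
    return 0.0  # stub value

data = []  # the data set to tune against

# Candidate configurations to try
thresholds = [0.70, 0.80, 0.90]
algorithms = ["levenshtein", "jaro_winkler"]

best = max(
    product(thresholds, algorithms),
    key=lambda cfg: matching_score(cfg[0], cfg[1], data),
)
print("Best configuration:", best)
```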
During my trial of Assisted Modelling I've enjoyed how well guided the process is; however, I've come across one area for improvement that would help those (including myself) overcome a hurdle when getting started.
When I ran my first model, I was presented with an error stating that certain fields had classes in the validation dataset that were not present in the Training dataset.
Upon investigation (and the Alteryx Community!) I discovered that this was due to a step in the One Hot Encoding tool.
Basically, the default setting is for all fields to error at the step that deals with values not present in the training dataset, but there is an option to ignore these scenarios instead.
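For reference, the equivalent choice in scikit-learn's one-hot encoder (just an analogy, not Alteryx's internals) is a single parameter:

```python
from sklearn.preprocessing import OneHotEncoder

X_train = [["red"], ["blue"]]
X_valid = [["red"], ["green"]]   # "green" never appears in training

# handle_unknown="error" (the default) raises on unseen classes;
# handle_unknown="ignore" encodes them as all-zero columns instead.
enc = OneHotEncoder(handle_unknown="ignore").fit(X_train)
print(enc.transform(X_valid).toarray())   # [[0. 1.]
                                          #  [0. 0.]]
```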
Add an additional step to Assisted Modelling that gives the user the option to Ignore / Error as they see fit.
If this were implemented, it would remove the only barrier I could find in Assisted Modelling.
Hope this is useful and happy to provide further context / details if needed.
Assisted Modeling is a great idea, but right now it's a bit inflexible.
IMHO the greatest strength is the semi-automated transform tool, which would be extremely helpful on its own.
What would be great is the possibility of using these features as single tools with minimal configuration, without having to go through the Assisted Modeling wizard, so they could serve as an automated system for quickly choosing variables.
This way we would have a nearly perfect rapid-prototyping tool for machine learning tasks, leaving more freedom in modeling and making it easy for less experienced analysts to find which variables they should focus on.
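As a rough illustration of the kind of standalone "choose variables for me" step I mean (a Python sketch with toy data, not Alteryx's implementation):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Toy data standing in for a prepared data stream with many candidate variables
X, y = make_classification(n_samples=200, n_features=10, n_informative=3, random_state=0)

# Rank variables with a simple univariate test and keep the strongest ones
selector = SelectKBest(score_func=f_classif, k=3).fit(X, y)
print("Selected feature indices:", selector.get_support(indices=True))
```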