Hello,
After using the new "Image Recognition Tool" for a few days, I think it could be improved:
> by adding the dimensional constraints next to each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that there is an equivalent number of images for each label),
> finally, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it strictly requires RGB images); a possible workaround is sketched below.
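In the meantime, a possible workaround for the RGB limitation is to pre-convert the grayscale images to 3-channel RGB before loading them. A minimal sketch, assuming the Pillow library and purely illustrative folder names:

    # Workaround sketch: convert grayscale images to 3-channel RGB so they can be
    # fed into a tool that only accepts RGB input. Folder names are illustrative.
    from pathlib import Path
    from PIL import Image

    src_dir = Path("mnist_grayscale")   # input folder of grayscale PNGs
    dst_dir = Path("mnist_rgb")         # output folder for converted images
    dst_dir.mkdir(exist_ok=True)

    for img_path in src_dir.glob("*.png"):
        img = Image.open(img_path).convert("RGB")  # replicate the single channel across R, G, B
        img.save(dst_dir / img_path.name)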
Question: will you allow the user to choose between CPU and GPU usage in the future?
In any case, thank you again for this new tool. It can certainly still be improved, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again
Kévin VANCAPPEL (France ;-))
Currently, if a user has multiple connections in a workflow that connect to a password-protected source, and that password changes, the user will be locked out of their account by repeated login attempts as Alteryx attempts to validate each connection.
Today I had to manually edit the XML of another user's workflow in order to remove references to their server, so they could correct their password without locking the account for a third time today.
While I understand that aliases are a good workaround to this problem, the issue still has potential to occur.
Having an option to load a workflow in a "SECURE" or "SAFE" mode, where it would not validate a query until runtime or until the metadata is refreshed manually, would significantly reduce lockouts and improve the usability of the tool.
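To illustrate what a "SAFE" mode could mean in practice, here is a minimal sketch of a connection that defers any login attempt until a query actually runs; the class, method names, and pyodbc usage below are purely illustrative, not how Designer works internally:

    # Sketch of a "SAFE mode" connection: credentials are stored when the workflow
    # is opened, but no login attempt is made until a query actually runs.
    class SafeConnection:
        def __init__(self, dsn, user, password):
            self.dsn, self.user, self.password = dsn, user, password
            self._conn = None  # nothing is validated at load time

        def _connect(self):
            # In SAFE mode this is the first (and only) point where a login attempt
            # can fail and count against the account's lockout threshold.
            import pyodbc  # assumption: an ODBC source
            self._conn = pyodbc.connect(
                f"DSN={self.dsn};UID={self.user};PWD={self.password}")

        def run_query(self, sql):
            if self._conn is None:  # connect lazily, at runtime or on a manual metadata refresh
                self._connect()
            return self._conn.execute(sql).fetchall()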
I understand that Server and Designer + Scheduler versions have the option to "cancel workflows running longer than X".
I'd like to see that functionality in the desktop edition as well.
Limit conversion warning allows for a minimum of 1 message. Can we set the minimum to 0 to completely ignore the message?
Perhaps warning messages could be given similar functionality to ERROR messages, allowing the designer to Ignore, Warn, or Cancel?
ConvError: Imputation (441): Tool #104: No demand: 0.200000000000031 had more precision than a double. Some precision was lost.
ConvError: Summarize (456): Data: 0.360000000004675 had more precision than a double. Some precision was lost.
End: Designer x64: Finished running FP Model - Marquee Crew v3.yxmd in 32.3 seconds with 16 field conversion errors and 4 warnings
Thanks,
Mark
Idea:
I know cache-related ideas have already been posted (cache macros; cache tools), but I would like it if cache were simply built into every tool, similar to the way it is on the Input Tool.
Reasoning:
During workflow development, I'll run the workflow repeatedly, and especially if there is sizeable data or an R tool involved, it can get really time-consuming.
Implementation ideas:
I can see where managing cache could be tricky: in a large workflow processing a lot of data, nobody would want to maintain dozens of copies of that data. But there may be ways of simply monitoring changes to the workflow to know whether something needs to be rebuilt. E.g., suppose I cache a Predictive tool and then make no changes to any tool preceding it in the workflow; the next time I run, the engine should be able to look at "cache flags" and/or "modified tool flags" to determine where it should start: basically, start at the "furthest along cache" that has no "modified tools" preceding it.
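A minimal sketch of that "furthest along cache" rule, assuming the engine exposed per-tool cached/modified flags and upstream links (every name below is hypothetical):

    # Sketch: pick the furthest-downstream cached tool that has no modified tool
    # anywhere upstream of it; the engine would resume the run from there.
    def upstream(tool, parents):
        """All tools preceding `tool`, following the parent links."""
        seen, stack = set(), list(parents.get(tool, []))
        while stack:
            t = stack.pop()
            if t not in seen:
                seen.add(t)
                stack.extend(parents.get(t, []))
        return seen

    def resume_point(order, parents, cached, modified):
        """`order` is the tools in execution order; return the best cache to resume from."""
        best = None
        for tool in order:
            if tool in cached and not (upstream(tool, parents) & modified):
                best = tool  # keep the furthest-along valid cache
        return best

    # Example: Input -> Formula -> Predictive; the Predictive tool is cached and
    # nothing before it changed, so the run could start from its cache.
    parents = {"Formula": ["Input"], "Predictive": ["Formula"]}
    print(resume_point(["Input", "Formula", "Predictive"], parents,
                       cached={"Predictive"}, modified=set()))  # prints: Predictive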
Anyway, just a thought.
I've seen this question before and have run into it myself. I'd like to see a new tool that would allow a developer (of a workflow) to choose a path of logic based upon criteria known only during the execution of a module.
IF LEFT INPUT record count < 10,000 THEN Path 1 (e.g. use a Calgary Join)
ELSE Path 2 (e.g. use a standard Join)
ENDIF
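For illustration only, the intended decision could be expressed as a tiny function (hypothetical names, not an Alteryx API):

    # Sketch of the branching logic the proposed tool would expose.
    def choose_path(left_record_count, threshold=10_000):
        if left_record_count < threshold:
            return "Path 1"  # e.g. a Calgary Join on the small input
        return "Path 2"      # e.g. a standard Join

    print(choose_path(2_500))   # prints: Path 1
    print(choose_path(50_000))  # prints: Path 2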
Thanks,
Mark
I just noticed, in a workflow I'm looking at, that I derived a column but forgot about it after a bit of development, so there it sat, unused. It doesn't hurt anything, but it would be useful if that sort of thing automatically generated a soft warning on the tool in question: e.g., any field not referenced downstream generates an "Unused variable" warning.
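The check itself would be simple; a minimal sketch, assuming we could list the fields each tool creates and the fields referenced anywhere downstream of it (the inputs here are hypothetical):

    # Sketch of an "Unused variable" check: warn on any field a tool creates that
    # is never referenced by a downstream tool.
    def unused_fields(created_by_tool, referenced_downstream):
        warnings = {}
        for tool, fields in created_by_tool.items():
            unused = fields - referenced_downstream.get(tool, set())
            if unused:
                warnings[tool] = unused
        return warnings

    created = {"Formula (12)": {"Margin", "ScratchCol"}}
    referenced = {"Formula (12)": {"Margin"}}
    print(unused_fields(created, referenced))  # prints: {'Formula (12)': {'ScratchCol'}}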
In order to perform audit-trail logging, it would be valuable to have 2 new capabilities:
a) Environment variables which expose the workflow name, file path, version, run start date and time, etc. For any workflows we build, we need a solid audit trail to be SOX compliant, so having this detail available as a data field to write and manipulate is essential.
b) A logging component. What would be great is a component that you can drop on a workflow, not connected to anything, which traps the start, end, runtime, version, etc. of a workflow and commits this to any output data format (CSV, ODBC, etc.). This logging tool would need to capture the full runtime, so it would need to be the last thing that runs (which means it may need to exist in parallel to the main workflow in some way). This is not currently possible with a complex workflow with outputs, because there is no way to identify when the entire workflow ended or what its runtime was (output tools don't have an onward connector to pass flow-of-control on, so there is nothing to catch the final end time).
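As a rough illustration of the audit row such a logging component could commit, here is a sketch that appends one record per run to a CSV; the workflow name, path, and version would ideally come from the environment variables proposed in (a), and everything below is hypothetical:

    # Sketch of the audit record the logging component would write per run.
    import csv
    import datetime
    from pathlib import Path

    def log_run(workflow_name, workflow_path, version, start, end, status,
                log_file="workflow_audit_log.csv"):
        row = {
            "workflow": workflow_name,
            "path": workflow_path,
            "version": version,
            "start": start.isoformat(),
            "end": end.isoformat(),
            "runtime_seconds": round((end - start).total_seconds(), 1),
            "status": status,
        }
        is_new_file = not Path(log_file).exists()
        with open(log_file, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(row))
            if is_new_file:
                writer.writeheader()
            writer.writerow(row)

    start = datetime.datetime.now()
    # ... the workflow would run here ...
    log_run("example_workflow.yxmd", r"C:\workflows", "v1",
            start, datetime.datetime.now(), "Success")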
Again, both of these are necessary to meet audit requirements for workflows and production-quality ETLs for BI data warehouses.
It would be helpful to have the Read Uncommitted option listed as a global runtime setting.
Most of the workflows I design need this set, so rather than risk forgetting to click this option on one of my inputs, it would be beneficial to have it as a global setting.
For example, the user would be able to set specific inputs according to their needs while the check box on the global runtime setting remains unchecked.
However, if the user checked the global runtime setting for Read Uncommitted, then the whole workflow would automatically use an uncommitted read on all of the inputs.
When the user unchecks the global runtime setting for Read Uncommitted, only the inputs that were individually set up with this option will keep using the uncommitted read.
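To make the intended precedence explicit, here is a tiny sketch of the rule described above (the flags are stand-ins, not the actual runtime configuration):

    # Precedence sketch: the global Read Uncommitted flag overrides every input
    # while checked; unchecking it falls back to each input's own setting.
    def effective_read_uncommitted(global_flag, input_flag):
        return True if global_flag else input_flag

    print(effective_read_uncommitted(global_flag=True,  input_flag=False))  # prints: True
    print(effective_read_uncommitted(global_flag=False, input_flag=True))   # prints: True
    print(effective_read_uncommitted(global_flag=False, input_flag=False))  # prints: False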
Please evaluate the option to add 2 new containers:
1. Parallel - execute the tasks inside in parallel.
2. Serial - execute the tasks in strict order, imposed at design time. In the future, the order of operations could be enforced by parameters or other input conditions at runtime.
Please give us the ability to mix and match these two containers.
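A rough sketch of the two execution modes, and how they could nest, using plain Python tasks as stand-ins for the containers' contents (purely illustrative, not an Alteryx API):

    # Sketch: a "serial" container runs tasks in design-time order, a "parallel"
    # container runs them concurrently, and the two can be nested.
    from concurrent.futures import ThreadPoolExecutor

    def run_serial(tasks):
        for task in tasks:
            task()

    def run_parallel(tasks):
        with ThreadPoolExecutor() as pool:
            for future in [pool.submit(task) for task in tasks]:
                future.result()  # propagate any failure

    def step(name):
        return lambda: print(f"running {name}")

    # Mix and match: two extracts in parallel, then a load that must come last.
    run_serial([
        lambda: run_parallel([step("extract A"), step("extract B")]),
        step("load warehouse"),
    ])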
Thank you
Regards,
Cristian.
Hi all,
I was wondering if any of you have achieved a "transaction rollback" type of feature in Alteryx.
Following is the use case:
If a workflow that writes data into multiple outputs (which could be relational tables or files) fails halfway through writing to one of the outputs, is there an option to roll back the partially loaded data and reset the process to its original state (i.e., before the execution of the workflow), or does this need to be done programmatically?
There is a workflow-level property, "Cancel Running Workflow on Error". This stops the execution but doesn't perform a rollback.
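For relational outputs, what is being asked for is essentially one transaction spanning all of the writes; a minimal sketch of doing that programmatically today, using sqlite3 purely as an example target and illustrative table names:

    # Sketch: write to multiple tables in one transaction so a failure halfway
    # through rolls everything back to the pre-run state.
    import sqlite3

    conn = sqlite3.connect("warehouse.db")  # illustrative target
    try:
        with conn:  # the connection context manager commits on success, rolls back on error
            conn.execute("INSERT INTO sales_stage  SELECT * FROM sales_new")
            conn.execute("INSERT INTO orders_stage SELECT * FROM orders_new")
    except sqlite3.Error:
        print("Load failed; both inserts were rolled back.")
    finally:
        conn.close()

File outputs have no equivalent safety net, which is part of why a built-in rollback or cleanup option would be so valuable.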
Thanks,
Sandeep.
The Excel driver (.xlsx) converts these #N/A values to 0. If you use the legacy Excel driver (.xlsx), it brings in the #N/A values. This issue was reported in the community, and I am forwarding it here as a New Idea on behalf of @JohnDoe, as it is a problem that needs to be addressed.
In the case of a system crash or upgrade, or when transferring an Alteryx license from one system to another or from one user to another, the user should be able to surrender / borrow / transfer the license from one machine to another. This would allow for more flexible use of the product.
Preface: I have only used the in-DB tools with Teradata, so I am unsure whether this applies to other supported databases.
When building a fairly sophisticated workflow using in-DB tools, the workflow may sometimes fail because the underlying queries run up against CPU / memory limits. This is most common when doing several joins back to back, as Alteryx sends this as one big query with various nested sub-queries. When working with datasets in the hundreds of millions and billions of records, this can be extremely taxing for the DB to run as one huge query. (It is possible to get around this by using an in-DB write out to a temporary table as an intermediate step in the workflow.)
When a routine does hit an in-DB resource limit and the DB kills the query, Alteryx immediately fails the workflow run. Any "temporary" tables Alteryx creates are in reality permanent tables that Alteryx usually just drops at the end of a successful run. If the run does not end successfully because it hit a resource limit, these "temporary" (permanent) tables are not dropped. I only noticed this after building out a workflow and running up against a few resource limits; I then started getting database out-of-space errors. Upon looking into it, I found that all the previously created "temporary" tables were still there and taking up many TBs of space.
My proposed solution is for Alteryx's in-DB tools to drop any "temporary" tables they have created when a run ends, regardless of whether the entire module finished successfully.
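The requested behaviour is essentially a try/finally around the run; a rough sketch with a generic DB-API connection (the connection, steps, and table names are all illustrative):

    # Sketch of the requested cleanup: intermediate ("temporary") tables created
    # during an in-DB run are dropped even if the run fails on a resource limit.
    def run_indb_workflow(conn, steps, temp_tables):
        cur = conn.cursor()
        try:
            for sql in steps:
                cur.execute(sql)        # any step may be killed by the DB's CPU / memory limits
        finally:
            for table in temp_tables:   # always clean up, success or failure
                try:
                    cur.execute(f"DROP TABLE {table}")
                except Exception:
                    pass                # the table may not exist yet; keep cleaning up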
Thanks,
Ryan
Like many of you, I have a lot of modules and macros ... and growing. I keep them fairly organized in different folders and subfolders, but sometimes I can't find that particular module I was working on weeks ago ... and I need to get to it now. At the moment I end up doing an advanced search in Windows Explorer by date, maybe looking for certain keywords.
It would be nice to keep track of them in Alteryx: add tags, customer names, and departments to modules (in the Meta Info tab)?
Maybe a special container/GUI with a timeline could read the Meta Info tab so you can more easily find that one module/macro.
Also, another GUI containing tool name tags would let you easily find all modules that use that one tool you're looking for.
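In the meantime, something close can be scripted because modules are stored as XML; a rough sketch that scans a folder of .yxmd files and indexes whatever metadata it can find (the element and attribute names below are assumptions and may not match the actual schema):

    # Sketch: build a searchable index of workflow metadata by reading the XML of
    # .yxmd files. Element/attribute names here are assumptions.
    from pathlib import Path
    import xml.etree.ElementTree as ET

    def index_workflows(folder):
        index = {}
        for path in Path(folder).rglob("*.yxmd"):
            try:
                root = ET.parse(path).getroot()
            except ET.ParseError:
                continue
            meta = root.find(".//MetaInfo")  # assumed element name
            description = meta.findtext("Description", default="") if meta is not None else ""
            tools = {node.get("Plugin", "") for node in root.iter("GuiSettings")}  # assumed attribute
            index[str(path)] = {"description": description, "tools": tools}
        return index

    # Example: find every module that uses a given tool plugin name.
    # for path, info in index_workflows(r"C:\alteryx\modules").items():
    #     if any("Join" in t for t in info["tools"]):
    #         print(path)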