Hello,
After using the new "Image Recognition Tool" for a few days, I think you could improve it:
> by listing the dimensional constraints next to each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that there is an equivalent number of images for each label),
> finally, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it strictly requires RGB images - see the sketch just below).
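For reference, a grayscale image can be converted to RGB before loading; a minimal sketch using the Pillow library (file names are illustrative):

```python
# Minimal sketch: replicate a grayscale channel into R, G and B so that
# tools which insist on RGB input will accept the image. Pillow assumed;
# file names are illustrative.
from PIL import Image

img = Image.open("mnist_digit.png")   # single-channel ("L") grayscale image
rgb = img.convert("RGB")              # copies the one channel into R, G and B
rgb.save("mnist_digit_rgb.png")       # now accepted as an RGB image
```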
Question: do you plan, in the future, to allow the user to choose between CPU and GPU usage?
In any case, thank you again for this new tool. It certainly has room to improve, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again.
Kévin VANCAPPEL (France ;-))
Hey Designer Gurus + @NicoleJ,
Here's a picture of my canvas (running):
I'd like to be able to see COUNTS and PERCENT completion as the workflow is running. In my case, the numbers are BIG and they are rendered BEHIND the connection lines. In the case of % complete, the numbers obfuscate (fancy term for block) the progress of the tool.
Currently, if I want to watch the water boil, paint dry, or the workflow crawl/walk/run, I must change the workflow before saving it to maximize the distance between the tools. I'd like to be able to see both the COUNTS and % complete without the added effort. My idea is to have someone at Alteryx figure out an enhancement for this without engaging the likes of @Hollingsworth, who'll devise some evil keyboard shortcut.
Cheers,
Mark
Please add support for Databricks' Unity Catalog
Currently, when selecting a Databricks connection in the “Connect In-DB” tool and opening the “Query Builder”, only tables in the catalog named “hive_metastore” are listed. That is, Alteryx submits the following request to Databricks:
Listing tables 'catalog : hive_metastore, schemaPattern : %, tableTypes : null, tableName : %'
However, with Unity Catalog in Databricks the namespace is three-tier and there may be multiple catalogs (and not just the "hive_metastore" catalog), see https://docs.microsoft.com/en-gb/azure/databricks/lakehouse/data-objects#--what-is-a-catalog
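For illustration, outside Alteryx the full list of catalogs is easy to retrieve; a sketch using the databricks-sql-connector package (connection parameters and table names are placeholders):

```python
# Sketch: list Unity Catalog catalogs, then address a table with the
# three-tier catalog.schema.table namespace. Connection details and the
# example table are placeholders, not a working configuration.
from databricks import sql

conn = sql.connect(server_hostname="adb-XXXX.azuredatabricks.net",
                   http_path="/sql/1.0/warehouses/XXXX",
                   access_token="dapiXXXX")
cursor = conn.cursor()
cursor.execute("SHOW CATALOGS")            # not just hive_metastore
for (catalog,) in cursor.fetchall():
    print(catalog)                         # e.g. hive_metastore, main, ...
cursor.execute("SELECT * FROM main.sales.orders LIMIT 5")  # three-tier name
conn.close()
```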
I reached out to Alteryx support, who replied that there is currently a feature request for implementing this change (ID TDCB-4056) and suggested that I also post the idea here.
Thanks in advance.
Hello!
I'm submitting this idea to ask that other products be added to the Alteryx students program; I think that we (students) should have access to study these products (not only the Intelligence Suite, but Server as well).
We see canvasses every day where dozens of fields are brought into a canvas or a macro but never used - and this just creates slowness for no benefit.
Given that one of the selling features of Alteryx is the speed of processing - could we look at three improvements to the Alteryx engine & designer:
We've been looking into the phoneHome information that collects usage of Designer in the enterprise, and it looks like this data sits in the UsageReports collection, I believe.
Please can you add the CanvasFilename that was run to this data - we need to be able to surveil the use of Alteryx in our enterprise that is not happening within the Server environment, and without the canvas name this becomes tremendously difficult.
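To make the gap concrete, here is a hypothetical sketch of reading that collection directly with pymongo; only the collection name comes from the above, while the connection string, database name, and field names are all assumptions:

```python
# Hypothetical sketch: inspect UsageReports documents for a canvas file
# name. Connection string, database name and field names are assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://alteryx-server:27018/")
usage = client["AlteryxGallery"]["UsageReports"]
for doc in usage.find().limit(5):
    # Today there is no canvas/workflow file name to report on:
    print(doc.get("UserId"), doc.get("CanvasFilename", "<not recorded>"))
```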
We frequently have issues where users report slowness from an Alteryx installation on a particular machine; or where a specific tool or package fails to install correctly.
For our admin teams, this becomes a debugging exercise of working through different permutations to understand the cause - and if this is escalated to Alteryx Support, it becomes even tougher.
Could we think about including a basic "Self Diagnostic" in Alteryx which runs through the basic functionalities of Alteryx with some basic timings; checks that Python is working correctly; checks the memory allocation and temporary disk space - and then either persists this to disk and/or sends it to a central environment for analysis?
Given a large deployed environment like ours (over 10 000 seats deployed), self-check telemetry like this would give the central team a massive increase in their ability to manage the deployed base, and at the same time significantly reduce the time to resolve support issues.
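To make the proposal concrete, here's a rough sketch (in Python, purely illustrative - not an Alteryx feature) of the kind of checks such a self-diagnostic could run and persist:

```python
# Rough sketch of a self-diagnostic; checks and names are illustrative.
import json, shutil, subprocess, sys, tempfile, time

def diagnostics():
    report = {}
    # 1. Basic timing: how long does a trivial computation take?
    t0 = time.perf_counter()
    sum(range(10_000_000))
    report["cpu_benchmark_s"] = round(time.perf_counter() - t0, 3)
    # 2. Is the bundled Python working? (here: whatever interpreter runs us)
    result = subprocess.run([sys.executable, "-c", "print('ok')"],
                            capture_output=True, text=True)
    report["python_ok"] = result.stdout.strip() == "ok"
    # 3. Temporary disk space available, in GB
    free = shutil.disk_usage(tempfile.gettempdir()).free
    report["temp_free_gb"] = round(free / 1e9, 1)
    return report

# Persist to disk and/or ship to a central endpoint for fleet analysis
print(json.dumps(diagnostics(), indent=2))
```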
There needs to be a way to step into a macro that is a component of a parent workflow, for debugging.
Currently the only way to debug these is to capture the inputs to the macro from the parent workflow, and then run the macro on those captured inputs. For iterative / batch macros, there is no option to debug at all. This can be tedious, especially if there are a number of inputs, large amounts of data, or nested macros.
There should be an option on the tool representing the macro in the parent workflow to trigger a Debug when running the workflow. This would result in the same behavior as choosing 'Debug' from the interface panel in the macro itself: a new 'debug' workflow is created with the inputs received from the parent workflow.
For iterative / batch macros, the user should also have to specify which iteration / control parameter value the debug is triggered on. So if a macro returns an error on the 3rd iteration, the user ticks 'Debug' and sets Iteration = 3. If the run doesn't reach the 3rd iteration, then no debug workflow is created.
(1) I would like to have more text formatting options available in the Comment Tool, such as:
(2) Option to remove or recolor the blue outline of the comment box. (Especially when I have a comment in a color-filled comment box, I would prefer a comment box without a dark outline.)
(3) UX - Add an arrow cursor to indicate resizing functionality
Hello all,
Some databases, including Hive, natively support scheduled queries (yes, the scheduling configuration lives inside the database, not in the ETL/data-prep system). I think this would be an interesting feature for in-DB workflow output: you run the workflow once and then only have to run it again when it changes; the database does the scheduling.
https://cwiki.apache.org/confluence/display/Hive/Scheduled+Queries
From the intro of that page: "Executing statements periodically can be useful in ..."
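To illustrate the feature, here is a sketch of what an in-DB output could submit, using the PyHive client; the host, table names, and schedule are placeholders, and the CRON string uses the Quartz syntax described on the wiki page above:

```python
# Sketch of database-side scheduling: the schedule lives in Hive itself,
# not in the ETL tool. Host, tables and schedule are placeholders.
from pyhive import hive

conn = hive.connect(host="hive-server", port=10000)
cursor = conn.cursor()
cursor.execute("""
    CREATE SCHEDULED QUERY refresh_mart
    CRON '0 0 6 * * ? *'                 -- every day at 06:00 (Quartz syntax)
    AS INSERT OVERWRITE TABLE mart.daily_sales
       SELECT * FROM staging.sales
""")
```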
Best regards,
Simon
Speed up canvas edits - The Create/Remove Space Tool
Usually by day two of working with a canvas I realize that I have been a fool, and I come up with a significantly more elegant or simpler solution. Moving all of the containers or tools to fit my slick new container is cumbersome and slow. I've created a GIF of a feature several tools have that allows the user to easily move and arrange items on the canvas.
Open source tool used in demo: bpmnJs
Please upgrade the "curl.exe" that is packaged with Designer from 7.15 to 7.55 or greater to allow for the -k flag. Also, please allow the -k functionality in the Alteryx Download tool.
-k, --insecure
(TLS) By default, every SSL connection curl makes is verified to be secure. This option allows curl to proceed and operate even for server connections otherwise considered insecure.
The server connection is verified by making sure the server's certificate contains the right name and verifies successfully using the cert store.
Regards,
John Colgan
Have you ever used a Join tool with several (or many) Join fields, looked at the L and R outputs, and wondered why these records didn't join? When there are many columns in your data, this can be a hard question to answer. It would be very handy if Alteryx could somehow report the field(s) each record failed to join on (perhaps as an optional added field on the L and R outputs).
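To show the kind of diagnostic I mean, here's a sketch of the same question answered by hand in pandas (data and field names are made up): for each unjoined left record, report which individual join fields have no match anywhere on the right.

```python
# Sketch: per-field join diagnostics. A field "fails" here if that single
# value never occurs on the right side at all - a simplification, but it
# usually points at the culprit column.
import pandas as pd

left = pd.DataFrame({"id": [1, 2], "region": ["EU", "US"], "year": [2022, 2023]})
right = pd.DataFrame({"id": [1, 2], "region": ["EU", "APAC"], "year": [2022, 2023]})
keys = ["id", "region", "year"]

merged = left.merge(right, on=keys, how="left", indicator=True)
unjoined = merged[merged["_merge"] == "left_only"]
for _, row in unjoined.iterrows():
    failed = [k for k in keys if row[k] not in set(right[k])]
    print(f"Record id={row['id']}: no right-side match on {failed}")
```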
Preface: I have only used the in-DB tools with Teradata, so I am unsure if this applies to other supported databases.
When building a fairly sophisticated workflow using in-DB tools, the workflow may sometimes fail because the underlying queries run up against CPU / memory limits. This is most common when doing several joins back to back, as Alteryx sends this as one big query with various nested subqueries. When working with datasets in the hundreds of millions and billions of records, this can be extremely taxing for the DB to run as one huge query. (It is possible to get around this by using an in-DB Write Out to a temporary table as an intermediate step in the workflow.)
When a routine does hit an in-DB resource limit and the DB kills the query, Alteryx immediately fails the workflow run. Any "temporary" tables Alteryx creates are in reality permanent tables that Alteryx usually just drops at the end of a successful run. If the run does not end successfully due to hitting a resource limit, these "temporary" (permanent) tables are not dropped. I only noticed this after building out a workflow and running up against a few resource limits; I then started getting database out-of-space errors. Upon looking into it, I found all the previously created "temporary" tables were still there and taking up many TBs of space.
My proposed solution is for Alteryx's in-DB tools to drop any "temporary" tables they have created when a run ends, regardless of whether the entire module finished successfully.
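Expressed as a client-side pattern, the fix is essentially a finally-block; a sketch using the teradatasql driver (the driver choice, table names, and the stand-in workflow function are all assumptions, not Alteryx internals):

```python
# Sketch of the proposed cleanup: whatever happens, drop the intermediate
# tables the run created. All names here are hypothetical.
import teradatasql

TEMP_TABLES = ["ayx_tmp_join1", "ayx_tmp_join2"]   # hypothetical working tables

def run_in_db_steps(cursor):
    """Hypothetical stand-in for the workflow's in-DB steps, which
    create and consume the intermediate tables above."""
    pass

with teradatasql.connect(host="tdprod", user="etl", password="***") as conn:
    cursor = conn.cursor()
    try:
        run_in_db_steps(cursor)           # may die on a resource limit
    finally:
        # Runs on success *and* failure, so no orphaned "temporary" tables
        for table in TEMP_TABLES:
            try:
                cursor.execute(f"DROP TABLE {table}")
            except Exception:
                pass                      # table was never created
```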
Thanks,
Ryan
I have been developing and accumulating custom functions over the years and they have proved to be very useful. I am submitting these here. I hope they are found to be beneficial.
Functions included in the attached file include:
To make these functions available in Alteryx, place the attached XML file in the folder C:\Program Files\Alteryx\bin\RuntimeData\FormulaAddIn if you have a standard installation. If the install is non-standard, find the \bin\RuntimeData\FormulaAddIn folder and place the attached XML file there. Alteryx will need a restart for the functions to become available.
When documenting Alteryx screens, I sometimes hit Print Screen and need to paste important matters into a Comment tool...
But there is no paste from clipboard 😞
I suggest adding a minor icon that enables not only reading from a PNG file but also pasting a screenshot or other image copied directly from memory...
For example, I need to capture the following setting, so I hit Print Screen and capture it as is;
Then I put that into a PNG or JPG file using Paint, and then prepare a comment box with that image in the background...
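For what it's worth, the underlying operation is tiny; here's a sketch with the Pillow library (assuming Windows), which is roughly what the suggested icon would do:

```python
# Sketch: grab an image straight from the clipboard and save it as PNG,
# no Paint detour. Pillow's ImageGrab works on Windows and macOS.
from PIL import ImageGrab

img = ImageGrab.grabclipboard()          # an Image, a list of file names, or None
if img is not None and hasattr(img, "save"):
    img.save("comment_background.png")   # straight from clipboard to PNG
```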
I believe that, in addition to the already suggested idea of having an option to avoid sending one email per record, the attachments capability should be overhauled. Sending multiple attachments in a single email is a common need, but the only Community idea partially addresses the issue by requesting the ability to use semicolon-separated paths in a single field as the attachment criterion. This doesn't seem optimal given the potential usefulness of the tool and ease-of-use considerations.
I think that a full solution should include:
This would be a transformative solution to a common email need, and I think it would be greatly appreciated!
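As a sketch of the requested behavior, here's the multi-attachment pattern using Python's standard library (addresses, server, and file paths are placeholders):

```python
# Sketch: one message, several attachments - the behavior the tool lacks.
import mimetypes, smtplib
from email.message import EmailMessage
from pathlib import Path

msg = EmailMessage()
msg["From"] = "me@example.com"
msg["To"] = "you@example.com"
msg["Subject"] = "Report pack"
msg.set_content("All attachments in a single email.")

for path in [Path("summary.xlsx"), Path("detail.csv")]:   # placeholder files
    ctype, _ = mimetypes.guess_type(path.name)
    maintype, subtype = (ctype or "application/octet-stream").split("/")
    msg.add_attachment(path.read_bytes(), maintype=maintype,
                       subtype=subtype, filename=path.name)

with smtplib.SMTP("smtp.example.com") as smtp:            # placeholder server
    smtp.send_message(msg)
```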
I know that incoming and outgoing connections can be wired and wireless, and that they highlight when one clicks on a tool. However, it would be very useful to be able to highlight a particular connector in a particular colour (selected from a palette, perhaps from the drop-down window or from the configuration). This would be especially useful when there are many connectors originating from a single tool.
Thanks
The behavior of an "Overwrite Sheet (Drop)" configuration is such that it breaks formulas (#REF!) that point to the overwritten sheet, as well as named ranges that reference it. This is a bummer, because the only way I've found to overcome the issue is to write a script that re-applies the named range. This works, but it greatly raises the barrier to using this tool, and in some corporate environments it won't even be possible.
A probably better alternative behavior would be to delete the contents of the sheet rather than the rows/columns/cells of the sheet. I think both probably have valid use cases, but my proposed functionality would cause fewer issues and be the more popular behavior for most users. I believe there is a Google Sheets API call for just this kind of behavior...
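As a sketch of that alternative behavior against an Excel file (using openpyxl; file and sheet names are illustrative): clear the values, keep the sheet, and references to it survive.

```python
# Sketch: clear the sheet's cell values instead of dropping the sheet,
# so formulas and named ranges that reference it stay intact.
from openpyxl import load_workbook

wb = load_workbook("report.xlsx")        # illustrative file name
ws = wb["Data"]                          # illustrative sheet name
for row in ws.iter_rows():
    for cell in row:
        cell.value = None                # contents go; the sheet (and refs) stay
wb.save("report.xlsx")
```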