Hello,
After using the new "Image Recognition Tool" for a few days, I think it could be improved:
> by displaying the image dimension constraints next to each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that each label has an equivalent number of images),
> lastly, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me that it requires RGB images).
Question: do you plan to let the user choose between CPU and GPU usage in the future?
In any case, thank you again for this new tool. It can certainly be improved, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again
Kévin VANCAPPEL (France ;-))
During the design phase, we run some experiments and create tables with Alteryx.
But sometimes, after this phase or after a mistake, we need to drop those tables.
We know it's possible to write a DROP TABLE statement in Pre-SQL or Post-SQL, but that requires SQL skills and can only be done when writing to a table.
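For reference, the current workaround is a Post-SQL statement along these lines (the table name is purely illustrative):
DROP TABLE IF EXISTS staging.design_phase_experiment;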
It would be great if we could drop a table directly in the Query Builder of the Input tool by right-clicking the table in the discovery tree.
Extension: it would also be great to have the same thing in the HDFS browser.
I feel like I must be missing something, but saw a similar suggestion for TDE outputs, so maybe this really doesn't currently exist. We sometimes add descriptions to fields we create, and some inputs come with descriptions, but we can't seem to get them into the final database using the Output tool. Can there be a checkbox to persist the metadata along with the data when writing to a database?
We now have the ability to output to an ESRI File Geodatabase, which is great, but it only allows you to output it to the WGS84 coordinate system. I would like to have the same functionality to export it to other projections or coordinate systems similar to the ESRI Shapefile or ESRI Personal Geodatabase output tools (we specifically need NAD83 but I'm sure others would like other options as well).
It would be great if the "fields from connected tool" option pulled fresh data at runtime when used in the gallery and pulling data from non-interface tools. The external source option doesn't have many settings (i.e. I can just point to one file), whereas the possibilities would be endless if I could use the full suite of tools to create a data set, at runtime, to pass to the list box/dropdown.
I would like a feature that enables joins with custom conditions. Currently, the Join tool only allows joining on equality of specific fields or on record position, whereas in SQL we can join data on much more flexible conditions, for example:
SELECT TableA.id FROM TableA INNER JOIN TableB ON TableA.id=TableB.id and TableA.value > TableB.value
Of course, this can already be achieved with a combination of the Append Fields and Filter tools, but my point is that Append Fields is quite an expensive operation in terms of calculation cost, and it generates many unnecessary records, which is a real problem when handling a huge dataset.
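To make the cost argument concrete, the Append Fields + Filter workaround behaves roughly like the SQL below (reusing the tables from the example above): the full cross product is generated first, and most of those rows are thrown away only afterwards by the filter.
SELECT TableA.id
FROM TableA
CROSS JOIN TableB                    -- Append Fields: every combination of rows is materialised
WHERE TableA.id = TableB.id
  AND TableA.value > TableB.value;   -- Filter: the unnecessary combinations are discarded here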
I suppose this kind of flexible condition could be specified using an expression editor, so the configuration window for this feature might look like the image below: one more radio button option, plus an expression editor similar to the one used in the Filter tool.
Any positive/negative feedback on my idea would be appreciated. Thank you for your attention!
When converting data types while In-DB, it would be really helpful if I could change the data type with the "Select In-DB" tool in a similar manner to the "Select" tool. Currently, we have to use the "Formula In-DB" tool to create a CAST statement.
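For context, the workaround today is a Formula In-DB expression along these lines (the column name and target type are illustrative):
CAST("order_amount" AS DECIMAL(18, 2))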
As a designer, I need to output data only when no data quality errors are encountered within a workflow. Ideally I wouldn't want to see any errors at all, but if I am writing multiple output files and errors are encountered during the output processes (e.g. #3 of 4 fails), then I'm kind of out of luck. So let's focus on data quality: if nulls are encountered in "Actual" data, unjoined records are found, dates are out of range, or any other issue you can name, I don't want to output any data to specific output tools. Workarounds exist. I can output to a staging file and conditionally schedule, or use a conditional runner macro to output to the production data. But what I really want is to stop an output tool from receiving any data to output.
Today I handle this by counting the error records that would be caught by a Test tool and appending the count of these bad records to the data that would go to the output(s). I filter for IsNull([Count]), and only when the Test tool finds 0 errors can data be output. Otherwise the output tool receives null records and quietly makes no changes.
My ask is to configure an output tool to be disabled if ERRORs exist. That means that the LAST thing to happen in the execution of a workflow will be the output processes. They will all be blocking tools and can't happen until there are no tools left to run except for the outputs (configured as blocked). Maybe this is a big ask.
It would be very useful to have event triggers in two different places:
- Similar to Informatica, it would be useful to have event triggers for workflows, specifically "trigger when a file arrives" or "trigger when a value exceeds X"
- It would also be useful to have an event trigger component with an input, so that we can use semaphore-type flags to control sequencing in complex sets of flows. For example:
- When the ETL is done, mark the "Completed" flag as true
- The reporting job is running, waiting for the "Completed" flag before it finishes
Overall, it would be useful for Alteryx to have event-driven triggers.
Would love to see a tool that allows you to find the Top N, Bottom N%, etc. in a single step, rather than the current common practice of using 2-3 tools to accomplish this simple task. It's possible some or all of this functionality could be added by simply expanding the current Sample tool to include more options, or at least by mirroring the configuration of the Sample tool in the creation of a new "Top/Bottom Tool."
For example, let's say I wanted to find the top 5 student grades, and then compare all scores to those top 5 grades. I would currently need to do something along the lines of Sort descending (and/or Summarize Tool, if grouping is needed) + Sample Tool (First N Records) + Join the results back to the data. That's anywhere from 3-4 tools to accomplish a simple task that could potentially be done with 1-2.
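For comparison, here is a rough SQL sketch of that grades example, assuming a hypothetical Scores table with student and grade columns and a database that supports LIMIT: the top 5 grades are taken first, then every score is compared against the lowest of them.
WITH top5 AS (
    SELECT grade
    FROM Scores
    ORDER BY grade DESC
    LIMIT 5                                          -- the Sort + Sample (First N Records) steps
)
SELECT student,
       grade,
       grade - (SELECT MIN(grade) FROM top5) AS gap  -- the "join back and compare" step
FROM Scores;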
I'm envisioning this working somewhat like the Top/Bottom rules in Excel Conditional Formatting (see below), and similar to some of the existing options in the Sample Tool (also see below). For example, rather than only being able to select the First N Records in the Sample Tool, I could indicate that I want to select the Top N Records, or the Bottom N% Records. This would prevent the additional step of having to group/sort your data before using the Sample Tool, especially in cases where you're then having to put your records back into their original order rather than leaving them in their grouped/sorted state. You'd still want to have the option of choosing grouping fields if desired. You would also need to have a drop-down field to indicate which field to apply the "Top/Bottom rules" to.
A list of potential "Top/Bottom" options that I believe would be great additions includes:
The value added with just the options above would be huge in helping to streamline workflows and reduce unnecessary tools on the canvas.
Hi to all,
I have seen one or two posts requesting the ability to total up rows and/or columns of numbers; however, this idea also requests the ability to subtotal data by a field and to produce an overall total.
This could be an extension to existing tools such as 'Summarise' and 'Cross Tab', or it could be a stand-alone tool. The desired output of a tool like this would include subtotals for each group plus an overall total, something like the sketch below:
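As a rough illustration, GROUP BY ROLLUP in SQL produces per-group subtotals plus a grand-total row; a hypothetical Sales table with region and amount columns is assumed, on a database that supports ROLLUP.
SELECT region,
       SUM(amount) AS total_amount    -- one subtotal row per region, plus a final grand-total row
FROM Sales
GROUP BY ROLLUP (region);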
This would be incredibly useful for building reports within Alteryx as well as for analysing the data, and it would cut down the number of tools currently required to produce this. I have seen a third-party tool which does some of this, but this idea adds the ability to subtotal.
thanks - Roger
Hey all,
I would love to be able to have an interface tool that allows a user to search through drop down values (when there are more than 100 or so) similar to autocomplete. It would be helpful as a multiselect or single select drop down. I have inserted a very poorly mocked up picture below. It would essentially be a modified version of the drop down as all the values would be in the tool, but the user could type to find what they are looking for.
Currently the Cross Tab tool automatically sorts columns alphabetically by the "New Column Headers" field. Often I have to output data with dates across the columns and therefore use a cross tab to achieve this. The problem is that when the dates are formatted with month names, the Cross Tab sorts them in alphabetical order instead of date order (i.e. Apr, Aug, Dec, etc. vs Jan, Feb, Mar). To get around this issue, I have to use a Dynamic Rename tool. It would be great if there were a way to choose the column order of the Cross Tab (i.e. the order of the data going into the Cross Tab, another field, etc.).
Hi,
With multiple Workflows open, I'd like to be able to grab one of the Workflow tabs and drag it out onto the desktop. This would then open a new Alteryx window containing the Workflow that was pulled out, just like when you have multiple tabs open in I.E. and you drag a tab out and drop it on the desktop: another I.E. window opens with the tab you dragged out in it.
This would be handy because I'm often wanting to copy/paste tools, formulas, etc. and it would be nice to do that w/o flipping from one tab to another.
I know I can right-click and open another Alteryx but when opening several - they all open in the same one.
Thanks,
Brad
The Undo button in Alteryx has saved me many times! Unfortunately, I never know what all was "undone" when I click the button. It would be nice to update the Undo process in 2 ways:
Hi Everyone,
Many workflows I work with, along with those of my colleagues, use big databases in order to get some data. After a few steps downstream and some testing, we normally just add an output and then open that data in a new workflow to save time running the original workflow. Not that this is much of a burden, but I am used to copying and pasting tools from workflow A to workflow B, and you can't do that with the Output tool, because in workflow B the output needs to be converted to an input. I just think it would be a cool added feature if possible. Anyone else agree?
Thank you,
Justin
When bringing data together, it is often necessary to assign a source to the data. Generally this happens when you union data and later need to know where each record came from for context. It would save time if a source field were generated automatically based on the input connections of the Union tool. Perhaps when unioning data you could assign a name to each input stream?
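As a rough SQL analogy of what is being asked for (table and field names here are purely illustrative), this is the kind of source tagging the Union tool could generate automatically:
SELECT 'North' AS source, n.* FROM SalesNorth n   -- each input stream is tagged with a name
UNION ALL
SELECT 'South' AS source, s.* FROM SalesSouth s;  -- so every record keeps track of where it came from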
With the number of users that rely on the Publish to Tableau Server macros to automate workflows into Tableau, I think it's about time we had a native tool that publishes to Tableau, instead of the rather painful exercise of figuring out which version of the macro we are using and which version of Tableau Server we are publishing to. The current process is inefficient and frustrating when the server changes on either the Tableau or the Alteryx side.
Please extend the Workflow Dependencies functionality to include the dependencies of macros used in the workflow too. Currently, macros are simply marked as dependencies themselves, but the underlying dependencies (e.g. data sources) of these macros are not included.
We have a large ETL process developed with Alteryx that applies several layers of custom and complex macros and several data sources referenced using aliases. Currently the process is deployed locally (non-server) and executed ad-hoc, but will be moved to the server platform at some point.
Recently I had to prep an employee for running the process. This requires creating aliases and associated connections and making sure that access to needed network locations is in place (storing macros, temp files, etc.). Hence I needed to identify all aliases and components/macros used. As everything is wrapped nicely by a single workflow, I hoped that the workflow dependencies functionality would cover dependencies in the macro nodes within, but unfortunately it didn't and I had to look through the dependencies of 10-15 macros.
Hi! I noticed that there is currently no way to use the Debug function when working on an analytic app workflow that contains Control Containers. I'm running 2024.1, and in workflows that do not yet have Control Containers I use the Debug feature to troubleshoot when data changes in a dynamic workflow. Currently, when running in test mode, I have no way to review the data step by step in the flow when it is selected dynamically through the interface apps. I can only view the final output and make tweaks.