Hello,
After using the new "Image Recognition Tool" for a few days, I think it could be improved:
> by listing the dimensional constraints alongside each of the pre-trained models,
> by adding a proper tool to split the training data correctly, so that there is an equivalent number of images for each label (see the sketch after this list),
> lastly, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me that it requires RGB images).
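For the second point, a minimal sketch in Python with pandas of what such a balancing step might do (the image paths and labels are illustrative assumptions, not part of the tool):

```python
import pandas as pd

# Illustrative manifest of training images (paths and labels are made up).
labels = pd.DataFrame({
    "path":  ["img0.png", "img1.png", "img2.png", "img3.png", "img4.png"],
    "label": ["cat", "cat", "cat", "dog", "dog"],
})

# Keep an equivalent number of images per label by sampling down
# to the size of the smallest class.
per_class = labels.groupby("label").size().min()
balanced = labels.groupby("label", group_keys=False).sample(n=per_class, random_state=0)

print(balanced["label"].value_counts())  # every label now has the same count
```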
Question: will you allow the user to choose between CPU and GPU usage in the future?
In any case, thank you again for this new tool. It is certainly perfectible, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again
Kévin VANCAPPEL (France ;-))
I was setting up a rather large set of repetitive filters and formulas, and when I was done, I wanted to select the output of each tool all at once to drag them into a Union tool. It would be great if you could hold the Control key to select multiple outputs and drag them to the next tool at the same time.
Ability to 'name' the point created in the "Create Points" tool, instead of sticking a Select tool after it to rename it from 'Centroid' to 'Starting Location', 'Store Location', or whatever.
Parquet is a very fast, efficient, and widely used data format. Currently only the Parquet compression algorithms below are supported, so we cannot use Alteryx to read Parquet files generated by other processes. This limits our usage of Alteryx.
Read support: Snappy and Gzip compression algorithms.
It would be great for Alteryx to support all Parquet compression formats so we can maximize the use of Alteryx in data analysis.
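For illustration, a minimal sketch (assuming pyarrow is installed; the file names are made up) of how other processes commonly produce Parquet files with codecs beyond the two currently readable:

```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"id": [1, 2, 3], "value": ["a", "b", "c"]})

# zstd and brotli are common codecs outside the current Snappy/Gzip read support.
pq.write_table(table, "data_zstd.parquet", compression="zstd")
pq.write_table(table, "data_brotli.parquet", compression="brotli")

# Files like these read fine with pyarrow, but not with the Alteryx Parquet input today.
print(pq.read_table("data_zstd.parquet").to_pandas())
```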
An iterative macro generally has two output anchors: one is for the iteration, and it normally has no output (whether or not there is an error).
It would be good to have an option to hide this anchor when using the macro in a workflow,
so other users don't need to work out which anchor is the true output and which one is just for the iteration.
Additionally, it would be helpful if this could also apply to input anchors.
(I just built a macro where I don't need the standard input, but I do need the iteration input.)
Alteryx gods,
It would make me even happier than I am now if it were possible to tailor the completion messaging in the Interface Designer when an analytic app completes.
Currently, we use the Render tool etc., but sometimes we simply want to be able to create a bespoke completion message.
My example is as follows:
In the app you have the option to download files or have them emailed to you. If you choose download, the final display is the Render tool with the documents listed; however, if you choose email, I want nothing to show but the final window with a message like "Please check your email." There may be more than one option, so being able to dynamically change these messages would be very useful.
Help me Alteryx gods, you're my only hope.
*beep boop boop*
The basic premise is this:
Phantom spacing: something that looks like it has spaces in Excel but is actually formatted as an indentation.
Unfortunately, to read the indentation we would need either VBA prep work or to read the XML inside the file, and the latter is difficult.
As for VBA, the general steps are to create an indentation formula in order to see the indent numbers, then go from there. The idea is credited to @clmc9601, as we discussed privately.
As of now, I do not see any way to do this in Alteryx as a function or even an expression. It would be very helpful, especially when reading trial balances or Bloomberg outputs, since they are formatted with indentation (one possible workaround is sketched below).
Reading indentation from Excel or any other file within Alteryx would be much appreciated, especially in actuarial and finance spaces.
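In the meantime, a minimal sketch of the XML route, assuming the Python tool with openpyxl is available (the file name and column are illustrative): Excel stores the indent level on the cell's alignment rather than as literal spaces, and openpyxl exposes it.

```python
from openpyxl import load_workbook

# Hypothetical file name; adjust to the trial balance being read.
wb = load_workbook("trial_balance.xlsx")
ws = wb.active

for row in ws.iter_rows(min_col=1, max_col=1):  # first column only
    for cell in row:
        indent_level = cell.alignment.indent  # 0 when no indentation is set
        print(int(indent_level), cell.value)
```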
Problem: In certain workflows, it becomes necessary to arrange columns in a specific order for the output. While achieving the desired order for a fixed number of columns is feasible using the select tool, difficulties arise when dealing with dynamic outputs that introduce new columns during each workflow run.
Example: Consider the following scenario: the INPUT data for the select tool includes a set of Question/Answer columns. However, with every run of the workflow, new columns of this type are introduced. The challenge is to ensure that Question N and Answer N columns are grouped together in the OUTPUT dynamically. Unfortunately, this task is not easily accomplished using the current capabilities of Alteryx.
INPUT:
Company | Question 1 | Question 2 | Question 3 | Answer 1 | Answer 2 | Answer 3 |
Contoso | Blah | Bleh | Bly | N | Y | N |
DESIRED OUTPUT:
Company | Question 1 | Answer 1 | Question 2 | Answer 2 | Question 3 | Answer 3 |
Contoso | Blah | N | Bleh | Y | Bly | N |
With Python/Pandas, this problem can be easily resolved by assigning index values to each column and then sorting the columns based on the assigned index:
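A minimal sketch of that approach, assuming pandas and the column names from the example above:

```python
import pandas as pd

df = pd.DataFrame(
    [["Contoso", "Blah", "Bleh", "Bly", "N", "Y", "N"]],
    columns=["Company", "Question 1", "Question 2", "Question 3",
             "Answer 1", "Answer 2", "Answer 3"],
)

def sort_key(col):
    """Fixed columns first, then Question N immediately followed by Answer N."""
    name, _, number = col.rpartition(" ")
    if number.isdigit():
        return (1, int(number), 0 if name == "Question" else 1)
    return (0, 0, 0)

df = df[sorted(df.columns, key=sort_key)]
print(df.columns.tolist())
# ['Company', 'Question 1', 'Answer 1', 'Question 2', 'Answer 2', 'Question 3', 'Answer 3']
```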
So, based on the Python solution, it would be great if Alteryx could do the same. I personally think this would work if the Dynamic Rename tool could hold index info and the Select tool could hold a sort option.
Dynamic Rename: already holds Description info, so it could also hold Index info.
Select tool: could sort by index and retain this setting when the workflow is saved.
Hope this all makes sense.
Thanks.
When creating a connection using DCM (example being ODBC for SQL) - the process requires an ODBC Data Source Name (see screenshot 1 below).
However, when you use the Alias Manager (another way to make database connections), it does allow for DSN-free connections, which are essential for large enterprises (see screenshot 2 below).
NOTE: the connection manager screens do have another option, Quick Connect, which seems to allow for DSN-free connections, but this is non-intuitive, and you're asked to type in the name of the driver yourself, which seems to be an obvious failure point (especially since the list of all installed drivers can be read straight from the registry).
Please could we change DCM to use the same interfaces/concepts as the alias screens, so that all DCM connections can easily be created without requiring an ODBC DSN, and so that DSN-free connections are the default mode of operation?
Screenshot 1: DCM connection (requires an ODBC DSN)
Screenshot 2: Alias Manager connection (DSN-free)
cc: @wesley-siu @_PavelP @ToddTarney
We have lots of tools that create new column(s) from the inputs, e.g., Generate Rows. It would be very nice if the new column(s) were highlighted in the output. This would make it a lot easier for users when developing a workflow.
When a user wants to use Find Nearest to, say, find the nearest location within 200 miles, the dropdown stops at 100.
Similarly, if they want a number in between, e.g., 15, the interface is not intuitive.
While you can just type the number in, the interface doesn't make it look like you can.
Simply adding a "Custom" selection at the bottom would make this much more intuitive.
Hi,
It would be helpful to have Input and Output tools for Project Online, like the SharePoint and OneDrive tools.
This way we could read the projects in tabular form and automate our project management tasks.
Thank you.
When you start using DCM - you may have existing canvasses which use regular old connection strings which you want to migrate to DCM.
Currently (in 2023.1.1.123), when you select "Use Data Connection Manager", it shreds the configuration of your Input tool, which makes it difficult to simply convert an existing connection to a DCM connection.
The only way to make sure you don't lose any configuration on the tool is to use the XML editing functionality of the tools and copy across your old configuration.
Could you please add the capability to keep my current tool configuration, but just change from using a regular connection string to using DCM?
Many thanks
Sean
cc: @wesley-siu @_PavelP
I would like to propose three feature enhancements for the Cross Tab tool under the Transform tool category.
1. Bringing in Concat Unique functionality, which is an idea currently in Coming Soon status.
2. Adding Start and End options in addition to Separator, similar to the Concatenate properties found in the Summarize tool.
3. Changing the default size from 2048 to 1073741823 (the maximum V_WString size). It is common, especially for new users, to ignore truncation errors and potentially miss important data that needs to be processed downstream.
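For points 1 and 2, a small Python sketch of the intended behavior (the function name and defaults are illustrative, mirroring the Summarize tool's Concatenate properties):

```python
def concat(values, separator=",", start="", end="", unique=False):
    """Concatenate values with optional Start/End strings and de-duplication."""
    if unique:
        values = list(dict.fromkeys(values))  # drop duplicates, keep first-seen order
    return start + separator.join(values) + end

print(concat(["A", "B", "A"], separator="; ", start="[", end="]", unique=True))
# [A; B]
```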
I would love to see an option to run only one container without having to disable all others (and tools not in containers).
I've got workflows with MANY different queries/tools, each in their own containers, and some tools outside of containers. Occasionally I need to run or re-run just one of the containers (usually several times, when the data stream contains Cross Tab or Transpose tools whose fields/options will not populate until the workflow has previously run). Normally I'd either have to disable all other containers and/or select EVERYTHING that I do not wish to run and add it all to another container that I could then disable. An option to disable everything outside of a specific container would be most welcome and save a lot of time!
Formula Tool --> Functions --> Operators list
The operator titles for the two comment functions are too similar; the difference cannot be determined without checking the hover text.
Can the title for /* Comment */ be adjusted to make it clear that it is for block or multi-line usage?
I didn't understand the difference until I saw this post on LinkedIn:
https://www.linkedin.com/feed/update/urn:li:activity:7165816592063266817/
/* Comment */ --> /* Block Comment */ | /* Multi-line Comment */
Hi!
Just thought up a simple improvement to the US Geocoder macro that could potentially speed up the results. I'm doing an analysis of some technician data where they visit the same locations over & over again. It's a full-year analysis (200k+ records), and the geocoder takes a while to churn through that much data. In my data, though, it's the same addresses over & over, and the geocoder goes through each one individually.
What I did in my process, and what could be added to the macro, is to put a Unique tool into the process based on address, city, state, and zip, geocode the reduced list, then simply join back to the original data stream using a Join on the address, city, state, and zip fields (or use a Record ID tool to create a unique ID to join on).
In my case, the 200k records were reduced to 25k, which Alteryx geocoded in under a minute, then joined back so my output was still the 200k records (all geocoded now).
Not everyone will have this many duplicates, but I'd bet most data has a few, & every little bit of time savings helps when management is waiting on the results haha!
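The same pattern in a minimal pandas sketch, where geocode_unique stands in for the US Geocoder macro (the function and field names are illustrative assumptions):

```python
import pandas as pd

def geocode_unique(locations: pd.DataFrame) -> pd.DataFrame:
    """Stand-in for the US Geocoder macro: returns the input plus coordinates."""
    out = locations.copy()
    out["Latitude"], out["Longitude"] = 0.0, 0.0  # placeholder values
    return out

# Technician visits, with the same sites appearing many times.
records = pd.DataFrame({
    "Address": ["1 Main St", "1 Main St", "9 Oak Ave"],
    "City":    ["Springfield", "Springfield", "Shelbyville"],
    "State":   ["IL", "IL", "IL"],
    "Zip":     ["62701", "62701", "62565"],
})

key = ["Address", "City", "State", "Zip"]

# Unique tool: geocode each distinct location only once.
geocoded = geocode_unique(records[key].drop_duplicates())

# Join tool: attach the coordinates back to every original record.
result = records.merge(geocoded, on=key, how="left")
print(result)
```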
Hey all,
At present, if you have an existing canvas and you want to move to a DCM connection, you are asked something like "this will reset all of your connection details - are you sure?". If you have complex queries or pre/post SQL, you first have to copy all of this out into Notepad before you can convert to DCM and then reconfigure it all again.
However, if you are not using DCM, you can change data sources in Workflow Dependencies without losing your queries etc.
Could we revisit the user experience of changing to or from a DCM connection to eliminate this "start from scratch" phenomenon? If you are converting from an existing SQL ODBC, ODB, or SSVB connection to a SQL connection via DCM, it should allow you to make the conversion without losing your current configuration, and the same should apply for any other database type.
cc: @mbarone
Please consider implementing a consistent case-sensitive option for all tools and functions.
This post had a good description of the challenge of comparing string values with case-sensitivity, but the post has since been archived:
For all the time I've used Alteryx, I thought that IF "test" = "TEST" would evaluate to false. Today I realised that isn't the case. I'm very surprised that "equals" performs the way it does.
A few existing Ideas request case-sensitivity for individual tools:
Case insensitive option while joining two data sets
https://community.alteryx.com/t5/Alteryx-Designer-Desktop-Ideas/Case-insensitive-option-while-joinin...
Unique tool enhancement - deal with case sensitive data
https://community.alteryx.com/t5/Alteryx-Designer-Desktop-Ideas/Unique-tool-enhancement-deal-with-ca...
This new Idea requests system-wide consideration for case-sensitivity, for all tools and functions.
Current state:
These tools and functions are case-sensitive:
These tools and functions are NOT case-sensitive:
These tools and functions can be either case-sensitive or NOT case-sensitive, depending on the options used:
Current Challenges:
How do we easily identify Lower Case, Upper Case, Mixed Case?
How do we easily compare strings for equality, using case sensitivity?
Request:
Ensure all tools and functions include an option to ignore or consider Case
Create new functions for IsUpperCase, IsLowerCase, IsMixedCase
Create a new function for IsEqual, with an option to ignore or consider Case
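A sketch of the requested semantics, written in Python (IsUpperCase, IsLowerCase, IsMixedCase, and IsEqual are the proposed names, not existing Alteryx functions):

```python
def is_upper_case(s: str) -> bool:
    return s == s.upper()

def is_lower_case(s: str) -> bool:
    return s == s.lower()

def is_mixed_case(s: str) -> bool:
    return not is_upper_case(s) and not is_lower_case(s)

def is_equal(a: str, b: str, ignore_case: bool = False) -> bool:
    return a.casefold() == b.casefold() if ignore_case else a == b

assert is_equal("test", "TEST", ignore_case=True)
assert not is_equal("test", "TEST")  # case-sensitive comparison evaluates to False
```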
See attached workflow, which
As an international organization we deal with clients in multiple-countries.
Name matches for names including Chinese characters generate a Unicode conversion warning and are excluded from the fuzzy match.
It would be good if fuzzy match could be enhanced to handle Chinese characters.
Most of the time I don't want/need the column that I parsed. Provide a check box for whether you want the root column in the output.