Hello,
After using the new "Image Recognition Tool" for a few days, I think you could improve it:
> by displaying the required input image dimensions next to each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that each label gets an equivalent number of images),
> at the very least, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it strictly requires RGB images); a possible workaround sketch is below.
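Until grayscale is supported, a possible workaround (just a sketch, assuming the Pillow package is available alongside the Python tool; the folder names are made up) is to convert the images to RGB before feeding them in:

# Convert MNIST-style grayscale PNGs into 3-channel RGB copies (folder names are made up).
from pathlib import Path
from PIL import Image

src, dst = Path("mnist_png"), Path("mnist_rgb")
dst.mkdir(exist_ok=True)
for png in src.glob("*.png"):
    Image.open(png).convert("RGB").save(dst / png.name)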
Question: will you allow the user to choose between CPU and GPU usage in the future?
In any case, thank you again for this new tool. It is certainly perfectible, but it is very simple to use, and I sincerely think it will help a greater number of people understand the many use cases made possible by image recognition.
Thank you again.
Kévin VANCAPPEL (France ;-))
Hello all,
Like many software products on the market, Alteryx uses third-party components developed by other teams/providers/entities. This is a good thing, since it means standard features for a very low price. However, these components are upgraded very regularly (usually several times a year) while Alteryx doesn't upgrade them; this leads to missing features, performance issues, bugs left uncorrected or, worse, security vulnerabilities.
Among these third-party components:
- cURL (behind the Download tool for APIs): Alteryx ships 7.15 (2006) while the current release is 8.0 (2023)
- Active Query Builder (behind the Visual Query Builder): several years behind
- R: Alteryx ships 4.1.3 (March 2022) while the current is 4.3 (April 2023)
- Python: Alteryx ships 3.8.5 (2020) while the current is 3.10 (April 2023); easy to verify from the Python tool, see the snippet after this list
- etc.
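For reference, the bundled Python version is easy to check from the Python tool itself; this is just standard Python, not an Alteryx-specific API:

# Print the Python interpreter version bundled with Designer.
import sys
print(sys.version)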
Of course, you can't upgrade every single time, but once a year seems like a minimum...
Best regards,
Simon
We have discussed, on several occasions and in different forums, the importance of having, or providing Alteryx with, order of execution control, conditional execution, design patterns and even orchestration.
I presented this idea some time ago, but someone asked me if it was posted, and since it was not, I’m putting it here so you can give some feedback on it.
The basic concept behind this idea is to allow us (users) to have:
This approach involves some functionalities that are already within the product (like exploiting filtering logic, loading & saving, caching, and blocking, among others), exposed within a Tool Container with enhanced attributes, like this example:
The approach is to extend Tool Container’s attributes.
This proposition uses actual functionalities we already have in Designer.
So, basically, the Tool Container gets 'superpowers', with the addition of capabilities like accepting input data, saving the contents within the container (to create a design pattern or a commonly used sequence of tools chained together), outputting data, running the tools included in the container, etc., plus a configuration screen like:
That should serve as a brief introduction to the idea, but taking it a little further, it would even allow something like an orchestration layout, where users can drag and drop containers or patterns and orchestrate them into a solution, like we can do with the Visual Layout Tool or the Interactive Chart tool:
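To make the behaviour concrete, here is a purely conceptual sketch in Python; this is not an Alteryx API, just an illustration of the container semantics described above (input and output anchors, a saved pattern of chained tools, and conditional execution):

# Conceptual only: an "enhanced container" with an enable condition and a saved tool chain.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class EnhancedContainer:
    name: str
    tools: List[Callable] = field(default_factory=list)      # the saved pattern (chained steps)
    enabled_when: Callable[[list], bool] = lambda records: True

    def run(self, records: list) -> list:
        if not self.enabled_when(records):                    # conditional execution
            return records
        for tool in self.tools:                               # ordered execution of the pattern
            records = tool(records)
        return records

# Example: only run the "cleanse" pattern when data actually arrives on the input anchor.
cleanse = EnhancedContainer(
    name="Cleanse pattern",
    tools=[lambda rs: [r.strip() for r in rs], lambda rs: [r.upper() for r in rs]],
    enabled_when=lambda rs: len(rs) > 0,
)
print(cleanse.run(["  alteryx ", " designer "]))              # ['ALTERYX', 'DESIGNER']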
I'm looking forward to hearing what you think.
Best
Hello all,
As of today, we can easily copy or duplicate a table with the In-Database tools. This is really useful when you want data in a development environment coming from a production environment.
But can we, really?
Short answer: no. We can't do it in these cases:
- partitions
- any constraints, such as primary/foreign keys
But even if those ideas were implemented, it would still mean setting these parameters manually.
So my proposal is simply a "clone table" tool that would clone the table from its SHOW CREATE TABLE statement and just let you specify the destination path (base.table). A rough sketch of the idea:
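Just to illustrate (a sketch, not a spec: pymysql, the server name and the table names are all assumptions, and SHOW CREATE TABLE is MySQL-style syntax):

# Hypothetical "clone table": read the source DDL, swap in the destination path, run it.
import re
import pymysql

def clone_table(conn, source: str, destination: str) -> None:
    with conn.cursor() as cur:
        cur.execute(f"SHOW CREATE TABLE {source}")
        _, create_stmt = cur.fetchone()                       # (table name, CREATE TABLE ...)
        src_table = source.split(".")[-1]
        create_stmt = re.sub(rf"CREATE TABLE `?{src_table}`?",
                             f"CREATE TABLE {destination}", create_stmt, count=1)
        cur.execute(create_stmt)                              # keeps partitions, keys, constraints

conn = pymysql.connect(host="db-server", user="simon", password="***", database="prod")
clone_table(conn, "prod.sales", "dev.sales")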
Best regards,
Simon
Hello all,
Here's the issue: I have a workflow in my OneDrive folder.
In that workflow, I use a macro that writes a file with a relative path (..\6_Big_Data\EN\.csv):
Strangely, it doesn't work, and the error message refers to a folder that doesn't exist (and not the one I set, either):
ErrorLink: Output Data (1): https://community.alteryx.com/t5/*/*/ta-p/724327?utm_source=designer&utm_medium=resultsgrid|Cannot access the folder C:\Users\saubert\OneDrive - Business & Decision\Documents\B&D_Market\6_Big_Data\EN\.
I really would like that to work :)
Best regards,
Simon
Hello all,
I'm currently learning the Python language and there is this cool feature: you can multiply a string.
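A minimal example of what I mean:

# Multiplying a string by an integer repeats it.
print("Alteryx! " * 3)      # Alteryx! Alteryx! Alteryx!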
Pretty cool, no? I would like the same syntax to work in Alteryx.
Best regards,
Simon
Hello all,
So, right now, we have two very separate products: Alteryx Designer and Alteryx Designer Cloud. But what if you want to go from Alteryx Designer on your desktop to the cloud?
Well, you will have to rewrite every single workflow, because you can't publish or import your current workflows into Alteryx Designer Cloud. You cannot export a Designer Cloud workflow to Alteryx Designer on the desktop either.
This is a huge limitation on cloud implementation and sales, and it's the ONLY product I know of that isn't compatible between on-premise and cloud.
Please, Alteryx, this is a no-brainer if you want to convince your customers!
Best regards,
Simon
I can't even count how often I have looked at an Excel, CSV or even YXDB file where I KNEW it was generated by Alteryx, but I couldn't remember the workflow. Currently, I simply have to go through all the workflows I ever built and see if I can find it.
Theoretically, I could use a text search across all workflows and see if I can find the output names. The problem here: most of my output filenames are generated dynamically at run time.
It would be amazing if Alteryx could simply write the Workflow name (maybe even path) into the metadata of a file.
(Screenshot from Google, as my OS is set to German)
How about we write "This file was created by Create Controlling Reports.yxmd on 2023-02-06 with Alteryx Designer 2021.4.298434" in the 'Comments' field?
This would make it extremely easy to find which workflow generated the file. The full file path could be an option instead of just the filename, but the path could include the local machine name, which might contain GDPR-relevant information.
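As a stopgap, something like this could already be bolted onto a workflow with the Python tool (a rough sketch; the file name and workflow name are made up, and it relies on the 'Comments' field Explorer shows for .xlsx files being the dc:description core property):

# Stamp workflow provenance into an .xlsx file's "Comments" (dc:description) property.
from openpyxl import load_workbook

path = r"C:\Reports\Controlling_2023-02-06.xlsx"              # made-up path
wb = load_workbook(path)
wb.properties.description = ("This file was created by Create Controlling Reports.yxmd "
                             "on 2023-02-06 with Alteryx Designer 2021.4.298434")
wb.save(path)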
@Community: Is there any additional information that you'd like to see in the metadata?
Best
Alex
Hello,
Just like MonetDB or Vertica, ClickHouse is a column-store database, claiming to be the fastest in the world. It's available in the cloud (like Snowflake) and on Linux and macOS (there for free; it's open-source). It's also very well ranked among analytics databases ( https://db-engines.com/en/system/ClickHouse ) and it would be a good differentiator from competitors.
https://clickhouse.com/
It has become more popular than Greenplum, which is supported (in the trend chart: black = Snowflake, red = Greenplum, orange = ClickHouse):
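Today you would have to go through generic ODBC or the Python tool; a rough sketch of the latter, assuming the clickhouse-driver package and a reachable server:

# Query ClickHouse from Python as a stopgap until a native connector exists.
from clickhouse_driver import Client

client = Client(host="localhost")                             # assumed local server
print(client.execute("SELECT version()"))
print(client.execute("SELECT count() FROM system.tables"))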
Best regards,
Simon
I've seen this question before and have run into it myself. I'd like to see a new tool that would allow a developer (of a workflow) to choose a path of logic based upon criteria known only during the execution of a module.
IF LEFT INPUT count of records < 10,000 THEN Path1 (e.g. use a Calgary join)
ELSE Path2 (e.g. use a standard join)
ENDIF
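In code terms, the concept is nothing more than this (a conceptual sketch, not Alteryx syntax):

# Route the flow down one of two paths based on a record count known only at run time.
def choose_path(left_records, threshold=10_000):
    if len(left_records) < threshold:
        return "Path1"      # e.g. use a Calgary join
    return "Path2"          # e.g. use a standard join

print(choose_path(range(500)))       # Path1
print(choose_path(range(50_000)))    # Path2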
Thanks,
Mark
Hello all,
MonetDB is a very light, fast, open-source database, available here:
https://www.monetdb.org/
I really enjoy it; it works pretty well with Tableau, and it's a good introduction to column-store concepts and analytics with SQL.
It has also gained a lot of popularity these last few years:
https://db-engines.com/en/ranking_trend/system/MonetDB
Sadly, Alteryx does not support it yet.
Best regards
Hello,
SQLite is:
- free
- open source
- easy to use
- widely used
https://en.wikipedia.org/wiki/SQLite
It also works well with the Alteryx Input and Output tools. 🙂
However, I think an In-DB SQLite connector would be great, especially for learning purposes: you don't have to install anything, so it's really easy to implement.
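To show how little setup is involved (standard library only, nothing to install):

# SQLite via Python's built-in sqlite3 module: the database is just a local file.
import sqlite3

con = sqlite3.connect("learning.db")                          # created on first use
con.execute("CREATE TABLE IF NOT EXISTS sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)", [("EU", 120.0), ("US", 80.5)])
for row in con.execute("SELECT region, SUM(amount) FROM sales GROUP BY region"):
    print(row)
con.commit()
con.close()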
Best regards,
Simon
Hello all,
Change Data Capture ( https://en.wikipedia.org/wiki/Change_data_capture ) is an effective way to deal with changes in a database, allowing streaming or delta processing. Several techniques, more or less intrusive, can be applied (and combined), e.g. reading the database logs.
Qlik : https://www.qlik.com/us/streaming-data/data-streaming-cdc
Talend : https://www.talend.com/resources/change-data-capture/
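CDC proper usually reads the database logs; purely to illustrate the delta idea, here is a toy snapshot comparison (made-up data):

# Compare two keyed snapshots and emit inserts, updates and deletes.
def diff(previous: dict, current: dict):
    inserts = {k: v for k, v in current.items() if k not in previous}
    deletes = {k: v for k, v in previous.items() if k not in current}
    updates = {k: v for k, v in current.items() if k in previous and previous[k] != v}
    return inserts, updates, deletes

prev = {1: "Alice", 2: "Bob"}
curr = {1: "Alice", 2: "Bobby", 3: "Carol"}
print(diff(prev, curr))     # ({3: 'Carol'}, {2: 'Bobby'}, {})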
Best regards,
Simon
Hi UX interested parties,
Here are some ideas for you to consider:
1. These lines are BORING and UNINFORMATIVE. I'd like to understand (pic = 1,000 words) more when looking at a workflow.
If you look at lines A, B, and C in the picture above, nothing is communicated. Weight of line, color of line, type of line, beginning and ending line markers: these are all potential ways we could see a picture of the data without having to get into Browse Everywhere to see the information. If we hover over the data connection, even more information could appear (e.g. number of records, size of file) without having to toggle the configuration parameters.
2. Wouldn't it be nice not to have to RUN a workflow to know the last SAVED (run) metadata of a workflow? I'd like to open a "saved" workflow and know what to expect when I run it. Heck, how long the beast takes to run is something we've never seen unless we run it.
3. I'd like the metadata to display SORT keys and order: Sort1 Asc, Sort2 Desc, and so on. This sort information is very helpful for the engine, and I'll likely post about that thought separately. As a preview: when a JOIN tool has sorted data and one of the anchors is at EOF, why do we need to keep reading from the other anchor? There won't be another matched record on the (J) anchor. In my example above, we don't ask for the L/R outputs, so why worry about the rest of the join? (See the toy sketch after this list.)
4. Have you ever seen a map (online) that didn't display watermark information? I think that the canvas experience should allow for a default logo (like mine above, but transparent) in the lower right corner of the canvas that is visible at all times. Having the workflow name at the top in a tab is nice, but having it display as a watermark is handy.
5. Once the workflow has RUN, all anchors are the same color. How about providing GREY/White or something else on EMPTY anchors instead of the same color? This might help newbies find issues in JOIN configuration too.
6. If the tool has ERRORs you put a RED exclamation mark. I despise warnings, but how about a puke colored question mark? With conversion errors, the lines could be marked to let you know the relative quantity of conversion errors (system messages have a limit)
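On point 3, a toy sketch of why the sorted-input case matters (plain Python, duplicates ignored for brevity, only the inner-join output produced): the loop can stop as soon as either sorted input is exhausted.

# Toy sort-merge inner join over two pre-sorted lists.
def merge_join(left, right):
    i = j = 0
    while i < len(left) and j < len(right):   # EOF on either side ends the whole join
        if left[i] < right[j]:
            i += 1
        elif left[i] > right[j]:
            j += 1
        else:
            yield (left[i], right[j])
            i += 1
            j += 1

print(list(merge_join([1, 3, 5, 7], [3, 5, 9, 11, 12])))   # [(3, 3), (5, 5)]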
Just a few top of mind things to consider ....
Cheers,
Mark
Please add official support for newer versions of Microsoft SQL Server and the related drivers.
According to the data sources article for Microsoft SQL Server (https://help.alteryx.com/current/DataSources/SQLServer.htm), and validation via a support ticket, only the following products have been tested and validated with Alteryx Designer/Server:
Microsoft SQL Server
Validated On: 2008, 2012, 2014, and 2016.
This is one of the most popular data sources, and the lack of support for newer versions (especially a 2+ year old product like SQL Server 2017) is hard to fathom.
ODBC Driver for SQL Server/SQL Server Native Client
Validated on ODBC Driver: 11, 13, 13.1
Validated on SQL Server Native Client: 10,11
Hello,
More and more databases have complex data types such as array, struct or map. It would be nice if we could use them in Alteryx as input, internally, and as output, with calculations available on them.
https://cwiki.apache.org/confluence/display/hive/languagemanual+types#LanguageManualTypes-ComplexTyp...
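To make the ask concrete, here is the kind of schema involved (pyarrow is used only to show list and struct columns; the names are made up):

# A table with an array<string> column and a struct<city, zip> column.
import pyarrow as pa

table = pa.table({
    "customer": ["A", "B"],
    "tags": pa.array([["vip", "eu"], ["us"]]),
    "address": pa.array([{"city": "Paris", "zip": "75001"},
                         {"city": "Lyon", "zip": "69001"}]),
})
print(table.schema)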
Best regards,
Simon
We see canvases every day where dozens of fields are brought into a canvas or a macro but never used, and this just creates slowness for no good benefit.
Given that one of the selling features of Alteryx is the speed of processing, could we look at three improvements to the Alteryx engine & Designer:
I love the Workflow Meta Info, especially the ability to set the Author, the search tags, the version, the description, etc.
But why can't we use it as Engine Constants? It doesn't seem very hard to implement, and it would make a real difference for development.
Hey YXDB Bosses,
Let's move forward with our YXDB. Maybe give AMP a real edge over e1. Here are some things that could make YXDB super-powered:
Just a little more craziness from me
cheers
This idea has arisen from a conversation with a colleague, @Carlithian, where we were trying to work out a way to remove tools from the canvas which might be redundant. For example, have you added a Select tool to the canvas which hasn't been configured to change a data type or rename a field? So we were looking for ways of identifying, in the workflow XML, tools which didn't have a configuration applied to them.
This highlighted to me an issue with something like the data cleanse tool, which is a standard macro.
The XML view of the Data Cleanse configuration looks like this:
<Configuration>
<Value name="Check Box (135)">False</Value>
<Value name="Check Box (136)">False</Value>
<Value name="List Box (11)">""</Value>
<Value name="Check Box (84)">False</Value>
<Value name="Check Box (117)">False</Value>
<Value name="Check Box (15)">False</Value>
<Value name="Check Box (109)">False</Value>
<Value name="Check Box (122)">False</Value>
<Value name="Check Box (53)">False</Value>
<Value name="Check Box (58)">False</Value>
<Value name="Check Box (70)">False</Value>
<Value name="Check Box (77)">False</Value>
<Value name="Drop Down (81)">upper</Value>
</Configuration>
As it is a macro, the default labelling of the drop-downs and check boxes is what ends up in the XML. If you were to do something useful with it, wouldn't it be much nicer if the interface tools were named properly, such as:
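For example, something along these lines (the names here are hypothetical; the real labels would come from the macro's interface tools):
<Configuration>
<Value name="Remove Null Rows">False</Value>
<Value name="Remove Null Columns">False</Value>
<Value name="Fields To Cleanse">""</Value>
<Value name="Remove Leading and Trailing Whitespace">True</Value>
<Value name="Modify Case">upper</Value>
</Configuration>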
So when you look at the XML of the workflow, it's clearer to the user what is actually specified.
In short:
Add an option to cache the metadata for a particular tool, so that it isn't forgotten when using tools that have dynamic metadata (such as batch macros) or whose metadata the Alteryx metadata engine can't resolve (such as the Python tool).
Longer explanation:
The Problem:
One of the issues I often encounter when making dynamic workflows, or ones that require calling external services, is that Alteryx often forgets the metadata of which columns to expect. This causes the workflow to forget the configuration of downstream tools when it is first opened or when the metadata engine refreshes. There is currently an option to disable the metadata engine from automatically refreshing, but this isn't a good option because you miss out on much of the value it brings.
Some of the common tools where I encounter this issue:
Solution:
Instead, could we add an option to cache the metadata for a particular tool? This would save the metadata from the last time the workflow ran into the workflow's XML, so that it persists when the workflow is closed and reopened. Then, when the metadata engine gets to this tool, instead of resolving the metadata from the tool, it would use the saved version in the XML. Obviously, when the workflow actually runs, it would ignore the cache and any errors would still occur.
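A conceptual sketch of that resolution order (this is not Alteryx internals; ToolStub and the schema values are made up):

# While designing, prefer the schema cached from the last run; at run time, resolve for real.
class ToolStub:
    def __init__(self, tool_id, schema=None):
        self.id, self._schema = tool_id, schema
    def resolve_schema(self):
        return self._schema                                   # dynamic tools may return None

def resolve_metadata(tool, cache, designing):
    if designing and cache.get(tool.id):
        return cache[tool.id]                                 # saved in the workflow XML
    schema = tool.resolve_schema()
    if schema:
        cache[tool.id] = schema                               # refresh the cache after a run
    return schema

cache = {}
macro = ToolStub("BatchMacro_12", schema=["Region", "Sales"])
resolve_metadata(macro, cache, designing=False)               # a run populates the cache
macro._schema = None                                          # reopened: the macro can't resolve
print(resolve_metadata(macro, cache, designing=True))         # ['Region', 'Sales']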
This could be an option in the navigation pane of each tool. Mockup below:
This would make developing dynamic workflows far easier and resolve issues of configuration being lost when the metadata changes and Alteryx forgets the options.