Hello,
After using the new "Image Recognition Tool" for a few days, I think you could improve it:
> by adding the dimensional constraints in front of each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that each label has an equivalent number of images),
> lastly, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it strictly requires RGB images).
Question: do you plan to let the user choose between CPU and GPU usage in the future?
In any case, thank you again for this new tool. It can certainly still be improved, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again
Kévin VANCAPPEL (France ;-))
I'm glad that there is a Date Filter function, but I was wondering whether it could be changed. I like how the ordinary Filter tool has both a True and a False output; could the Date Filter have the same?
It would be extremely helpful if the Alteryx documentation expanded on how to specify a basic character class from within the documentation page:
so that you could easily tell Alteryx what character class you want, as outlined here:
Currently, it is very hard to look at the documentation and know which characters each class covers. Adding this would be extremely useful. The only way I found the syntax was through the Formula tool menu, which is disappointing given that it's not on the function reference page itself:
https://community.alteryx.com/t5/Alteryx-Knowledge-Base/RegEx-Perl-Syntax-Guide/ta-p/1288
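For reference, the character classes in question are shared by most Perl-compatible regex engines. A quick illustration using Python's `re` module (a sketch of the general syntax, not Alteryx's own engine):

```python
import re

# Common character classes in Perl-compatible regex syntax:
#   \d            any digit, same as [0-9]
#   \w            any "word" character, same as [A-Za-z0-9_]
#   \s            any whitespace (space, tab, newline, ...)
#   [A-Fa-f0-9]   an explicit class, here hexadecimal digits

assert re.findall(r"\d+", "order 42, item 7") == ["42", "7"]
assert re.findall(r"\w+", "ab-cd") == ["ab", "cd"]
assert re.fullmatch(r"[A-Fa-f0-9]+", "1a2B") is not None
```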
I'm stealing this idea from Tableau's number formatting, it's a timesaver.
In the DateTime tool, if I've initially selected a value besides Custom in the "Select the format..." list, then when I click Custom, rather than the Custom textbox being blank, I'd like it automatically populated with whatever format string I just selected. Here's an example screenshot:
The tokenize option would be more powerful if, in addition to Drop Extra with Warning / Without Warning / Error, you could opt to have extra tokens concatenated into the final column.
Example: I have values in a column like these:
3yd-A2SELL-407471
3vd-AAABORMI-3238738
3vd-RMLSFL-RX-10326049
In all 3 cases, I want to split to 3 columns (key, mlsid, mlsnumber), though I only care about the last two. But in the third example, the mlsnumber RX-10326049 actually contains a hyphen. (Yes, the source for this data picked a very bad delimiter for a concatenated value).
I can parse this a lot of different ways - here's how I do it in SQL:
MlsId = substr(substr(listingkey, instr(listingkey, '-')+1), 1, instr(substr(listingkey, instr(listingkey, '-')+1), '-')-1)
MlsNumber = substr(substr(listingkey, instr(listingkey, '-')+1), instr(substr(listingkey, instr(listingkey, '-')+1), '-')+1);
With RegEx tokenize, I can split to 4 or more columns and then use a formula to test for a 4th+ column and re-concatenate. BUT it would be awesome if in the RegEx tokenize I could instead:
1. split to columns
2. # of columns 3
3. extra columns = ignore, add to final column
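The requested "add extra tokens to the final column" behavior can be sketched in Python with `str.split` and a split limit; this is just an illustration of the semantics, not how Alteryx would implement it:

```python
def split_keep_extra(value: str, delim: str, n_cols: int):
    """Split into at most n_cols parts; any extra delimiters stay in the last column."""
    return value.split(delim, n_cols - 1)

# The third example keeps its internal hyphen in the final column.
assert split_keep_extra("3yd-A2SELL-407471", "-", 3) == ["3yd", "A2SELL", "407471"]
assert split_keep_extra("3vd-AAABORMI-3238738", "-", 3) == ["3vd", "AAABORMI", "3238738"]
assert split_keep_extra("3vd-RMLSFL-RX-10326049", "-", 3) == ["3vd", "RMLSFL", "RX-10326049"]
```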
Very confusing.
DateTimeFormat
- Format string - %y is a 2-digit year, %Y is a 4-digit year. How about yy or yyyy? Much easier to remember and consistent with other tools like Excel.
DateTimeDiff
- Format string - 'year', but in the function above the year is referenced as %y?? Too easy to mix these up.
Also, documentation is limited. Give a separate page for each function and an overview to discuss date handling.
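For context, the %y/%Y codes follow the C strftime convention, which Python's `datetime` also uses; a quick illustration of the distinction the post describes:

```python
from datetime import datetime

d = datetime(2017, 9, 10)
assert d.strftime("%y") == "17"    # lowercase: 2-digit year
assert d.strftime("%Y") == "2017"  # uppercase: 4-digit year
```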
It would be great if you could include a new Parse tool to process Business Glossary concepts formatted using the SKOS (W3C) standard in the next version of Alteryx.
SKOS is a widely used standard for the representation of concept and term relationships. It provides a consistent way to define and organize concepts (including versioning), which is essential for the interoperability of these data.
We believe that supporting SKOS in Alteryx would be a valuable addition to the product. It would allow us to:
We understand that implementing support for this standard requires some development effort (possibly done in stages, building from minimal viable support to full-blown support). However, we believe that the benefits to the Alteryx Community worldwide, and to Alteryx as a top-quality data preparation tool, outweigh the cost.
I also expect the effort to be manageable (perhaps a macro will do as a start), given that the standard RDF syntax being used is similar to JSON.
SKOS, which stands for Simple Knowledge Organization System, is a W3C Recommendation for representing controlled vocabularies in RDF. It provides a set of classes and properties for describing concepts, their relationships, and their labels. This allows KOS to be shared and exchanged more easily, and it also makes it possible to use KOS data in Semantic Web applications.
SKOS is designed to be flexible and extensible, so it can be used to describe a wide variety of knowledge organization systems. SKOS and DCAT are also both designed to be interoperable, so they can be used together to create rich and interconnected descriptions of data and knowledge.
Here are some of the benefits of using SKOS:
Here are some examples of how SKOS and DCAT are being used:
As the Semantic Web continues to grow, SKOS is likely to become even more widely used.
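To give a flavor of what a SKOS parse step would consume, here is a minimal sketch that reads SKOS concepts from RDF/XML using only Python's standard library. The namespace URIs are the real W3C ones; the two-concept sample data is invented for illustration, and a dedicated RDF library (e.g. rdflib) would be more robust in practice:

```python
import xml.etree.ElementTree as ET

SKOS = "http://www.w3.org/2004/02/skos/core#"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

# Invented sample: two concepts linked by skos:narrower, in RDF/XML.
sample = f"""<rdf:RDF xmlns:rdf="{RDF}" xmlns:skos="{SKOS}">
  <skos:Concept rdf:about="http://example.org/animals">
    <skos:prefLabel>Animals</skos:prefLabel>
    <skos:narrower rdf:resource="http://example.org/mammals"/>
  </skos:Concept>
  <skos:Concept rdf:about="http://example.org/mammals">
    <skos:prefLabel>Mammals</skos:prefLabel>
  </skos:Concept>
</rdf:RDF>"""

root = ET.fromstring(sample)
concepts = {}
for c in root.findall(f"{{{SKOS}}}Concept"):
    uri = c.get(f"{{{RDF}}}about")
    label = c.findtext(f"{{{SKOS}}}prefLabel")
    narrower = [n.get(f"{{{RDF}}}resource") for n in c.findall(f"{{{SKOS}}}narrower")]
    concepts[uri] = {"label": label, "narrower": narrower}

assert concepts["http://example.org/animals"]["label"] == "Animals"
assert concepts["http://example.org/animals"]["narrower"] == ["http://example.org/mammals"]
```

A real Parse tool would emit one row per concept (or per relationship), which is exactly the tabular shape the `concepts` dictionary flattens to.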
Many of today's APIs, like MS Graph, won't or can't return more than a few hundred rows of JSON data. Usually, the metadata returned will include a complete URL for the NEXT set of data.
Example: https://graph.microsoft.com/v1.0/devices?$count=true&$top=999&$filter=(startswith(operatingSystem,'W...') or startswith(operatingSystem,'Mac')) and (approximateLastSignInDateTime ge 2022-09-25T12:00:00Z)
This will require that the "Encode URL" checkbox in the download tool be checked, and the metadata "nextLevel" output will have the same URL plus a $skiptoken=xxxxx value. That "nextLevel" url is what you need to get the next set of rows.
The only way to do this effectively is an Iterative Macro.
Now, your Download tool has "Encode URL" checked, BUT the next URL in the metadata is already URL-encoded . . . so it will break, badly, when using the nextLevel metadata value as the iterative item.
So, long story short, we need to DECODE the url in the nextLevel metadata before it reaches the Iterative Output point . . . but no such tool exists.
I've made a little macro to decode a URL, but I am no expert. Running the URL through a Find Replace tool against a table of ASCII replacements pulled from w3school.com probably isn't a good answer.
We need a proper tool from Alteryx!
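The iterative pattern being described (decode the next-page link before feeding it back into the loop) can be sketched in Python; `fetch_page` here is a hypothetical stand-in for the Download tool's HTTP call:

```python
from urllib.parse import unquote

def fetch_all(first_url, fetch_page):
    """Follow nextLink-style pagination until no further page is advertised.

    fetch_page is a hypothetical callable standing in for the HTTP request;
    it returns (rows, next_link_or_None). The next link arrives already
    URL-encoded, so it is decoded before being fed back into the loop.
    """
    rows, url = [], first_url
    while url:
        page, next_link = fetch_page(url)
        rows.extend(page)
        url = unquote(next_link) if next_link else None
    return rows
```

The `unquote` call is the missing decode step the post asks for; everything else is the standard iterative-macro shape.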
Someone suggested I use the Formula tool's UrlEncode ability . . .
Unfortunately, UrlEncode does NOT work here. It encodes based on a straight ASCII conversion table, and therefore it encodes characters like ? and $ when it should not. Whoever is responsible for that code in the Formula tool needs to revisit it.
Base URL: https://graph.microsoft.com/v1.0/devices?$count=true&$top=999&$filter=(startswith(operatingSystem,'W...') or startswith(operatingSystem,'Mac')) and (approximateLastSignInDateTime ge 2022-09-25T12:00:00Z)
Correct Encoding:
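Python's `urllib.parse` shows the distinction the post is after: `quote` takes a `safe` parameter so reserved characters like ? and $ survive encoding, and `unquote` reverses it. A sketch of the desired behavior, not the Alteryx implementation:

```python
from urllib.parse import quote, unquote

url = ("https://graph.microsoft.com/v1.0/devices"
       "?$count=true&$top=999&$filter=startswith(operatingSystem,'Mac')")

# Encode only what must be encoded: keep the URL's reserved characters intact.
encoded = quote(url, safe=":/?$&=(),'")

assert "$count" in encoded      # $ and ? were left alone
assert "%24" not in encoded     # ...not blindly converted via an ASCII table
assert unquote(encoded) == url  # decoding round-trips
```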
Dear all, good afternoon. I hope everyone is well.
I believe this suggestion could apply both to the Input Data tool and to the Text to Columns tool.
There are columns with free-text fields that are filled in by internal areas here at the company. We have already tried to agree that these characters, which are often used as delimiters, should not be used in those fields. However, I thought it better to look for a solution on this side, to avoid any such error.
The proposal is to make it possible to isolate the column that contains these special characters, so that they are not interpreted as delimiters by Alteryx, which shifts columns and misaligns the whole report.
Thank you and best regards
Thiago Tanaka
Dear all, good afternoon. I hope you are well.
My idea/suggestion is an improvement to the "Text to Columns" (Parse) tool, where we can split columns using delimiter characters.
Currently, the splitting does not apply to the header, so other means are needed: treating the header as a regular row and later turning it back into a header, or handling the header separately.
It would be nice if the Text to Columns tool itself offered the option to split the header row in the same way.
Thank you and best regards
Thiago Tanaka
The top screenshot shows the DateTime tool and the incoming string formats. It does not show examples. Please show examples like the bottom screenshot. Thank you.
I am parsing retailer promotions and have two input strings:
1. take a further 10%
2. take an additional 10%
I am using the regex parse tool to parse out the discount value, using the following regex:
further|additional (\d+)%
When the input contains examples of both options (i.e. 'further' and 'additional'), the tool only seems to parse the first one encountered.
E.g if I state the regex string as:
further|additional (\d+)%
It only parses line 1 above
And if I state the regex string as:
additional|further (\d+)%
It only parses line 2.
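This behavior comes from alternation precedence: in `further|additional (\d+)%`, the `|` splits the entire pattern into `further` versus `additional (\d+)%`, so only one branch carries the capture group. Confining the alternation with a (non-capturing) group fixes it; illustrated with Python's `re`, and the same grouping applies in any Perl-compatible engine:

```python
import re

texts = ["take a further 10%", "take an additional 15%"]

# (?:further|additional) restricts the alternation to the word choice,
# so the (\d+) capture applies to both branches.
pattern = re.compile(r"(?:further|additional) (\d+)%")

assert [pattern.search(t).group(1) for t in texts] == ["10", "15"]
```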
In the DateTime tool, you should be able to specify AM PM. Some other programs I use would do this with an 'a' at the end. Here is an example of what I think it should be
MM/dd/yyyy hh:mm a
Input Date | Output Date |
---|---|
09/10/2017 11:36 AM | 2017-09-10 11:36:00 |
09/10/2017 11:36 PM | 2017-09-10 23:36:00 |
Maybe I am missing something and this is already doable, but so far I haven't found a clean way to do it.
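For what it's worth, the requested behavior matches the C strftime convention, where %p is the AM/PM marker and %I the 12-hour clock; Python's `datetime` reproduces the table above exactly:

```python
from datetime import datetime

fmt = "%m/%d/%Y %I:%M %p"  # %I = 12-hour clock, %p = AM/PM marker

out = datetime.strptime("09/10/2017 11:36 PM", fmt).strftime("%Y-%m-%d %H:%M:%S")
assert out == "2017-09-10 23:36:00"
assert datetime.strptime("09/10/2017 11:36 AM", fmt).hour == 11
```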
In trying to work with varying content and rules for extracting information, the RegEx tool requires a literal string.
The functions can accept a workflow parameter, but I needed to parse out the results.
For now, I have only a few patterns, but I am looking to handle a more general case. Thanks
Example:
Equipment Id | Type | Clean Equipment ID |
---|---|---|
123L | Line | 123 |
123S | Substation | 123 |
S156 | Substation | 156 |
123X | Bus | 123 |
123L6 | Delivery point | 1236 |
If I want to create the 'Clean Equipment ID' I would have to use a complicated RegEx expression. Wouldn't it be easier for the end user to have a function to do so? Something like Exclude(string, characters)?
In this case it could be: exclude([equipment id],"ABCDEFGHIJKLMNOPQRSTUVWXYZ"), and if I wanted just the letters it could be exclude([equipment id],"0123456789").
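The proposed function is a one-liner in most languages; a Python sketch using `str.translate` (the name `exclude` is taken from the post's suggestion, not an existing Alteryx function):

```python
def exclude(value: str, characters: str) -> str:
    """Remove every occurrence of the given characters from value."""
    return value.translate(str.maketrans("", "", characters))

LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
DIGITS = "0123456789"

# Reproduces the 'Clean Equipment ID' column from the table above.
assert exclude("123L", LETTERS) == "123"
assert exclude("S156", LETTERS) == "156"
assert exclude("123L6", LETTERS) == "1236"
assert exclude("123L6", DIGITS) == "L"
```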