Hello,
After using the new "Image Recognition Tool" for a few days, I think you could improve it:
> by adding the dimensional constraints next to each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that each label has an equivalent number of images),
> finally, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it requires RGB images).
Question: will you, in the future, allow the user to choose between CPU and GPU usage?
In any case, thank you again for this new tool. It still has room for improvement, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again
Kévin VANCAPPEL (France ;-))
In the Output Data tool, when the file type is YXDB, one of the options is to Save Source & Description.
Splitting these into "Save Source" and "Save Description" -- independent options -- would be useful.
Sometimes I don't want the file recipient to know how a field was derived but I want to include a description.
Right now there is no easy way of doing this.
Hi,
I recently started taking online trainings on Alteryx and came across the Tile tool. Though I could understand its purpose to some extent, it would be great if the tool were explained in more depth, with more examples, including how the tile number is calculated in those examples.
Thanks,
Saipriya.
Occasionally, the Calgary Loader tool will not write out all of the fields passed to it. This seems to happen after writing out a certain number of fields and then, on a later rerun, adding a new output field. It is very annoying because you don't know it will happen until processing is complete and you examine the result. I usually delete the Calgary files manually before rerunning, to avoid the versioning, but it still happens.
Also, please make the versioning optional with a check box, default off.
Hello,
I work with Alteryx databases a lot to store historical data so I don't have to pull it again in the future. It would be great if, when creating the database, I could lock it for editing, so that it could only be edited with select usernames and passwords.
Thanks,
Chris
I often copy chunks of a workflow and paste them into the same workflow (or a different one). The chunk always seems to paste just diagonally below the upper-left-most tool, which creates a real mess. I'd like to be able to select a small area within the work area and have the chunk of workflow I'm pasting drop there, instead of on top of the existing build.
Hello All,
I received from an AWS adviser the following message:
_____________________________________________
Skip Compression Analysis During COPY
Checks for COPY operations delayed by automatic compression analysis.
Rebuilding uncompressed tables with column encoding would improve the performance of 2,781 recent COPY operations.
This analysis checks for COPY operations delayed by automatic compression analysis. COPY performs a compression analysis phase when loading to empty tables without column compression encodings. You can optimize your table definitions to permanently skip this phase without any negative impacts.
Observation
Between 2018-10-29 00:00:00 UTC and 2018-11-01 23:33:23 UTC, COPY automatically triggered compression analysis an average of 698 times per day. This impacted 44.7% of all COPY operations during that period, causing an average daily overhead of 2.1 hours. In the worst case, this delayed one COPY by as much as 27.5 minutes.
Recommendation
Implement either of the following two options to improve COPY responsiveness by skipping the compression analysis phase:
Use the column ENCODE parameter when creating any tables that will be loaded using COPY.
Disable compression altogether by supplying the COMPUPDATE OFF parameter in the COPY command.
The optimal solution is to use column encoding during table creation since it also maintains the benefit of storing compressed data on disk. Execute the following SQL command as a superuser in order to identify the recent COPY operations that triggered automatic compression analysis:
WITH xids AS (
    SELECT xid FROM stl_query
    WHERE userid > 1 AND aborted = 0
      AND querytxt = 'analyze compression phase 1'
    GROUP BY xid)
SELECT query, starttime, complyze_sec, copy_sec, copy_sql
FROM (SELECT query, xid, DATE_TRUNC('s', starttime) starttime,
             SUBSTRING(querytxt, 1, 60) copy_sql,
             ROUND(DATEDIFF(ms, starttime, endtime)::NUMERIC / 1000.0, 2) copy_sec
      FROM stl_query q JOIN xids USING (xid)
      WHERE querytxt NOT LIKE 'COPY ANALYZE %'
        AND (querytxt ILIKE 'copy %from%' OR querytxt ILIKE '% copy %from%')) a
LEFT JOIN (SELECT xid,
                  ROUND(SUM(DATEDIFF(ms, starttime, endtime))::NUMERIC / 1000.0, 2) complyze_sec
           FROM stl_query q JOIN xids USING (xid)
           WHERE (querytxt LIKE 'COPY ANALYZE %'
                  OR querytxt LIKE 'analyze compression phase %')
           GROUP BY xid) b USING (xid)
WHERE complyze_sec IS NOT NULL
ORDER BY copy_sql, starttime;
Estimate the expected lifetime size of the table being loaded for each of the COPY commands identified by the SQL command. If you are confident that the table will remain under 10,000 rows, disable compression altogether with the COMPUPDATE OFF parameter. Otherwise, create the table with explicit compression prior to loading with COPY.
_____________________________________________
When I ran the suggested query to check the COPY commands executed, I realized they all belonged to the Redshift bulk output from Alteryx.
Is there any way to implement this "Skip Compression Analysis During COPY" in Alteryx to maximize performance, as AWS suggests?
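For context, this is roughly what the two options from the recommendation look like in plain SQL; a minimal sketch where the table, columns, S3 path, and IAM role are all made up for illustration:

-- Option 1: declare column encodings when creating the table, so COPY
-- never triggers the compression analysis phase (and the data stays
-- compressed on disk, which AWS calls the optimal solution).
CREATE TABLE sales_staging (
    sale_id  BIGINT        ENCODE az64,
    sold_at  TIMESTAMP     ENCODE az64,
    amount   DECIMAL(12,2) ENCODE az64,
    region   VARCHAR(32)   ENCODE lzo
);

-- Option 2: skip the analysis on the COPY itself (per AWS, only for
-- tables expected to stay under roughly 10,000 rows).
COPY sales_staging
FROM 's3://my-bucket/sales/'                              -- hypothetical S3 path
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'  -- hypothetical role
COMPUPDATE OFF;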
Thank you in advance,
Gabriel
I have a process that sends out about 1,500 emails. Every once in a while, it will get stuck at some percentage, and I have to cancel the workflow, figure out how many emails were sent, and then skip that many emails to avoid sending duplicates. To figure out how many were sent, I currently take the tool's percentage at cancellation, subtract 50% (since that is where it starts), multiply by 2 to get the fraction of rows processed, and multiply that by the number of rows to get the approximate row where it froze; I then reach out to individuals to see if they received the email, to narrow down exactly where the error occurred.
Example: (60% - 50%) * 2 = 20%; 20% * 1,249 rows = 249.8, so roughly row 250.
This has been pretty accurate in the past, but it is obviously not ideal. Is there no way to show us how many were sent, even if we cancelled the workflow mid-processing?
Our company is loving the Insights tool, but I am constantly asked by users whether they can export the data behind the graphs. For example, we have an inventory dashboard for vehicles that starts at a corporate level but can be drilled down to a "Regional" and then an even more focused "Managed Area" level. Once users get down to the "Managed Area" level, they want to export the line-level data feeding into the Insights chart so they can actually view, work, and action the data at a vehicle level.
Essentially, an option to export the data feeding into the graphs.
Since Alteryx uses R for a lot of its predictive and data analysis tools, workflows take a while to run whenever an R-based tool is involved. I was told by a solutions engineer that this is because Alteryx opens and closes R in the background.
Sometimes my workflow has a bunch of tools running R in the background, and it takes forever to run.
I think there should be a user setting that lets the user choose whether to start R along with Alteryx and keep it running in the background.
Thanks,
Hi there,
As a beginner in Alteryx with experience in other analytics software, I noticed a very simple thing that I think could be adjusted to improve the beginner experience. I'm also happy to hear if this is already possible.
When I was doing an introduction training, I noticed that a lot of the questions were about not being able to see the right output, even though the right tools and settings were being used. Luckily, we had a good trainer who immediately saw the very simple reason for this: the 'output' anchor (sometimes named differently; in a Filter, for instance, it is called 'True' or 'False') was not selected. Instead, people were looking at the input or something else. I can even imagine that some more advanced users have spent a few minutes wondering what was wrong until they realised they weren't looking at the output.
It seems a bit random whether the output or the input gets selected, and as someone with experience in (preventing) addiction in the gaming industry, I know that the first experience is crucial for someone to get 'hooked' :-), and this small inconsistency breaks the flow a bit. Could you make the default setting such that a tool shows the output rather than the input? A possible addition would be an option that switches a tool back to the input every time it gets deselected. From a programmer's/data scientist's perspective, that would also make a lot of sense.
Regards,
Charles
Older versions of the Publish to Tableau Server macro had an option to Request an Authentication Token; the latest version does not. Please return this option to the tool, as it is very useful for constructing REST API call scripts.
Thank you!
~ Eric Marowitz
Please add a hover-over that shows the value of a variable in the Formula tool. At times I have long formulas, and it would be nice to see the value of each variable just by putting the mouse on top of it; showing just the first row, like the preview, would be enough. There is similar functionality in Visual Studio, and it makes coding easier.
Hi All,
I believe the following would improve the functionality of the Select tool.
The idea is to have a defaulting option for each field in the Select tool (which I believe should remain a lightweight tool, i.e. one that does not adversely impact performance and gives the best exhaustive picture of all columns flowing through a particular point in the pipeline).
Following are some cases where defaulting might come in handy:
1) Fields that are supposed to hold monetary data - instead of Null, one can put 0.00 to help roll-up summaries work properly.
2) Fields that are supposed to hold dates (say, an expiry date) - instead of Null, one can put an enterprise standard like 31-12-2099 to avoid mixing Nulls and 31-12-2099.
3) Fields that are supposed to hold purchase quantity / number of employees / number of merchandise - instead of Null, one can put 0 (and not 0.00), again to help with roll-up summaries.
4) Fields that are supposed to hold currency - instead of Null, one can put USD.
5) Fields that are supposed to hold dates (this time, say, a create date) - instead of Null, one can hardcode an actual date, or an additional feature could allow functions like Now().
At present, one way of achieving the same is to put a Formula tool downstream and code the desired defaults inside it (essentially the COALESCE-style logic sketched below).
The benefits of having this functionality inside the Select tool would be:
1) It would be more user-friendly and faster to build: just writing '0.00' or 'USD' or '31-12-2099', compared to writing IF IsNull()... statements.
2) Inside the Formula tool, the user needs to pull each desired field from the drop-down, so an exhaustive view of all fields passing through the pipeline is not available. The pain of selecting fields from the drop-down and writing actual formulas grows with the number of columns, and is more prone to human omission errors.
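To make the comparison concrete, this is a minimal sketch, in SQL terms, of the per-field defaulting described above (the table and column names are hypothetical):

SELECT
    COALESCE(amount, 0.00)                   AS amount,        -- case 1: monetary, default 0.00
    COALESCE(expiry_date, DATE '2099-12-31') AS expiry_date,   -- case 2: enterprise-standard date
    COALESCE(purchase_qty, 0)                AS purchase_qty,  -- case 3: count, default 0
    COALESCE(currency, 'USD')                AS currency,      -- case 4: currency code, default USD
    COALESCE(create_date, CURRENT_DATE)      AS create_date    -- case 5: default to today (Now()-style)
FROM purchases;

Doing the same in a Formula tool means one IF IsNull() expression per field, which is exactly the overhead the idea would remove.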
Thanks,
Rohit Bajaj
Can a spell check option be included that will check the spelling in the comment tool text box and tool container captions?
Ideally, a global check of the annotation section of each tool would be nice, especially if it can detect whether I changed the annotation from the default on some tools.
Hello,
If I go to Options --> Advanced Options --> System Settings, why do I have to click the [Next] button several times before I can get to the "Engine" tab at the very bottom? Why not simply create a user-friendly UI screen where we could navigate directly to the section we want?
Please improve the UI.
Thanks!
There should be a macro input tool that works with In-DB tools, so a macro can read an In-DB stream. Similarly, there should be a macro output tool for writing from In-DB tools.
If you copy a text box named ClientCode to another workflow, the name of the box will be reset to Text Box (#) in the new workflow. That can be a snag if the workflow containing the text box is deployed to the Gallery as an app that is run via the Gallery API: the API parameters will be looking for a text box named ClientCode, but it is now named Text Box (#). This happens almost in the background, without the developer knowing they have renamed the text box, and the ES eventually fails. It can be annoying; it would be great if the name were inherited.
This is more of an enhancement than a new idea. When building an application that, on success, displays its results in separate Browse windows, it would be nice to be able to give those browser windows a title. Currently you see Browse (22), Browse (38), etc. My app checks a certain key value in multiple tables/files and presents the table results if found. I have to rename the data to know which file it is coming from, whereas if the browser windows had titles, you would know which file each represents. The titles could be added in the Interface Designer (see attached).
During development, it seems the syntax checker (or whatever process runs behind the scenes after a tool is modified) reviews the full workflow.
For example, just from observation: if I modify the file name in an Output tool, I don't see why that should rerun the full syntax check.
Limiting the check to what changed would reduce the time spent waiting to continue development.