Hello,
After using the new "Image Recognition Tool" for a few days, I think you could improve it:
> by listing the dimensional constraints in front of each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that there is an equivalent number of images for each label),
> finally, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it requires RGB images; see the sketch below for a possible workaround).
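As a possible workaround for the RGB requirement, here is a rough sketch in Python of converting the grayscale images up front (this assumes the Pillow library; the folder names are just examples):

```python
# Convert single-channel (black & white) images to 3-channel RGB copies so they
# can be fed to a tool that only accepts RGB input. Folder names are hypothetical.
from pathlib import Path
from PIL import Image

src_dir = Path("mnist_grayscale")   # folder of grayscale PNGs
dst_dir = Path("mnist_rgb")         # output folder for the RGB copies
dst_dir.mkdir(exist_ok=True)

for png in src_dir.glob("*.png"):
    img = Image.open(png).convert("RGB")  # replicates the single channel into R, G and B
    img.save(dst_dir / png.name)
```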
Question: do you plan to let the user choose between CPU and GPU usage in the future?
In any case, thank you again for this new tool. It can certainly still be improved, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again
Kévin VANCAPPEL (France ;-))
Hi everybody! As you can read here, I needed to insert a macro (Publish to Tableau Server) at the end of a self-made app.
I have actually found 2 different ways to solve the problem:
1) Turn the macro into an app and use 2 chained apps.
2) Copy and paste tools (normal and interface) from the macro to my app.
Both solutions work, but both require quite a lot of editing and re-work of something that has actually already been done. It's like re-inventing the wheel!
A quick way to merge 2 configuration interfaces would be really useful.
I understand the need for "exclusive rights" when using an input tool. Unfortunately, due to the nature of some corporate data, getting write access to a file is not always possible. I would like the ability to configure an input tool to open a file in "Read Only" mode while producing a warning message that the file was processed in that mode and may not contain the latest version of the data. I envision this as a checkbox option in the tool configuration panel.
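To sketch the behaviour I have in mind in Python terms (the file name and warning text here are purely illustrative):

```python
# Open a data file without requesting write access, and warn the user that the
# contents may be stale because the file was not locked exclusively.
import warnings

def read_shared_file(path: str) -> str:
    with open(path, "r", encoding="utf-8") as f:   # read-only, no exclusive lock requested
        data = f.read()
    warnings.warn(
        f"{path} was opened in Read Only mode and may not contain the latest version of the data."
    )
    return data
```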
Hi,
In February 2016, Australia released the Geocoded National Address File (G-NAF) to the public at no extra cost, and it will be continually updated each quarter.
I think it would be a game changer to build this functionality natively into the Alteryx product to give any Alteryx user simple access to it. I also think it would drive a lot of sales for the Alteryx product.
http://www.data.gov.au/dataset/geocoded-national-address-file-g-naf
Adrian
I have a problem where bulk upload is failing because the last column of the table that the data is being imported into, is using the DEFAULT data type option. I am not passing through any value to this column as I want the DEFAULT value specified to always be applied.
The COPY command fails in this scenario if you don't specify an explicit field list.
More details of the problem can be seen in this post, along with a workaround:
A tick box option should at least be added to the bulk upload tool to enable explicit field list specification based on the column names coming into the tool.
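To illustrate the workaround, here is a rough sketch of issuing the COPY with an explicit column list from Python (this assumes a psycopg2 connection to Redshift; the table, columns and S3 location are hypothetical):

```python
# Issue a Redshift COPY with an explicit column list so the column that uses the
# DEFAULT option is left out and its default value is always applied.
import psycopg2

conn = psycopg2.connect("host=example-cluster dbname=analytics user=loader password=secret")
cols = ["id", "customer", "amount"]  # every column except the DEFAULT one
copy_sql = f"""
    COPY sales ({", ".join(cols)})
    FROM 's3://example-bucket/sales.csv'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopy'
    CSV;
"""
with conn, conn.cursor() as cur:
    cur.execute(copy_sql)
```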
I have been going back and forth between the Output tool and the Render tool. The Render tool works well when you want formatting. It's also great when you don't want an output created when the record count is 0, whereas the Output tool always generates a file regardless of whether there are records present. The Output tool works well when you have a lot of fields, but then you cannot easily control styling.
My issue is that I have a Render tool connected to a Table tool. In the Table tool everything looks neat; there's no wrapped text and no unnecessary white space (auto column width is not necessary). With the rendered output, however, you don't know what to expect. Especially when you have a lot of fields (30+), data gets truncated and column width is forcibly narrowed due to the paper dimensions used in the Render tool. I skip the letter and tabloid formats and now have to mess with a custom paper width (e.g. 50) to get my reports looking right, and when you have dynamic fields this is not ideal.
Would it be possible to make the paper size/width automatic in the render tool just like in the table and layout tool? Then this tool also doesn't negate what layout/table tools do so well.
I have a module that queries a large amount of data from Redshift (~40 GB). It appears that the results are stored in memory until the query completes; consequently, my machine, which has 30 GB memory, crashes. This is a shame because Alteryx is good with maintaining memory <-> HDD balance.
Idea: Create a way to offload the query results onto the HDD as they are received.
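To illustrate the idea, here is a rough sketch of streaming the results to disk as they arrive, using a server-side cursor (this assumes psycopg2; the connection details, query and file name are hypothetical):

```python
# Stream query results to a file in chunks instead of holding all ~40 GB in memory.
import csv
import psycopg2

conn = psycopg2.connect("host=example-cluster dbname=analytics user=reader password=secret")
with conn, conn.cursor(name="stream") as cur:   # a named cursor is server-side in psycopg2
    cur.itersize = 100_000                      # rows fetched per round trip
    cur.execute("SELECT * FROM big_table")
    with open("big_table_spill.csv", "w", newline="") as f:
        writer = csv.writer(f)
        for row in cur:                         # rows arrive in batches and go straight to disk
            writer.writerow(row)
```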
In the old charting tool, you could change the order of the series by moving them up or down. This feature has been eliminated from the new charting tool. I want sales percent to be the top bar, but no matter which 'series' slot I put it in, inventory percent is the top bar, as the order appears to be alphabetical. It would be nice to have that option added back in.
In the new charting tool, you can adjust the number of decimal places for your series, which is great. Adding '%' as a suffix to the series label would be nice. There is an option to add prefix/suffix to the axis label, but not to the series.
The horizontal axis (x-axis) labels display at an angle. Having an alignment option (center, vertical, horizontal, etc.) would be preferable.
It would be nice, in a workflow with multiple paths, to be able to run only one line of it. This would let me run only what I need: instead of having to run the entire workflow, I could run up to a certain point and only the tools required to get to that point.
Other tools that I have used let you choose where you are caching from, so instead of always having to cache at the input, you could cache after a big join. This would be great for efficiency, as running everything through the entire workflow every time is inefficient and I end up spending a lot of time waiting for my workflow to go through the same tools.
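To illustrate the kind of caching I mean, here is a rough sketch in Python that reuses a saved copy of an expensive join when one exists (this assumes pandas with a Parquet engine installed; the file and column names are hypothetical):

```python
# Cache the result of an expensive join to disk and reuse it on later runs, so
# downstream steps don't force the whole upstream pipeline to run again.
import os
import pandas as pd

CACHE = "joined_cache.parquet"

if os.path.exists(CACHE):
    joined = pd.read_parquet(CACHE)                      # reuse the cached join
else:
    orders = pd.read_csv("orders.csv")
    customers = pd.read_csv("customers.csv")
    joined = orders.merge(customers, on="customer_id")   # the expensive step
    joined.to_parquet(CACHE)                             # cache it for next time

# downstream work continues from `joined` without re-running the join
summary = joined.groupby("region")["amount"].sum()
```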
For the purpose of debugging a workflow, I often filter just one customerID or any other ID to analyse the workflow.
With the Browse tool (Ctrl+Shift+B) you can just double-click a cell and copy its value. This is not possible in the Results window; it would be nice if that became possible.
Thanks,
Hans
It would be great if you could create default settings for the Tool Containers. As workflows become larger, I use containers a lot. But once I have 10-15 containers, I have to set all of them to have a Transparency of 1 and a margin of None. While the changes don't take long to make, it would be nice if they could be preset.
I'd like to be able to disable a tool container but not minimize it so I can still see what's in there. Maybe disabled containers could be grayed out the way the output tools are when you disable them. We would still need to retain current features in case people like it that way, but it would be nice to choose.
With SSIS, you can invoke precedence constraint(s) so that downstream flows do not run until one or more upstream flows complete. A simple connector should allow you to do this. Right now, I have my workflows in containers and have to disable/enable the different workflows, which can be time consuming. Below is a better definition:
Precedence constraints link executables, containers, and tasks in packages in a control flow, and specify conditions that determine whether executables run. An executable can be a For Loop, Foreach Loop, or Sequence container; a task; or an event handler. Event handlers also use precedence constraints to link their executables into a control flow.
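To make the concept concrete, here is a rough sketch of a precedence-style constraint in plain Python (the task names and success conditions are hypothetical):

```python
# Run a downstream task only if all of its upstream tasks completed successfully,
# mimicking an SSIS-style precedence constraint with a simple connector.
def load_staging() -> bool:
    print("loading staging tables")
    return True            # report success

def refresh_reports() -> bool:
    print("refreshing reports")
    return True

def publish_dashboard() -> None:
    print("publishing dashboard")

upstream = [load_staging, refresh_reports]
if all(task() for task in upstream):   # the precedence constraint: every upstream task succeeded
    publish_dashboard()
else:
    print("upstream failure: downstream flow skipped")
```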
Hey guys!!
I was just thinking... they might not need to fully build out a Python IDE, but could still reach the same objective.
You should be able to keep a Python file on its own and call it from R. By doing this, you might be able to have the JSON/XML handling of Python with the visual/stats power of R, while it is nicely bundled in your workflow. This uses base functions in R and does a good job turning a pandas dataset into an R data frame you can move along your workflow.
You could always just use this same idea to write a file somewhere, and once it's written, your workflow will continue. If you do, the code is literally 1 line in R... Anyway, let me know your thoughts! 🙂
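For the file-handoff variant, here is a rough sketch of the Python side, i.e. the script the workflow would call before continuing (the file names are hypothetical):

```python
# Python side of the handoff: do the JSON handling, then write a flat file that
# the R / workflow side picks up once it exists.
import json
import pandas as pd

with open("input.json", "r", encoding="utf-8") as f:
    records = json.load(f)                    # JSON parsing handled in Python

df = pd.json_normalize(records)               # flatten nested JSON into a table
df.to_csv("handoff_for_r.csv", index=False)   # R (or the workflow) reads this file next
```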
Will this work for your organization?
Limit conversion warning allows for a minimum of 1 message. Can we set the minimum to 0 to completely ignore the message?
Perhaps warning messages could be given a function similar to ERROR messages, allowing the designer to choose Ignore, Warn or Cancel?
ConvError: Imputation (441): Tool #104: No demand: 0.200000000000031 had more precision than a double. Some precision was lost.
ConvError: Summarize (456): Data: 0.360000000004675 had more precision than a double. Some precision was lost.
End: Designer x64: Finished running FP Model - Marquee Crew v3.yxmd in 32.3 seconds with 16 field conversion errors and 4 warnings
Thanks,
Mark
Field Summary is a great tool, but it would be nice to have a count and a count of non-null values on it.
It would save an extra Select if text files could be parsed with the correct format for dates and times on input.
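For comparison, here is a rough sketch of what I mean in Python terms (this assumes pandas; the file and column names are hypothetical):

```python
# Read a delimited text file with the date column typed correctly at input time,
# instead of reading it as a string and converting it in a separate step.
import pandas as pd

df = pd.read_csv(
    "transactions.txt",
    sep="|",
    parse_dates=["order_date"],   # typed as datetime on read, no extra conversion step
)
print(df.dtypes)
```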
It would be super helpful if there were a way to
1. have an active list of all inputs/outputs that, if the links were changed, would update the connection for every occurrence of that input/output in the workflow
2. a similar list of formulas that you could simply reference in a Formula tool, so if you have to change the source formula, it's automatically updated in all the linked occurrences of that formula.
In the Designer, it would be nice if the projection of a .shp file could automatically be read from its corresponding .prj file.
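For illustration, here is a rough sketch of reading the projection from the sidecar .prj file (this assumes the pyproj library; the file names are hypothetical):

```python
# Read the WKT projection definition stored next to a shapefile and parse it,
# which is essentially what automatic .prj handling would do.
from pathlib import Path
from pyproj import CRS

shp = Path("parcels.shp")
prj = shp.with_suffix(".prj")   # sidecar file holding the WKT definition

crs = CRS.from_wkt(prj.read_text())
print(crs.name)                 # e.g. the projected coordinate system's name
```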
Both of these can be partially accomplished with the output of the directory tool:
- List Directories - Summarize unique list of directories from the directory tool output
- Exclude Paths - use a Filter tool to exclude files or directories based on patterns
However, here are some scenarios that aren't addressed cleanly:
- Directories are not listed unless there is a file contained within. The tool is called Directory, but it only lists files, and a directory has to be non-empty to be listed. An option is needed to list files and/or directories, optionally including empty ones.
- I can't figure out whether the file specification is a wildcard expansion only or also supports regular expressions for inclusion/exclusion. I see in the File Browse tool that you can list multiple formats, e.g. Text Files (*.txt)|*.txt|All Files (*.*)|*.*. Here is a use case where this is required: our network shares have Windows file restore snapshots stored in a ~snapshot directory. We don't want the Directory tool to traverse this directory (because it literally takes hours to scan), but there isn't an elegant way to exclude it. If you filter it from the Directory tool output, it has already been scanned. What we've done is generate the top-level directory list outside of the tool and feed it into a macro that has a Directory tool (with sub-directory scanning enabled) inside.
- Another way to address this specific scenario would be to have an option to exclude traversal of "hidden" folders, but a more generic approach is ideal (see the sketch after this list).
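Here is the sketch mentioned above: a rough Python example where excluded and "hidden" directories are pruned before they are traversed (only the ~snapshot name comes from our environment; everything else is illustrative):

```python
# Recursively list files while pruning excluded directories *before* they are
# traversed, so a huge ~snapshot tree is never scanned at all.
import os

EXCLUDED = {"~snapshot"}

def list_files(root):
    for dirpath, dirnames, filenames in os.walk(root):
        # prune in place: os.walk will not descend into the removed entries
        dirnames[:] = [
            d for d in dirnames
            if d.lower() not in EXCLUDED and not d.startswith(".")   # also skip dot-folders
        ]
        for name in filenames:
            yield os.path.join(dirpath, name)

for path in list_files(r"\\server\share"):
    print(path)
```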