
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!
Submitting an Idea?

Be sure to review our Idea Submission Guidelines for more information!


Featured Ideas

Feature: If an instance of Alteryx is already running, double-clicking a *.yxmd file in Windows Explorer should open it in a new tab of the existing instance instead of launching another instance of Alteryx.

 

Issue: Each new instance of Alteryx adds load on system memory due to an additional AlteryxGui.exe process.

 

Workaround: Currently we can drag and drop the *.yxmd file from Windows Explorer onto the running Alteryx instance to open it in a new tab of the current instance, but the same behaviour on double-clicking the *.yxmd would be highly appreciated.


Currently, Excel files with the top row frozen do not read into Alteryx. This causes extra manual work, as a default setting for the output of one of my reports freezes the top row automatically.
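
As a stopgap, the frozen pane can be cleared before the file ever reaches Alteryx. A minimal sketch, assuming openpyxl is available; the file names are hypothetical:

    # Sketch: remove frozen panes from a workbook before handing it to Alteryx.
    # Assumes openpyxl is installed; "report.xlsx" is a hypothetical file name.
    from openpyxl import load_workbook

    wb = load_workbook("report.xlsx")
    for ws in wb.worksheets:
        ws.freeze_panes = None   # clear any frozen rows/columns
    wb.save("report_unfrozen.xlsx")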

 

I know that, for the most part, the Alteryx core data bundle is the only data that is part of Allocate. It would be great if you could open up Allocate to users so we can add our own third-party data sources. Just tell us what the requirements are to make our datasets Allocate-ready and we can load them ourselves. Then we could use the Allocate workspace to query that data in a similar way.

For example:

Geography(DMA,*)
Variable(CYADULT18P,CYADULT18P,False)

My field names can be a little long (about 20 uppercase characters). As of today, I have to resize the column displaying the field name every time I browse a Select tool, which is pretty much constantly. It isn't blocking, but it is clearly frustrating that Alteryx doesn't save the size of the column...

Best regards.


It would be great if we could set the default size of the window presented to the user upon running an Analytic App. Better yet, give us the option to have it sized dynamically (auto-sized to the number of input fields required).

Hello,

 

Please enable wildcard support for the Amazon S3 Download tool.

 

Add this to the "Object Name" field in the tool's configuration.

 

The current workaround is to use a macro to iterate over the filenames matching a pattern.

Adding this ability in the connector would remove the need for a macro.
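
For reference, the matching the macro performs could also be sketched outside Designer; a minimal example, assuming boto3 with configured credentials, and hypothetical bucket, prefix, and pattern names:

    # Sketch: emulate a wildcard "Object Name" by listing keys and filtering.
    # Assumes boto3 credentials are configured; bucket and pattern are hypothetical.
    import fnmatch
    import boto3

    s3 = boto3.client("s3")
    bucket = "my-bucket"
    pattern = "exports/2016-*/sales_*.csv"

    paginator = s3.get_paginator("list_objects_v2")
    matching_keys = []
    for page in paginator.paginate(Bucket=bucket, Prefix="exports/"):
        for obj in page.get("Contents", []):
            if fnmatch.fnmatch(obj["Key"], pattern):
                matching_keys.append(obj["Key"])

    for key in matching_keys:
        s3.download_file(bucket, key, key.replace("/", "_"))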

 

Thank you.

 

Dennis

Allow the Input Data tool to accept variable-length records (i.e., a variable number of fields per record).  I have a file with waypoints of auto trips; each record has a variable number of points, e.g., lat1, lon1, lat2, lon2, etc.  Right now I have to use another product to pad all records out to the maximum number of fields in order to bring the file into Alteryx.
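
The padding step itself is simple enough that a small pre-processing script could stand in for that other product; a minimal sketch, assuming a comma-delimited file and hypothetical file names:

    # Sketch: pad every record to the maximum field count so Alteryx sees a
    # rectangular file. File names are hypothetical; assumes comma-delimited input.
    import csv

    with open("trips_ragged.csv", newline="") as f:
        rows = list(csv.reader(f))

    max_fields = max(len(r) for r in rows)

    with open("trips_padded.csv", "w", newline="") as f:
        writer = csv.writer(f)
        for r in rows:
            writer.writerow(r + [""] * (max_fields - len(r)))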

The DateTime tool is a great way to convert various string arrangements into a Date/Time field type. However, this tool has two simple but annoying shortcomings:

 

  1. Convert Multiple Fields: Each DateTime tool only lets you convert one field. Many Alteryx tools (MultiField, Auto Field, etc.) allow you to choose which field(s) the tool affects.  If I have a database with a large number of string fields that all share the same format (such as MM/DD/YYYY), I should be able to use one DateTime tool to convert them all!
  2. Overwrite Existing Field: The DateTime tool always creates a new field containing the converted date/time. I ALWAYS have to delete the original string field that was converted and rename the newly created date/time field to match the original field's name. A simple checkbox (like the "output imputed values as a separate field" checkbox in the Imputation tool) could give the flexibility of either keeping a separate field (as it is now) or overwriting the string field with the converted date/time field (keeping the name the same).

Alteryx is, overall, an amazing data blending product. I recognize that both of these shortcomings can be worked around with combinations of other Alteryx tools (or LOTS of DateTime tools), but the simplicity of these missing features shows me that this part of the tool is not sufficiently developed. These enhancements would greatly improve the efficiency of date handling in Alteryx.
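
For comparison, here is how small the requested behaviour is outside Designer; a minimal pandas sketch that converts several same-format string columns and overwrites them in place (the column names are hypothetical):

    # Sketch: convert several same-format string columns to dates in place,
    # overwriting the originals. Column names and data are hypothetical.
    import pandas as pd

    df = pd.DataFrame({
        "order_date": ["01/15/2016", "02/03/2016"],
        "ship_date":  ["01/20/2016", "02/07/2016"],
    })

    date_cols = ["order_date", "ship_date"]
    for col in date_cols:
        df[col] = pd.to_datetime(df[col], format="%m/%d/%Y")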

 

STAR this post if you dislike the inflexibility of the DateTime tool! Thank you!

I have large denormalized tables as input, and each time I need to scroll through approximately 700+ fields to get an exhaustive view of which fields are selected (even if I have selected only 10 out of 700).

 

It would be helpful if, along with sorting on field name and field type, I could also sort on selected/deselected fields. Additionally, sorting by more than one option (i.e. sorting within an already sorted list) would help too - for example, selected fields first, and within those, sorted by field name.

 

I can get an idea of the selected fields from any tool downstream of the source, but I would like an exhaustive view of both selected and unselected fields so that I can pick or remove fields as the business need dictates.

"Enable Performance Profiling" is a great feature for investigating which tools within a workflow are taking up most of the time. This works well during development.

It would be ideal to have this feature extended for the following use cases as well:

 

  • Workflows scheduled via the scheduler on the server
  • Performance profiling of macros and apps when executed from a workstation as well as from the scheduler/Gallery

 

Regards,

Sandeep.

 

 

It was discovered that the 'Select' tool does not throw warning messages when data truncation is happening, while the relevant warning is reflected by the 'Formula' tool. I think it would be good to have consistent logging of warnings/errors across all tools (at least consistent across those used in the same scenarios - e.g. when using Alteryx as an ETL tool, 'Select' and 'Formula' usage is commonplace).

 

Without this in place, it becomes difficult to rely on Alteryx to highlight truncation-related errors/warnings consistently in a workflow that moves data from source to target. It also adds the overhead of building custom logic to capture such data issues, and that logic differs tool by tool - e.g. when data passes through a 'Formula' tool there is no need for custom truncation logging, but when the same data passes through a 'Select' tool in the workflow it has to be captured manually.
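
Until the warnings are consistent, the custom capture described above tends to look something like the following; a minimal sketch with hypothetical field names and sizes:

    # Sketch: flag values that would be truncated by a target field size,
    # mimicking the warning the Select tool currently omits. Names and sizes
    # are hypothetical.
    target_sizes = {"customer_name": 30, "city": 20}
    record = {"customer_name": "A" * 45, "city": "Springfield"}

    for field, max_len in target_sizes.items():
        value = record[field]
        if len(value) > max_len:
            print(f"Warning: '{field}' truncated from {len(value)} to {max_len} characters")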

I do a lot of ETL with data cleanup. I'd really like to be able to output the log file of any processes run on my desktop Alteryx. This would also allow adding Info tools to capture changes. The log file could be parsed and recorded as processing metadata.

Is there a reason why Alteryx does not include hierarchical clustering?

 

Granted, it's sort of slow, especially with huge data sets (the computational effort grows roughly cubically), but when you need to do two-step clustering - "creating more than enough k-means clusters and joining the cluster centers with hierarchical clustering" - it seems to be a must...

 

P.S. KNIME, SPSS Modeler, SAS, and RapidMiner already have it...

It seems that version 10.6 (still in beta) will have an easy-to-use linear programming tool... We'll be able to allocate assets optimally, optimize our marketing decisions by feeding in the predictions we produced with the predictive tools, etc.

 

But what happens when it comes to non-linear models? The idea is to add an evolutionary optimization capability to Alteryx Designer as well...

 

I've used a very useful, similar tool in Excel called Evolver (http://www.palisade.com/evolver/). It would be awesome to see that in a coming version...

Granted, no single optimisation method rules them all, and evolutionary algorithms are probably the slowest, but I believe this would let us optimize the hyperparameters of our models and get much better results...
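
As an illustration of the kind of evolutionary search meant here, SciPy's differential evolution can tune a couple of hypothetical hyperparameters against a stand-in objective:

    # Sketch: evolutionary optimisation of two hypothetical hyperparameters.
    # Assumes SciPy; the objective is a stand-in for a model's validation error.
    from scipy.optimize import differential_evolution

    def validation_error(params):
        learning_rate, regularisation = params
        # stand-in objective; in practice, train and score a model here
        return (learning_rate - 0.05) ** 2 + (regularisation - 1.0) ** 2

    bounds = [(0.001, 0.5), (0.0, 10.0)]
    result = differential_evolution(validation_error, bounds, seed=0)
    print(result.x, result.fun)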

 


 

There is no way to search the S3 object list for a specific object, which can make it impossible to find an object in a list of more than 1,000. It would be great if there were some way of searching the object names, similar to the Salesforce Input connector (10.5), which allows a user to start typing the name of a table to find it.

It would be great if the "fields from connected tool" option pulled fresh data at runtime when used in the Gallery and pulling data from non-interface tools. The external source option doesn't have many settings (i.e. I can only point to one file), whereas the possibilities would be endless if I could use the full suite of tools to create a data set at runtime to pass to the list box/drop-down.


Could the workflow name be retained when browsing for a YXZP save location instead of blanking it out as soon as you change folders?

After upgrading to version 10.5, my workflows became unreadable for community members on earlier versions.  It would be nice if either earlier versions could open the workflow (with a warning) or I could readily export/downgrade the version header.  I understand that this would be problematic if the workflow contained elements unique to the new version, but it would be helpful to have.

 

There is a Notepad solution that I use, where I edit the XML directly.
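
For reference, that Notepad edit can be scripted; a minimal sketch, assuming the workflow's root XML element carries a yxmdVer attribute (file names and the target version are hypothetical, and the result is only safe if the workflow uses no tools introduced after that version):

    # Sketch: rewrite the version attribute of a .yxmd workflow (which is XML).
    # Assumes the root element exposes a "yxmdVer" attribute; file names and the
    # target version are hypothetical. Note that ElementTree rewrites the XML
    # and drops comments, so keep a backup of the original workflow.
    import xml.etree.ElementTree as ET

    tree = ET.parse("workflow.yxmd")
    root = tree.getroot()
    root.set("yxmdVer", "10.1")
    tree.write("workflow_downgraded.yxmd", xml_declaration=True, encoding="utf-8")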

 

Thanks for consideration.

 

Mark


It would help if there were an option to test the outcome of a formula during the build itself, rather than creating dummy workflows with dummy data to test the same thing.

 

For instance, there could be a dynamic window that generates input fields based on those referenced in the actual formula; one could provide test values there and click some kind of 'Test' button to check the output within the tool itself.

 

This would also be very handy when writing big/complex formulas involving regular expressions, so that a user can test her formula without having to switch screens to third-party on-the-fly testing tools, run the entire original workflow, or create test workflows.
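
As an illustration of the quick check meant here, the regular-expression case can already be exercised against a handful of sample values outside Designer; a minimal sketch with a hypothetical pattern and test strings:

    # Sketch: test a regular expression against sample values before putting it
    # into a Formula or RegEx tool. Pattern and test values are hypothetical.
    import re

    pattern = re.compile(r"^(\d{3})-(\d{2})-(\d{4})$")
    test_values = ["123-45-6789", "12-345-6789", "987-65-4321"]

    for value in test_values:
        match = pattern.match(value)
        print(value, "->", match.groups() if match else "no match")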

It would be good to have an option whereby clicking a particular data profiling output (at the cell level) shows the underlying records.

 

Maybe the configurator/designer could be given an option to select the technical/business keys of her choice, and when an end user (of the Data Profiling report output) clicks a data profiling result, he would be redirected to the keys selected earlier.

 

One option might be to generate the data profiling output as a zip folder containing the profiling results along with the key fields (hyperlinked files, etc.).

 

Since in that case actual data would be maintained/stored, it would be good to either encrypt or password-protect the zip file based on the relevant industry standards.

 

This could be provided as an optional feature under something like the tool's advanced properties, making use of industry best practices for report formatting and rendering.

 

The reason this should be optional is that there won't always be a need for detailed linking back to source-level records.

 

For example, if the need is only to highlight the data profiling outcome at a high level to a data analyst, this might not be useful.

On the other hand, if a data steward needs to actually go and correct the data based on the profiling results, linking the profiling results back to the source data would come in handy.
