Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

I am currently using Alteryx to generate PDF reports and noticed there is no option to have multiple row headers. In my line of work I can't skip this, as the end users insist on having the reports look the way they always have.

I definitely think this should be available, as many of us like to replicate canned reports that otherwise live in Excel and hence see the need for such an option.

 

The following image gives an example of a multi-row header:

 

[Attached image: multi row.JPG]

 

The ability to merge certain column headers to create the effect above should also be available.

Please add the ability to specify indexes when creating a table with the Write Data In-DB tool.

 

When running Teradata SQL using the Connect In-DB tool, I need to create a table on the database using the Write Data In-DB tool and do numerous updates before bringing the data to the PC. Currently there is no way to create a unique primary index (or any other index) when the Write Data In-DB tool creates a table. This causes Teradata to consume huge amounts of wasted space. Today I created a table with 160 columns and 50K rows; it consumed over 20 GB, of which 19.7 GB was wasted space. In Teradata, the way to control wasted space (skew) is by properly defining the index, which can't be done today.
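
For illustration, here is a minimal sketch of the kind of Teradata DDL the tool would need to emit if it accepted an index specification, issued here through pyodbc; the DSN, table, and column names are all hypothetical:

```python
# Sketch of the Teradata DDL the Write Data In-DB tool could emit if it
# accepted an index specification. DSN, table, and column names are
# all hypothetical.
import pyodbc

ddl = """
CREATE TABLE analytics.customer_stage (
    customer_id INTEGER NOT NULL,
    region_code CHAR(2),
    balance     DECIMAL(18, 2)
)
UNIQUE PRIMARY INDEX (customer_id)
"""

conn = pyodbc.connect("DSN=Teradata_DSN")  # hypothetical DSN
conn.cursor().execute(ddl)
conn.commit()
```

In Teradata the primary index determines how rows are distributed across AMPs, which is why a well-chosen index is the lever for controlling skew.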

Pushing data to Salesforce from Oracle would be much easier if we were able to perform an UPSERT (update if existing, insert if not existing) on any unique ID field in Salesforce, instead of having to filter for the records that have or don't have an ID and run an Update or Insert based on that filter.
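
For reference, the Salesforce REST API already supports this as a single call: a PATCH against an external-ID field updates the matching record if it exists and inserts it otherwise. A minimal sketch, where the instance URL, token, object, and field names are all hypothetical:

```python
# Sketch of a Salesforce REST upsert: a PATCH against an external-ID field
# updates the matching record if it exists and inserts it otherwise.
# Instance URL, token, object, and field names are all hypothetical.
import requests

INSTANCE = "https://example.my.salesforce.com"  # hypothetical instance
TOKEN = "00D...hypothetical_session_token"

resp = requests.patch(
    f"{INSTANCE}/services/data/v52.0/sobjects/Account/Oracle_Id__c/ORA-12345",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"Name": "Acme Corp", "Phone": "555-0100"},
)
resp.raise_for_status()  # 201 = inserted, 204 = updated
```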

Would really love there to be a way to store environment-related config variables without requiring an external config file that you need to bring into every workflow.

 

Functionality should be similar to how the Alias manager works (although allowing aliasing of more than just DB connections)

 

The sorts of things that would typically be stored in such variables include:

  • contact email address for workflow failure/completion
  • external log file location
  • environment name
  • environment-specific messaging

If this could be set for different subscriptions or collections it would be fantastic. If not, at the server level would suffice.
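
For context, the external-file workaround today looks roughly like the sketch below, repeated at the start of every workflow; the file path, keys, and values are all hypothetical:

```python
# Sketch of today's external-config-file workaround: one JSON file per
# environment, loaded at the start of every workflow. The path, keys,
# and values are all hypothetical.
import json

with open(r"\\shared\alteryx\environment.json") as f:  # hypothetical path
    env = json.load(f)

failure_contact = env["contact_email"]       # e.g. "ops@example.com"
log_location    = env["log_file_location"]   # e.g. r"\\shared\logs"
environment     = env["environment_name"]    # e.g. "UAT"
```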

It would be great if we could set the default size of the window presented to the user upon running an Analytic App. Better yet, the option to have it dynamically sized (auto-sized to fit the number of input fields required).

It would be great if there were an option in the configuration of the Output Tool to create the output directory if it doesn't already exist. Maybe also an option to append instead of overwrite, for all file types?
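
The directory-creation half of this is essentially what Python's os.makedirs does; a minimal sketch of the requested behavior, with a hypothetical output path:

```python
# Sketch of the requested "create the directory if missing" behavior.
# The output path is hypothetical.
import os

out_path = r"C:\Reports\2016\Q1\summary.csv"
os.makedirs(os.path.dirname(out_path), exist_ok=True)  # no-op if it exists
```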

When you use the Visual Query Builder, you can drag and drop tables to arrange them clearly (to show the star or snowflake schema, for instance). 

 

When you close the Visual Query Builder and reopen it, the tables are all left-aligned in a long column, with the joins overlapping each other. Since many of our tables are very wide (i.e., with many columns), this makes it cumbersome to locate the correct table and field.

 

I would like the manual positioning of the tables to be saved in the Visual Query Builder, in order to:

  1. Make the logical arrangement clearer to the developer and later users
  2. Make it easier to locate tables/fields without scrolling downward

This is a feature that our users were very accustomed to in Hyperion Intelligence, our legacy BI tool, which works similarly to the Visual Query Builder (shown below).

 

[Attached image: Hyperion Intelligence Model]

It would be nice to have the ability to automatically generate .twbx files from a master Tableau workbook so that end users can open the file in Tableau Reader. For example, if I were creating separate CSV files with my data for each state, I would similarly want to create each one as a ready-to-consume .twbx file for Tableau Reader.
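
Since a .twbx is just a zip archive bundling the .twb with its data files, the packaging step itself is easy to sketch; the file names below are hypothetical:

```python
# Sketch: package a workbook and one state's extract into a .twbx, which is
# simply a zip archive holding the .twb plus its data. File names are
# hypothetical.
import zipfile

state = "MD"
with zipfile.ZipFile(f"report_{state}.twbx", "w", zipfile.ZIP_DEFLATED) as twbx:
    twbx.write("master_report.twb", "master_report.twb")
    twbx.write(f"data/{state}.csv", f"Data/{state}.csv")
```

The step Alteryx would need to automate is repointing the workbook's data connection at each state's file before zipping.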

I understand the need for "exclusive rights" when using an Input tool. Unfortunately, due to the nature of some corporate data, getting write access to a file is not always possible. I would like the ability to configure an Input tool to open a file in "Read Only" mode while producing a warning that the file was processed in that mode and may not contain the latest version of the data. I envision this as a checkbox option in the tool configuration panel.

In the Report Map tool, the 'Background Color' menu is locked, and the color appears to be set to R=253, G=254, B=255, which is basically white.

 

However, when we use our TomTom basemap, we see that the background is actually blue, despite what's listed in the Background Color window. (This goes beyond the 'Ocean' layer and appears to cover all space 'under' the continents and oceans.) Since we often print large maps of the East Coast, this tends to use a lot of blue ink. I've attached a sample image to illustrate this.

 

My workaround to date has been to edit the underlying TeleAtlas text file and change the default background (117 157 181) to white (255 255 255). Unfortunately, we lose these changes with each data update.

 

Could Alteryx unlock the Background Color menu, and have it affect the 'base' layer, underneath oceans and continents in TomTom maps?  Not sure how it might affect aerial imagery.

I know there's the Download tool, but have a look at the topic linked below; the easiest solution so far is to use an external API with import.io.

 

I'm coming from the Excel world, where you input a URL in Power Query; it scans the page, identifies the tables in it, asks you which one you want to retrieve, and gets it for you. This takes a copy-paste and two clicks.
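
For comparison, the closest Python equivalent of that Power Query behavior is roughly the following; the URL is just an example, and pandas.read_html needs an HTML parser such as lxml installed:

```python
# Sketch of the Power Query behavior: fetch a page, list the tables it
# contains, and pick one. The URL is just an example.
import pandas as pd

tables = pd.read_html("https://en.wikipedia.org/wiki/List_of_sovereign_states")
print(f"Found {len(tables)} tables on the page")
df = tables[0]  # choose the one you want
```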

Wouldn't it be great to have something similar in Alteryx?

 

Now if it also supported authentication you'd be my heroes 😉

 

http://community.alteryx.com/t5/Data-Sources/Extract-a-table-from-Wikipedia/m-p/14531/highlight/fals...

 

Thanks

 

Tibo

I periodically consume data from state governments that is available via an ESRI ArcGIS Server REST endpoint. Specifically, a FeatureServer class.

 

For example: http://staging.geodata.md.gov/appdata/rest/services/ChildCarePrograms/MD_ChildCareHomesAndCenters/Fe...

 

Currently, I have to import the data via ArcMap or ArcCatalog and then export it to a datatype that Alteryx supports.

 

It would be nice to access this data directly from within Alteryx.
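
A FeatureServer layer already answers plain HTTP queries, so the access pattern Alteryx would need is small. A sketch, with the layer URL hypothetical but patterned on the service above:

```python
# Sketch: query an ArcGIS FeatureServer layer over plain REST. The layer URL
# is hypothetical, patterned on the service above.
import requests

layer_url = ("http://staging.geodata.md.gov/appdata/rest/services/"
             "ChildCarePrograms/MD_ChildCareHomesAndCenters/FeatureServer/0")
resp = requests.get(f"{layer_url}/query", params={
    "where": "1=1",    # all records
    "outFields": "*",  # all attributes
    "f": "json",       # Esri JSON response
})
features = resp.json()["features"]  # each has "attributes" and "geometry"
```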

 

Thanks!

Getting simple information from a workflow log, such as the name, run start date/time, and run end date/time, is far more complex than it should be. Ideally the log would have, as separate line items distinctly labelled, the workflow path and name, the start date/time, the end date/time, and potentially the run time (to save having to do a calculation). An overall module status would also be of use: if there was an Error in the run, the overall status is Error; if there was a Warning, the overall status is Warning; otherwise Success.

 

Parsing out the workflow name and start date/time is challenging enough, but then trying to parse out the run time, convert it to a duration, and add it to the start date/time to get the end date/time makes retrieving basic monitoring information far more complex than it should be.
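
Even the date arithmetic on its own is more work than it should be. A sketch of the end-time calculation, assuming the start time and elapsed seconds have already been parsed out of the log; the values are hypothetical:

```python
# Sketch of the end-time calculation, assuming the start time and elapsed
# seconds have already been parsed out of the log. Values are hypothetical.
from datetime import datetime, timedelta

start = datetime.strptime("2016-03-14 09:30:00", "%Y-%m-%d %H:%M:%S")
end = start + timedelta(seconds=154.3)  # parsed run time
print(end)  # 2016-03-14 09:32:34.300000
```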

In a future release of SQL Server, the data types text, ntext, and image will be removed; they are already deprecated. text is already a bad data type because you cannot use it as a "normal" character string: you cannot even test a text column for equality against something else in T-SQL.

 

As far as I know, Alteryx defaults to text (my source is a PostgreSQL database) when creating the table in SQL Server; the data type in Alteryx is V_String. Instead of text or ntext, it would be so much better to use varchar(MAX) or nvarchar(MAX) when creating the table. Not only for compatibility and later use in T-SQL (if any), but it is faster as well: data from a varchar(MAX) column is stored in the same page as the record, as long as it fits.
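
In other words, when creating the table Alteryx would ideally emit DDL like the second statement below rather than the first; table and column names are hypothetical:

```python
# Sketch: the DDL Alteryx emits today for a V_String column versus what it
# ideally would emit. Table and column names are hypothetical.
current  = "CREATE TABLE dbo.customers (notes text)"          # deprecated type
proposed = "CREATE TABLE dbo.customers (notes varchar(MAX))"  # in-row when it fits

# varchar(MAX) allows ordinary string operations that text rejects, e.g.:
works_on_varchar_max = "SELECT * FROM dbo.customers WHERE notes = 'overdue'"
```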

When converting data types while In-DB, it would be really helpful if I could change the data type with the Select In-DB tool in a similar manner to the Select tool. Currently, we have to use the Formula In-DB tool to write a CAST statement.
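
Today that means hand-writing expressions like these in the Formula In-DB tool, one per column; the column names and target types are hypothetical:

```python
# Sketch: generating the CAST expressions we currently hand-write in the
# Formula In-DB tool, one per column. Names and target types are hypothetical.
conversions = {"order_total": "DECIMAL(18, 2)", "order_date": "DATE"}
for col, target in conversions.items():
    print(f'CAST("{col}" AS {target})')
```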

Hi, I'm new to Alteryx; we've had it for just about a month. We started publishing our workflows to Tableau and it's working great.

One issue I foresee:

User credentials to the Tableau server are updated occasionally. When this occurs, I will have to update the credentials manually in each workflow. 

The number of workflows we are publishing is growing. Is there a way to automate this process? 

There are a lot of SQL engines on top of Hadoop, like:

  • Apache Drill / https://drill.apache.org/
    A schema-free, low-latency SQL query engine for Hadoop, NoSQL, and cloud storage.
    It's backed at the enterprise level by MapR.
  • Apache Kylin / http://kylin.apache.org/
    Apache Kylin™ is an open source distributed analytics engine designed to provide a SQL interface and multi-dimensional analysis (OLAP) on Hadoop, supporting extremely large datasets; originally contributed by eBay Inc.
  • Apache Flink / https://flink.apache.org/
    Apache Flink is an open source platform for distributed stream and batch data processing. Flink's core is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations over data streams. The creators of Flink provide professional services through their company, Data Artisans.
  • Facebook Presto / https://prestodb.io/
    Presto is an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes, ranging from gigabytes to petabytes.
    It's backed at the enterprise level by Teradata - http://www.teradata.com/PRESTO/

 

My suggestion for Alteryx product managers is to build a tactical approach for these engines in 2016.

 

Regards,

Cristian.

Please add the Parquet data format (https://parquet.apache.org/) as a read/write option for Alteryx.

 

Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem, regardless of the choice of data processing framework, data model or programming language.
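
Until native support exists, here is a minimal sketch of the read/write round trip via pandas (with pyarrow installed as the Parquet engine); the file name is hypothetical:

```python
# Sketch: Parquet read/write round trip with pandas (using pyarrow as the
# Parquet engine). The file name is hypothetical.
import pandas as pd

df = pd.DataFrame({"state": ["MD", "VA"], "programs": [120, 95]})
df.to_parquet("programs.parquet")  # columnar, compressed on disk
round_trip = pd.read_parquet("programs.parquet")
```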

 

Thank you.

 

Regards,

Cristian.

There is "update:insert if new" option for the output data tool if using an ODBC connection to write to Redshift.

This option really needs to be added to the "amazon redshift bulk loader" method of the output data tool, and the write in db tool.

 

Without it, you are forced to use the "Delete and append" output option, which is a pain because you then need to keep reinserting data you already have, slowing down the process.

 

Using the ODBC connection option of the Output Data tool to write to Redshift is not an option, as it is too slow: trying to write 200 MB of data, the workflow runs for 20 minutes without any data reaching the destination table, and I end up just stopping it.
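
For reference, the merge pattern Redshift documents for this, which an upsert-capable bulk loader would presumably wrap, is a staged delete-and-insert in a single transaction; the connection details and table names below are hypothetical:

```python
# Sketch: the standard Redshift merge pattern an upsert option would wrap:
# bulk-load a staging table, then delete-and-insert in one transaction.
# Connection details and table names are hypothetical.
import psycopg2

conn = psycopg2.connect(
    "host=example.redshift.amazonaws.com port=5439 "
    "dbname=warehouse user=loader password=..."  # hypothetical
)
with conn, conn.cursor() as cur:  # commits on success, rolls back on error
    cur.execute("""
        DELETE FROM sales USING sales_staging
        WHERE sales.order_id = sales_staging.order_id
    """)
    cur.execute("INSERT INTO sales SELECT * FROM sales_staging")
```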

It would be good to have the ability to select which column to use as the primary key when using the "create new table" output option of the Output Data tool.

 

When using the "update: insert if new" output option, you receive the error "Primary Key required for Update" if table does not have primary key.

 

The workaround is to manually create the table with a primary key constraint.
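
For instance, something along these lines before loading; all names are hypothetical:

```python
# Sketch of the manual workaround: create the table with its primary key up
# front so "update: insert if new" can work. All names are hypothetical.
import psycopg2

conn = psycopg2.connect("host=example.redshift.amazonaws.com port=5439 "
                        "dbname=warehouse user=loader password=...")
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE sales (
            order_id   INTEGER NOT NULL PRIMARY KEY,
            amount     DECIMAL(12, 2),
            order_date DATE
        )
    """)
```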
