
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

https://orc.apache.org/

 

Apache ORC is commonly used in the context of Hive, Presto, and AWS Athena. A common pattern for us is to use Alteryx to generate Apache Avro files, then convert them to ORC using Hive. For smaller data sizes, it would be convenient to be able to simply output data in the ORC format using Alteryx and skip the extra conversion step.

 

ORC supports a variety of storage options; sensible Alteryx defaults that users can override would be ideal. We typically use the SNAPPY compression codec.
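
As a point of reference, here is a minimal sketch of producing SNAPPY-compressed ORC outside of Hive using Python and pyarrow (the file name and data are illustrative, and pyarrow must be built with ORC support):

```python
import pyarrow as pa
import pyarrow.orc as orc  # requires pyarrow with ORC support

# Illustrative table; in practice this would be the workflow's output data.
table = pa.table({
    "id": [1, 2, 3],
    "name": ["a", "b", "c"],
})

# Write ORC with the SNAPPY codec rather than the writer's default.
orc.write_table(table, "output.orc", compression="snappy")
```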

90% of the time when dragging in an Input tool, I need to drag in a Select tool to pick only the fields I want. Best practice suggests this should be 100% of the time for efficiency. Embedding this functionality within the Input tool itself would save a step.

I need to be able to connect to the Salesforce CampaignInfluence object, which is only available through API v37 or later. Currently, the Salesforce connector's REST API is on v36, whereas the latest version is v41 (about a year's gap between v36 and v41). I was told there was no immediate plan to update the default connector to the latest version. It would be nice to have visibility into the objects available in the newer versions.
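
For context, here is a hedged sketch of what querying CampaignInfluence against a newer API version looks like via the Salesforce REST API (the instance URL, token, and field list are placeholders):

```python
import requests

# Placeholders: a real token and instance URL come from the OAuth login flow.
INSTANCE = "https://yourinstance.my.salesforce.com"
TOKEN = "00D...session_token"

# CampaignInfluence is only exposed from API v37.0 onward, so the version
# segment of the URL must be at least v37.0 (v41.0 is the latest mentioned).
resp = requests.get(
    f"{INSTANCE}/services/data/v41.0/query",
    params={"q": "SELECT Id, CampaignId FROM CampaignInfluence"},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print(resp.json()["records"])
```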

Please add XBRL - eXtensible Business Reporting Language (https://www.xbrl.org/ and http://www.xbrleurope.org/) - as an output file format.

 

XBRL is based on XML and is used in the financial world; for example, all public companies in the USA submit their financial reports to the Securities and Exchange Commission in XBRL format. (http://xbrl.sec.gov/)

In Japan, the Central Bank and the Financial Services Agency (FSA) collect financial data from banks and financial companies using the XBRL format.
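
To illustrate the shape of the format, here is a minimal sketch that emits a simplified XBRL-like instance document with Python's standard library (the context, identifier, and fact names are placeholders, not a conforming taxonomy):

```python
import xml.etree.ElementTree as ET

XBRLI = "http://www.xbrl.org/2003/instance"  # the real XBRL instance namespace
ET.register_namespace("xbrli", XBRLI)

root = ET.Element(f"{{{XBRLI}}}xbrl")

# A context ties each reported fact to an entity and a reporting period.
ctx = ET.SubElement(root, f"{{{XBRLI}}}context", id="FY2017")
entity = ET.SubElement(ctx, f"{{{XBRLI}}}entity")
ET.SubElement(entity, f"{{{XBRLI}}}identifier",
              scheme="http://www.sec.gov/CIK").text = "0000000000"
period = ET.SubElement(ctx, f"{{{XBRLI}}}period")
ET.SubElement(period, f"{{{XBRLI}}}instant").text = "2017-12-31"

# An illustrative fact; real facts come from a taxonomy such as US-GAAP.
ET.SubElement(root, "Assets", contextRef="FY2017").text = "1000000"

ET.ElementTree(root).write("report.xbrl", xml_declaration=True, encoding="utf-8")
```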

 

Thank you.

 

Regards,

Cristian

I have a PDF of 27 pages, and each page is identical: the headers, footers, and data are statically positioned on every page. It would be great if I could define the text to parse out on the first page and have that definition applied to all pages of the PDF. It would make the tool far more useful.
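
Outside of Alteryx, the same idea looks roughly like this in Python with pdfplumber, assuming one bounding box defined on the first page applies everywhere (the coordinates and file name are illustrative):

```python
import pdfplumber

# Region defined once by inspecting page 1: (x0, top, x1, bottom) in points.
DATA_BBOX = (36, 120, 576, 700)

with pdfplumber.open("report.pdf") as pdf:
    for page in pdf.pages:
        # Because every page is laid out identically, the same crop
        # extracts the data region from all 27 pages.
        print(page.crop(DATA_BBOX).extract_text())
```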

Only CSV (and JSON, etc.) is provided as an output format, but not .xlsx.

Please let us know when this can be added to Alteryx.

Thanks

 

I would really love a way to store environment-related config variables without requiring an external config file that has to be brought into every workflow.

 

Functionality should be similar to how the Alias manager works (although allowing aliasing of more than just DB connections)

 

The sort of things that would typically be included as such variables are:

  • contact email address for workflow failure/completion
  • other external log file location
  • environment name
  • environment specific messaging

If this could be set for different subscriptions or collections, it would be fantastic. If not, the Server level would suffice.
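
For comparison, the workaround this idea would replace looks roughly like the following, a minimal sketch assuming a shared JSON config file that every workflow has to read (the path and keys are illustrative):

```python
import json

# Illustrative shared file; each environment (dev/test/prod) has its own copy.
CONFIG_PATH = r"\\shared\alteryx\env_config.json"

with open(CONFIG_PATH) as f:
    cfg = json.load(f)

failure_email = cfg["failure_email"]    # contact for workflow failure/completion
log_location = cfg["log_location"]      # external log file location
environment = cfg["environment_name"]   # e.g. "prod", plus env-specific messaging
```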

There is "update:insert if new" option for the output data tool if using an ODBC connection to write to Redshift.

This option really needs to be added to the "amazon redshift bulk loader" method of the output data tool, and the write in db tool.

 

Without it, you are forced to use the "Delete and Append" output option, which is a pain because you then need to keep reinserting data you already have, slowing down the process.

 

Using the ODBC connection option of the Output Data tool to write to Redshift is not an option, as it is too slow: trying to write 200 MB of data, the workflow ran for 20 minutes without any data reaching the destination table, and I ended up just stopping the workflow.
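
Under the hood, an upsert on Redshift is typically done with a staging table; here is a minimal sketch of that pattern in Python with psycopg2 (the connection string, table, and key column are illustrative):

```python
import psycopg2

conn = psycopg2.connect(
    "host=cluster.example.redshift.amazonaws.com port=5439 "
    "dbname=dw user=etl password=..."
)
with conn, conn.cursor() as cur:
    # Load the new rows into a staging table, then merge:
    # delete key matches from the target and insert everything from staging.
    cur.execute("CREATE TEMP TABLE stage (LIKE target_table);")
    cur.execute(
        "COPY stage FROM 's3://bucket/new_rows.csv.gz' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/etl' CSV GZIP;"
    )
    cur.execute(
        "DELETE FROM target_table USING stage "
        "WHERE target_table.id = stage.id;"
    )
    cur.execute("INSERT INTO target_table SELECT * FROM stage;")
```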

It would be nice if Alteryx had the ability to run a Teradata stored procedure and/or macro, with the ability to accept input parameters. This ability appears to exist for MS SQL Server. It seems odd that I can issue a SQL statement to the database via a pre- or post-processing command on an input or output, but can't call a stored procedure or execute a macro. The only way we can seem to call a stored procedure is by creating a Teradata BTEQ script and using the Run Command tool to execute that script. It works, but it's a bit messy and doesn't quite fit the no-coding theme of Alteryx.
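
For reference, calling a parameterized Teradata stored procedure from Python with the teradatasql driver is a short script; a minimal sketch (the host, credentials, and procedure name are placeholders):

```python
import teradatasql

# Placeholder connection details.
with teradatasql.connect(host="tdhost", user="etl", password="...") as conn:
    cur = conn.cursor()
    # Positional ? markers bind to the procedure's IN parameters.
    cur.execute("CALL mydb.refresh_summary(?, ?)", ["2018-01-01", "PROD"])
```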

I periodically consume data from state governments that is available via an ESRI ArcGIS Server REST endpoint. Specifically, a FeatureServer class.

 

For example: http://staging.geodata.md.gov/appdata/rest/services/ChildCarePrograms/MD_ChildCareHomesAndCenters/Fe...

 

Currently, I have to import the data via ArcMap or ArcCatalog and then export it to a data type that Alteryx supports.

 

It would be nice to access this data directly from within Alteryx.
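
As a workaround sketch, FeatureServer layers expose a standard query endpoint that can be hit directly; a minimal Python example, assuming the service supports GeoJSON output (the URL and layer index are illustrative):

```python
import requests

# Illustrative FeatureServer layer URL; the trailing 0 is the layer index.
URL = ("https://example.gov/arcgis/rest/services/"
       "ChildCarePrograms/FeatureServer/0/query")

resp = requests.get(URL, params={
    "where": "1=1",     # no filter: return all features
    "outFields": "*",   # all attribute fields
    "f": "geojson",     # GeoJSON output, if the server supports it
})
resp.raise_for_status()
print(len(resp.json()["features"]))
```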

 

Thanks!

Hi, currently the S3 Upload tool only allows the file formats *.yxdb, *.json, *.csv, and *.avro.

 

In order to optimize loading to Redshift, it would be good to have a few more functions:

1. Ability to upload to S3 in *.gz format (see the sketch after this list)

e.g., reading in a file using the Input tool -> S3 Upload tool (which would have a gzip function with the following options: record limit, delimiter, UTF-8)

http://docs.aws.amazon.com/redshift/latest/dg/t_loading-gzip-compressed-data-files-from-S3.html

2. Ability to change the max record limit, delimiter, and UTF-8 format

3. Ability to change the object name to 'take file/table name from field', with the field containing the file name or part of the file name, similar to the Output tool
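
Point 1, sketched in Python with boto3: gzip a CSV locally and upload it to S3, so that Redshift's COPY can ingest it with the GZIP option (the file, bucket, and key names are illustrative):

```python
import gzip
import shutil
import boto3

# Compress the CSV produced by the workflow.
with open("extract.csv", "rb") as src, gzip.open("extract.csv.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# Upload the gzipped file; Redshift can then COPY it with the GZIP option.
s3 = boto3.client("s3")
s3.upload_file("extract.csv.gz", "my-bucket", "loads/extract.csv.gz")
```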

 

Adrian

 

In the new Intelligence Suite tools, it would be extremely useful to have the option to add n-grams (combinations of words/tokens) in the Topic Modeling Text Mining tool.

 

This is important in many NLP topic modeling scenarios.

It would provide more flexibility to build better NLP models.

 

For details on n-grams, see:

https://en.wikipedia.org/wiki/N-gram
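
To make the request concrete, here is a minimal sketch of adding bigrams to a bag-of-words representation with scikit-learn, which is the kind of option the tool could expose (the sample text is illustrative):

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the quick brown fox", "the lazy dog"]

# ngram_range=(1, 2) keeps single tokens and adds adjacent word pairs,
# so topics can capture phrases like "brown fox" instead of isolated words.
vec = CountVectorizer(ngram_range=(1, 2))
matrix = vec.fit_transform(docs)
print(vec.get_feature_names_out())
```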

 

 

There are a lot of SQL engines on top of Hadoop, such as:

  • Apache Drill / https://drill.apache.org/
    A schema-free, low-latency SQL query engine for Hadoop, NoSQL, and cloud storage.
    It's backed at the enterprise level by MapR.
  • Apache Kylin / http://kylin.apache.org/
    Apache Kylin™ is an open-source distributed analytics engine designed to provide a SQL interface and multi-dimensional analysis (OLAP) on Hadoop, supporting extremely large datasets; originally contributed by eBay Inc.
  • Apache Flink / https://flink.apache.org/
    Apache Flink is an open-source platform for distributed stream and batch data processing. Flink's core is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations over data streams. The creators of Flink provide professional services through their company, Data Artisans.
  • Facebook Presto / https://prestodb.io/
    Presto is an open-source distributed SQL query engine for running interactive analytic queries against data sources of all sizes, ranging from gigabytes to petabytes.
    It's backed at the enterprise level by Teradata - http://www.teradata.com/PRESTO/
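
As an example of what integration could look like, here is a minimal sketch of querying Presto from Python with the PyHive client (the host, catalog, and table names are placeholders):

```python
from pyhive import presto  # pip install 'pyhive[presto]'

# Placeholder coordinator host; 8080 is Presto's default HTTP port.
conn = presto.connect(host="presto-coordinator.example.com", port=8080,
                      catalog="hive", schema="default")
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM web_logs")  # illustrative table
print(cur.fetchone())
```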

 

My suggestion for Alteryx product managers is to build a tactical approach for these engines in 2016.

 

Regards,

Cristian.

 
Add native support for Python, C#, and Java.

In the previous tools The Information Lab had built for publishing to Tableau Server, the incremental TDE refresh option was available. I would like to see that included in the Publish to Tableau Server macro. We often just want to add the previous day's data to a year-to-date extract without re-running the full data set from our data warehouse. The full set takes a long time, whereas a daily incremental append would take only a couple of minutes.

I really like the Tool Container. I also really like to have neat and tidy modules. Sometimes, though, the two are in conflict, because the Tool Container automatically sizes itself, so I end up playing around with tool placement to get my containers the same size.

Could you please add the option to make the Tool Container a sizeable object (like the Explorer Box), or give width and height value boxes in the tool properties?

With a module that contains a lot of Tool Containers, it would be nice to have an option (similar to "Disable All Tools That Write Output" on the Runtime tab) to disable all Tool Containers, so that I can then go and pick the one or two I would like to enable.

I am using the Distance tool and would like to get the polyline that represents the drive distance. I need to output the drive polyline for multiple points and determine the percentage of overlap between routes and the number of times they overlap.
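
Once the drive polylines exist, the overlap math itself is straightforward; a minimal sketch with shapely, assuming two routes as line geometries (the coordinates are illustrative):

```python
from shapely.geometry import LineString

# Two illustrative drive routes that share a middle segment.
route_a = LineString([(0, 0), (5, 0), (10, 0)])
route_b = LineString([(2, 0), (8, 0), (8, 5)])

shared = route_a.intersection(route_b)        # geometry common to both routes
overlap_pct = shared.length / route_a.length  # share of route A overlapping B
print(f"{overlap_pct:.0%} of route A overlaps route B")
```
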
On the "Multi-Field Formula" tool, the default is to "Copy Output Fields and Add...". I think the default should NOT copy: I use this tool to trim blank spaces and change the case of text fields, and I often forget to uncheck the option, ending up with all these additional fields at the end.

With regard to the Tool Container, I think the default margin should be small. I build huge workflows and put each section in a Tool Container, so I have to go in and change each one to small margins to condense the workspace. Perhaps in the user settings, under Document, there could be a default margin option, just as there is a container color option.