Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

Hi all,

 

When testing a macro that uses interface tools, the value passed through in a normal run (hitting the big play button) is 0 or blank, irrespective of the default value configured on the component.

e.g.

  • Put an up-down component on the canvas with a default value of 200
  • Hook it up to a formula tool
  • Output the value
  • The value that is output is 0, not 200

 

Please could you change this so that, in testing mode, the value passed through the interface tools is the configured default value?

 

Thank you

Sean 

It's not uncommon to start out with an Input Data tool, and then two-thirds of the way through you realise that you need to change it to a Dynamic Input.

Could we add the capability to right-click on an Input Data tool and convert it to a Dynamic Input (just as you can convert a Text Input to a Macro Input)?

Originally posted here: https://community.alteryx.com/t5/Data-Sources/Input-Data-Control-Log-of-actual-workload-generated/m-...

 

Hi all,

 

Many people don't have access to SQL profiling tools to see what is ACTUALLY being run on the server, so they can't index their tables to optimize for the real workload. It would therefore be very helpful to understand the full set of transactions that are executed against a SQL Server. Tableau does this with its performance mode, where you can see which queries were run and how long each one took.

 

This would allow us to pump these queries through the SQL Server Database Engine Tuning Advisor, which takes a batch of SQL workload and optimizes the database for that workload through strategies such as indexes, partitioning, statistics, etc.
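
For illustration, the Tuning Advisor consumes a workload file of captured statements and emits DDL recommendations. A minimal sketch, assuming a hypothetical Orders table (none of these names come from an actual trace):

    -- workload.sql: a captured statement fed to the Tuning Advisor
    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= '2016-01-01';

    -- The kind of recommendation the advisor emits for that workload
    CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
        ON dbo.Orders (OrderDate)
        INCLUDE (OrderID, CustomerID);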

 

It may take a while to make this available in the UI; in the shorter term, is there any way to make it available in a log file somewhere that we can unpick?

 

Many thanks

Sean

Originally posted here: https://community.alteryx.com/t5/Data-Sources/Input-Data-Tool-Can-we-control-use-of-Cursors/m-p/5871...

 

Hi there,

 

I've profiled a simple query using SQL Server Profiler (query: Select * from northwind.dbo.orders; row limit: 107; read uncommitted: true) and, interestingly, Alteryx opens a cursor if you connect via ODBC or SQL Native Client, but not via OLE DB; the full queries and profile details are on the discussion thread above.
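
To make the cursor behaviour concrete, here is a hedged sketch of what such a trace typically shows; the parameter values are illustrative, not copied from the thread:

    -- Via ODBC or SQL Native Client: the statement arrives wrapped in a server-side cursor
    DECLARE @cur INT;
    EXEC sp_cursoropen @cur OUTPUT, N'SELECT * FROM northwind.dbo.orders';
    EXEC sp_cursorfetch @cur, 2, 0, 107;   -- fetch type 2 = NEXT, 107 rows
    EXEC sp_cursorclose @cur;

    -- Via OLE DB: the statement executes directly, as written
    SELECT * FROM northwind.dbo.orders;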

 

However, in some circumstances a cursor is not usable - e.g. https://community.alteryx.com/t5/Data-Sources/Error-SQL-Execute-Cursors-Not-supported-on-Clustered-C... - because SQL Server doesn't allow cursors on columnstore-indexed tables and columns.

 

Is there any way (even if we need to adjust the XML settings manually) to ask Alteryx not to create the cursor, and instead execute the query directly on the server as written?

 

Thank you

Sean

(originally raised as discussion : https://community.alteryx.com/t5/Data-Sources/Input-Data-Tool-Record-Limit-control/td-p/58718)

 

Hi all,

 

When using a record limit on a database query, the actual query executed on the server depends very much on the connection type (SQL Native Client, OLE DB, or ODBC). However, for all three of these, it seems that the record limit is enforced on the client side, not on the server. What I mean by this is that when I take the exact queries that Alteryx runs on the server (by looking at a SQL Profiler trace on the server) and run them in a query window, I can see that the row limit is not applied in SQL, but in Alteryx.

(To test this, I ran several queries with and without the record limit and profiled them using SQL Profiler; the profile trace was identical either way.)

 

Aside from putting "Select top(100) from..." in all the queries that we create, or using In-DB tools for every simple query, could we instead have an option on the regular Input Data tool to push the row limit down to the server, so that we can take advantage of the server's ability to optimize?
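
To illustrate the difference with the Northwind example from the earlier post, a record limit of 100 enforced on the client versus pushed to the server would look like this:

    -- What the trace shows today: the full result set is requested,
    -- and Alteryx discards rows beyond the limit on the client
    SELECT * FROM northwind.dbo.orders;

    -- What a server-side record limit would send instead
    SELECT TOP (100) * FROM northwind.dbo.orders;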

 

Thank you

Sean

 

I really like the Directory tool. It's very handy, especially in combination with the Dynamic Input.

But... I'd like to see other object attributes (for files and folders), like object-level security (who has read, write, full control, etc.), the last user to access the object, and the user that created it.


 

That has bugged me for years.

Hi, I've noticed that there is no url-decode function in Alteryx. I guess I'm the first one to need it, so I'm posting the idea here. I imagine it would not be a big deal to add, given that there's already a url-encode function.

 

Thanks. 

Idea: Support PL/SQL blocks. DECLARE/BEGIN/END syntax would be very helpful in the Output Data tool.

This should allow either running multi-step SQL statements or calling stored procedures.

 

Rationale: Sometimes you need to run extra code or stored procedures after the data has been processed. It is also sometimes much easier to re-use legacy code than to try to recreate it as a complex Alteryx macro with a bunch of R code.

The tool already allows calling procedures in SQL Server, but the lack of support for this in Oracle is a big challenge for us.
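
For clarity, a minimal sketch of the kind of anonymous block this would enable, assuming a hypothetical staging table and legacy procedure:

    DECLARE
        v_rows NUMBER;
    BEGIN
        SELECT COUNT(*) INTO v_rows FROM staging_orders;  -- hypothetical staging table
        refresh_reporting(v_rows);                        -- hypothetical legacy procedure
        COMMIT;
    END;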

Alteryx needs to package SAP, JD Edwards, ADNIS, and BPCS connectors as part of its native offering. This would raise the value proposition of Alteryx Designer from data blending to full self-service ETL. Most large organizations have data-extraction challenges, and they would see business-user empowerment and big productivity gains if native connectors and data extraction across the major ERPs were enabled. Some of those productivity gains could in turn be used to make a business case for Alteryx Designer licenses.

Hi all.

 

We're heavily using Salesforce, and one of the reasons we've purchased so many licenses was the support for SFDC.

However, our implementation is set up based on a Custom Domain.

Unfortunately at this point our connector does not support Custom Domains. 

Please kindly review this and consider adding an SFDC authorization option that supports Custom Domains, as Excel, Tableau, and others do.


I keep making the same changes to the Table tool rules, using the same formulas, when I build new reports. For example: Row Rule 1: font bold, background color green; Row Rule 2: font bold, background color blue; Row Rule 3: font bold, background color yellow. Each is based on a formula: IsEmpty([Column Name]). I do this over and over and over again; the only thing that changes is the column name. It would be nice to have the Row Style Rules saved so they can be browsed to or inserted.

 

Still waiting for the Default Table Settings to include "CENTER" in the header tab.

When the Append tool detects no records in the source, it throws a warning. I would like the ability to suppress this warning. In general, all tools should have similar warning/error controls.

I find that doing a simple concatenation of multiple fields takes multiple tools where one would seem to suffice. For example, if I had an address parsed into multiple fields (House Number, Street, Apt, City, State, Zip Code, Country), I'd have two options for combining these into a single address field: a Formula tool that manually adds each field with +' '+ in between, which is a lot of typing and selecting; or transposing the data and then Summarizing (concatenating) the Value field with a space delimiter between each record.

It seems to me that a simpler solution would be a Concatenate tool that looks and feels much like the Select tool, allowing you to choose a name for your concatenated string, input a delimiter, select the fields to concatenate, and re-order them within the tool. Bonus if it automatically converted everything to string fields (or at least allowed you to designate whether to concatenate your fields as numbers or strings, and translated accordingly). Extra bonus if you also had the option to put a different delimiter after every field...

This task isn't super complex with the given tools, but a dedicated tool does seem like a fairly straightforward addition that would likely save a whole bunch of folks at least a few minutes here and there.


I would like to see a way to partially execute a workflow (specifically for an App), for the purpose of allowing the user to make selections based on a dynamic data flow.

 

Ex:

1. Database Selection Interface

Click Next

2. Select from available columns to pass through to the output file.

Click Next

3. Pick from selected fields which fields should be pivoted.

Output the file and complete the run

 

This was a simple example to explain a case, but the most common use I could see is for APIs. 

 

The Dynamic Input tool allows some fairly complex transformations of the underlying query, but it's not always easy to debug when it doesn't behave as expected.

Could we add the ability to inspect the resulting query (just as you can for In-DB queries using the Dynamic Output tool)?

 

It is currently possible to see this in the results/messages pane, but I can't find a way to get it into a data stream to persist or manipulate it.

With complex ETL jobs, we often have a very similar ETL process that needs to be run for multiple different tables (with different surrogate and natural key column IDs).

While you can do a bulk replace by opening the workflow in Notepad (in its XML format), it would be better if the user could do a find/replace on all instances of a table name or column ID from the Designer UI (a deep find/replace across all the tools).

 

This could also be used when a field is renamed at the beginning of the flow, so that the rest of the flow can be updated without resorting to trial and error.

 

 

In the Browse tool, and in the cell viewer attached to it, the standard control keys (Ctrl+A for select all; Ctrl+C for copy) do not behave as they normally would; to select all in the cell viewer, you have to right-click and choose "Select All".

 

Please could you include these capabilities (Ctrl+A and Ctrl+C) in the basic Browse tool?

 

Many thanks

Sean

Not sure if any of you have a similar issue, but we often end up bringing in some data (either from a website or a table) to profile it, and then, an hour in, you realise that the data will probably take six weeks to ingest completely, even though enough rows have already come in to give a useful sense of it.

 

Right now, the only options are to stop (in which case the profiling tools at the end of the flow all give you nothing) and restart with a row limiter, or to let it run to completion. The tragedy of the first option is that you've already invested an hour or two in the data extract, but you cannot make use of it.

 

It feels like there's a third option: an option to "stop bringing in new data, but finish processing the data you currently have", which terminates any Input or Download tools in their current state and lets the remainder of the data flush through the full workflow.

 

Hopefully I'm not alone in this need 🙂
