
Alteryx Designer Ideas

Share your Designer product ideas - we're listening!

1. Review our submission guidelines & status definitions before getting started.
2. Search the community for a solution or existing idea before posting.
3. Vote for an idea you support by clicking the like in its top left corner.
4. Submit a new idea to suggest a product enhancement or new feature.


0 Likes

Hi,

 

The Adobe Analytics API token is currently set to expire 30 days after the call has been configured. When the token expires, I have to re-authenticate AND reconfigure the API call.

 

The API call shouldn't expire when the token expires. Upon re-authenticating, the call should persist as it was originally configured.

 

Thanks,

Mihail

 

 

0 Likes

Understanding that for some tools / data sets this feature would favor larger displays with more screen real estate, I think it would be helpful to be able to view both the input(s) and output(s) data in the results pane simultaneously via dedicated sub-results panes for each input / output. 

 

To try to put some form to the picture I had in my head, I patched together a few screenshots in SnagIt as a rough idea of what something like this could look like to the end user, using the Join tool as an example (though I think it would be cool for all tools, if that is practical). Not sure how well it will show up in the embedded photo below, so I have also attached it - for reference, the screenshot was taken with Alteryx in full-screen mode on a 3440x1440 monitor.

 

Alteryx Feature Request - View all inputs & outputs in Results pane simultaneously v2.jpg

 

Not the prettiest - but hopefully you get the idea. A few general features I thought might be helpful include:

 

  • Clean / streamlined GUI elements for each "sub-pane"
  • Ability to toggle on / off any of the sub-panes
  • Ability to re-size any of the "sub-panes" as desired
  • Ability to customize how many results are included in the results view of each input / output (i.e., either by row count or size)

 

I'm sure there are several more neat features a view like this could support, but these are the ones I could think of offhand. To be clear, I wouldn't want this to replace the ability to click any individual tool input / output to view only that data if desired; rather, I imagined this "view" could be optionally toggled on and off by double-clicking on the body of a tool (or something like that). Not sure whether this is feasible or would just be too much for certain large tools / datasets, but I think it would be the bee's knees - am I the only one?

 

Let me know if anyone has any thoughts or feedback to share!

 

Josh

0 Likes

Hello Community,

 

 

I was wondering if there is a tool that could de-duplicate records after serializing (or after using the Transpose tool), with a given priority for each field based on one of the keys, i.e.:

 

ID | Origin | Field Name | Value
1  | A      | NAME       | JACK
1  | B      | NAME       | PETER
1  | B      | ZIP CODE   | 15024
1  | C      | ZIP CODE   | 15024
1  | D      | TYPE       | MID
1  | H      | TYPE       | PKL

 

Assuming the Origin priority for each field name is:

NAME -> [ A, B ]

ZIP CODE -> [ C, B ]

TYPE -> [ H, D ]

The expected outcome for ID 1 should be -> JACK, 15024, PKL

Record discarded -> PETER, 15024, MID

In this case I'm using ID and Origin as keys in the Transpose Tool.

 

I just want to make sure there is no other route than the Python Tool.
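
For reference, here is the priority logic as a minimal pandas sketch of what I would do inside the Python Tool (column names taken from the example above):

import pandas as pd

# Transposed input, matching the table above
df = pd.DataFrame({
    "ID":         [1, 1, 1, 1, 1, 1],
    "Origin":     ["A", "B", "B", "C", "D", "H"],
    "Field Name": ["NAME", "NAME", "ZIP CODE", "ZIP CODE", "TYPE", "TYPE"],
    "Value":      ["JACK", "PETER", "15024", "15024", "MID", "PKL"],
})

# Origin priority per field name; earlier in the list wins
priority = {"NAME": ["A", "B"], "ZIP CODE": ["C", "B"], "TYPE": ["H", "D"]}

def rank(row):
    order = priority.get(row["Field Name"], [])
    # Unlisted origins rank last within their field
    return order.index(row["Origin"]) if row["Origin"] in order else len(order)

df["rank"] = df.apply(rank, axis=1)

# Keep the best-ranked record per (ID, Field Name), then pivot back to wide
best = df.sort_values("rank", kind="stable").drop_duplicates(subset=["ID", "Field Name"])
wide = best.pivot(index="ID", columns="Field Name", values="Value")
print(wide)  # ID 1 -> NAME=JACK, TYPE=PKL, ZIP CODE=15024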

 

Thank you

 

Luis

0 Likes

Hi,

I did do some searching on this matter but couldn't find a solution to the issue I was having. I made an Analytic App in which the user can select columns from a spreadsheet with 140+ columns. The app looks at the available columns and dynamically updates the List Box every time it is run. I wanted users to use the built-in Save Selection so they don't have to check each column they want every time they run it. However, I seem to have found an issue with the Save Selection option when a header in the source contains a comma, e.g. "Surname, First Name".

The saved YXWV file stores the selection in a comma-delimited way, but without quotes around the headers. So, as you can see in my example below, when you try to load this again Alteryx appears unable to parse the values: it thinks "Surname" and "First Name" are separate values/fields rather than "Surname, First Name", and it doesn't provide an error when it fails to load the selection.

Last Name=False,First Name=False,Middle Name=False,Surname, First Name=False,

So perhaps Save Selection, when writing the file, could put string quotes around the values to deal with special characters in the List Box selections. I have made a workaround by removing special characters from the headers in my source data, but it's not really ideal.
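
For illustration, here is the ambiguity and the quoting fix as a minimal Python sketch (the actual YXWV format may differ; this just shows the parsing problem):

import csv, io

headers = {"Last Name": False, "First Name": False, "Surname, First Name": False}

# Unquoted join is ambiguous when a header itself contains a comma:
print(",".join(f"{h}={v}" for h, v in headers.items()))
# Last Name=False,First Name=False,Surname, First Name=False

# Quoting each entry lets a reader reconstruct the original values:
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_ALL).writerow(f"{h}={v}" for h, v in headers.items())
print(buf.getvalue())
# "Last Name=False","First Name=False","Surname, First Name=False"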
Thanks,
Mark

0 Likes

When using the ConsumerView macro from the Join tool palette for matching Experian demographic data, the match yield is higher than that of the Business Match macro. It would be great if telephone number could be added as a matching key to the Business Match (US) tool; the yield might increase and provide more value from the firmographic data sets than matching on only the D&B business names and addresses, as it does today.

 

0 Likes

When Alteryx 2018.4.3.54046 reads an xlsx, it can "introduce" a float where there was none before. This only happens in a specific situation, and it's not clear WHY it is doing this. The actual numbers are exact to 2 and 3 decimal places in Excel. It isn't a matter of formatting or display; that's what they are. An example number: 1.43.

 

If there is a string value in the same column, then Alteryx will read the entire column as a string. So far, so good. But when it does this, some of the numbers now look like floats: 1.43 comes through as 1.42999999998 in Alteryx. Not all of the numbers get this weird float treatment.

 

I cannot control the source files and they may have strings in various cells of the "numeric" fields. I have to read everything as strings. I want to know why this happens and what to do about it. Thank you.
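
For context, this looks like standard binary floating-point behavior rather than something Alteryx invents: xlsx stores numbers as IEEE 754 doubles, and 1.43 has no exact binary representation, so any string conversion that prints more digits than the shortest round-trip form will expose the long tail. A quick Python illustration (outside Alteryx):

from decimal import Decimal

print(repr(1.43))     # '1.43' -- the shortest string that round-trips
print(Decimal(1.43))  # 1.4299999999999999378... -- the value actually stored
# A string conversion that emits the raw double at higher precision will
# surface tails like 1.42999999998 instead of 1.43.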

0 Likes

 

I searched Alteryx Help for LDA - Linear Discriminant Analysis and it returned "0" results.

 

Altryx LDA.jpg

 

Idea: an LDA - Linear Discriminant Analysis tool, to be added to the predictive tool palette.

 

 

 

Rationale: We have PCA and MDS as tools which help a lot with "unsupervised" dimensionality reduction in predictive modelling.

But if we need a method that takes the target values into consideration, we need a "supervised" tool instead...
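
For comparison, a minimal scikit-learn sketch (one possible backing for such a tool) showing how LDA's supervised projection differs from PCA's unsupervised one:

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

# PCA ignores the class labels entirely (unsupervised reduction)
X_pca = PCA(n_components=2).fit_transform(X)

# LDA uses y to find the projection that best separates the classes
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

print(X_pca.shape, X_lda.shape)  # (150, 2) (150, 2)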

 

Altryx LDA2.jpg

 

 

"LDA is also closely related to principal component analysis (PCA) and factor analysis in that they both look for linear combinations of variables which best explain the data.[4] LDA explicitly attempts to model the difference between the classes of data. PCA on the other hand does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities. Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made."

0 Likes

I would like to share some feedback regarding the Principal Component tool.

I selected the option "Scale each field to have unit variance" and 1 of the 4 PCA tools was displaying errors. However, the error message is not very intuitive, and I couldn't use it to debug my workflow. The problem was that scaling could not be applied to my type of data, since it had a lot of 0 values.
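
For anyone hitting the same thing, here is my guess at the underlying cause as a small sketch (an assumption on my part, not something the error message confirms): a field that is entirely 0s has zero variance, and unit-variance scaling divides by the standard deviation.

import numpy as np

x = np.zeros(10)                   # a field that is all 0s: zero variance
print(x.std())                     # 0.0
scaled = (x - x.mean()) / x.std()  # 0/0 -> RuntimeWarning, result is all NaN
print(scaled)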

Couldn't find anything related to this, so hope my feedback helps others.

 

Thanks!

PCA Error.png

0 Likes

Hi there,

 

I noticed that https://help.alteryx.com/current/DataRobotAutomodelTool.htm mentions that the number of workers can be set and that the default is 2. However, this option seems to have been removed, and the default now appears to be the maximum.

0 Likes

In a short workflow this might not be necessary, as the information related to each tool is spelled out in the progress window. In a complicated, lengthy workflow, however, tracing such messages can be a tedious task. In addition, a tool may have multiple outputs where only one output is selected while the residual outputs could validate the result in the selected output; for example, a Join tool where the Left or Right output should be zero. A visual cue could be a quick way to alert the operator to any potential problem. Certainly, a Browse tool can be added, but in a big workflow, coupled with a large data set, it might be a drain on system resources.

What if there were a tool that activates a visual alert, like a light bulb, based on a preset condition, to tell the user that something is wrong and perhaps additional work needs to be done to either remedy or account for the residual data? As in the case of a Join where a 100% match is desired, any unmatched row would require an update to the reference list, which may be an additional ad hoc process outside the current one. Certainly, additional steps can be added to first explore the possibility of unmatched data and update the reference list accordingly, with the workflow on hold until a 100% match is achieved, but holding would require additional system resources, especially with a large data set and a lengthy workflow. If the unmatched situation rarely occurs, a lightweight visual cue that pops up while allowing the process either to break or to go through might be a more sensible solution. Just a thought.

0 Likes

Team,

 

It would be very useful if we could import Excel graphs as images. I've created graphs, tables, and charts using Alteryx tools from raw data (SQL queries, etc.), but Excel offers more options. Generating customized emails with Excel graph images in the body instead of Alteryx charts would make this tool all the more powerful.

 

Idea is to pull in raw data through SQL queries, export data simultaneously to Tableau and Excel, pull back in the Excel graphs that are generated from that data, and create customized emails with links to Tableau workbooks, Excel file attachments, snapshots (graph images), and customized commentary. The Visual Layout tool is very handy for combining different types of data and images for email distribution, and importing Excel graphs as images would make this even better.

0 Likes

The current AWS S3 upload and download tools use long-term keys (Access Key and Secret Key), which can pose a security risk.

 

Adding short-term keys to the tool (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html) would allow the use of session keys that change after a specific duration.
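
For illustration, the session-credential flow looks like this with boto3 (a sketch only; the duration and the way Alteryx would surface it are assumptions):

import boto3

# Exchange long-term credentials for short-term session credentials
sts = boto3.client("sts")
creds = sts.get_session_token(DurationSeconds=3600)["Credentials"]

# Use the temporary keys for S3; they stop working after DurationSeconds
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)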

 

Thanks 

0 Likes

I often copy chunks of a workflow and paste them into the same workflow (or a different one). The pasted chunk always seems to land just diagonally below the upper-left-most tool, which creates a real mess. I'd like to be able to select a small area within the canvas and have the chunk of workflow I'm pasting drop there - instead of on top of the existing build.

0 Likes

Experts:  

 

The Select tool is great - except when it comes to reordering a large number of fields for a custom output, load, etc. Single-clicking every time you need to move a field up or down is time-consuming (the ability to highlight multiple fields and move them in unison is great - assuming they are already in the right order).

 

I suggest two improvements to the Select Tool:

 

1) The ability to select a field and hold down either the "Up" or "Down" arrow so the field keeps moving without requiring one click per row and 

2) The ability to drag and drop fields (skip clicking altogether if desired)

 

The combination of these 2 functionalities (or even one of them) will make field reordering much more efficient when using the Select Tool.

 

Thanks! 

0 Likes

Hello All,

I received from an AWS adviser the following message:

_____________________________________________

Skip Compression Analysis During COPY
Checks for COPY operations delayed by automatic compression analysis.

Rebuilding uncompressed tables with column encoding would improve the performance of 2,781 recent COPY operations.
This analysis checks for COPY operations delayed by automatic compression analysis. COPY performs a compression analysis phase when loading to empty tables without column compression encodings. You can optimize your table definitions to permanently skip this phase without any negative impacts.

Observation

Between 2018-10-29 00:00:00 UTC and 2018-11-01 23:33:23 UTC, COPY automatically triggered compression analysis an average of 698 times per day. This impacted 44.7% of all COPY operations during that period, causing an average daily overhead of 2.1 hours. In the worst case, this delayed one COPY by as much as 27.5 minutes.

Recommendation

Implement either of the following two options to improve COPY responsiveness by skipping the compression analysis phase:
  • Use the column ENCODE parameter when creating any tables that will be loaded using COPY.
  • Disable compression altogether by supplying the COMPUPDATE OFF parameter in the COPY command.
The optimal solution is to use column encoding during table creation, since it also maintains the benefit of storing compressed data on disk. Execute the following SQL command as a superuser in order to identify the recent COPY operations that triggered automatic compression analysis:
WITH xids AS (
SELECT xid FROM stl_query WHERE userid>1 AND aborted=0
AND querytxt = 'analyze compression phase 1' GROUP BY xid)
SELECT query, starttime, complyze_sec, copy_sec, copy_sql
FROM (SELECT query, xid, DATE_TRUNC('s',starttime) starttime,
SUBSTRING(querytxt,1,60) copy_sql,
ROUND(DATEDIFF(ms,starttime,endtime)::numeric / 1000.0, 2) copy_sec
FROM stl_query q JOIN xids USING (xid)
WHERE querytxt NOT LIKE 'COPY ANALYZE %'
AND (querytxt ILIKE 'copy %from%' OR querytxt ILIKE '% copy %from%')) a
LEFT JOIN (SELECT xid,
ROUND(SUM(DATEDIFF(ms,starttime,endtime))::NUMERIC / 1000.0,2) complyze_sec
FROM stl_query q JOIN xids USING (xid)
WHERE (querytxt LIKE 'COPY ANALYZE %'
OR querytxt LIKE 'analyze compression phase %') GROUP BY xid ) b USING (xid)
WHERE complyze_sec IS NOT NULL ORDER BY copy_sql, starttime;

Estimate the expected lifetime size of the table being loaded for each of the COPY commands identified by the SQL command. If you are confident that the table will remain under 10,000 rows, disable compression altogether with the COMPUPDATE OFF parameter. Otherwise, create the table with explicit compression prior to loading with COPY.

_____________________________________________

 

When I ran the suggested query to check the COPY commands executed, I realized they all belonged to the Redshift bulk output from Alteryx.

 

Is there any way to implement this "Skip Compression Analysis During COPY" in Alteryx to maximize performance, as suggested by AWS?
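
In case it helps others in the meantime, here is one generic workaround sketch (not a confirmed Alteryx feature; the table, columns, encodings, and connection details are illustrative assumptions): pre-create the target table with explicit column encodings before the bulk load, so COPY never hits an empty table without encodings and skips the analysis phase, per the AWS advice above.

import psycopg2

# Redshift skips compression analysis when the target table already
# carries explicit column encodings
ddl = """
CREATE TABLE IF NOT EXISTS my_schema.my_table (
    id     BIGINT        ENCODE delta,
    name   VARCHAR(256)  ENCODE lzo,
    amount DECIMAL(18,2) ENCODE lzo
);
"""

with psycopg2.connect("dbname=analytics host=my-cluster user=loader") as conn:
    with conn.cursor() as cur:
        cur.execute(ddl)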

 

Thank you in advance,

 

Gabriel

0 Likes

In the Configuration section of the Formula tool, the “Output Column” area is resizable.  However, it has a limit that needs to be increased.  Several of the column names I work with are not clearly identifiable with the current sizing constraint.  I do not think the sizing needs to be constrained. 

0 Likes

Please provide support for sharing a (gallery/Stored) Workflow Credential with a Group.  

 

Current capability appears only to support Users/Studios.

 

 

0 Likes

I have a process that sends out about 1,500 emails. Every once in a while it will get stuck at some percentage, and I have to cancel the workflow, figure out how many emails were sent, and then skip that many emails to avoid sending duplicates. The current process of figuring out how many were sent is: take the tool's percentage at cancellation, subtract 50% (since that is where it starts), multiply by 2, and apply that percentage to the number of rows to get the approximate row where it froze; then reach out to individuals to see if they received the email, to narrow down exactly where the error occurred.

 

Example: 60% - 50% = 10%; 10% x 2 = 20%; 20% of 1,249 rows ≈ 250 emails sent.
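
The same estimate as a quick sketch, for anyone who wants to reuse it:

pct_at_cancel = 0.60   # progress shown on the tool at cancellation
total_rows = 1249

# The tool's progress starts at 50%, so rescale the upper half to 0-100%
sent_estimate = (pct_at_cancel - 0.50) * 2 * total_rows
print(round(sent_estimate))  # ~250 emails likely already sent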

 

This has been pretty accurate in the past, but it is obviously not ideal. Is there no way for it to show us how many were sent, even if we cancelled the workflow mid-processing of the tool?

 

0 Likes

In Alteryx Designer (version 2018.3.5.52487)

I am getting error ORA-00001 Unique Constraint Violated for any error when updating my Oracle table.

Specifically, my workflow's output tool Output Options is "Update, Insert if new" so I should never get an ORA-00001 error.  If the record exists, it should be updated, if it does not exist, it should be inserted.  The update happens most of the time, however when the update fails, the actual error my database is raising is an ORA-20001 error with a custom error message that I want to pass all the way back to the person reading the workflow log.  When I run an update in the database I receive the correct error: ORA-20001: No overrides to Sales allowed on data in closed periods. INVOICE_NBR=12345.  But the error Alteryx is presenting my user is: Error: MyWorkflow_output: DataWrap2OCI::SendBatch: ORA-00001: unique constraint (MY_SCHEMA.MY_TABLE_PK) violated.

Am I misinterpreting the error in Designer or is the incorrect error being presented to me?

 

 

0 Likes

Our company is loving the Insights tool, but I am constantly being asked by users whether they can export the data feeding into the graphs. For example, we have an inventory dashboard for vehicles that starts at a corporate level but is drillable down to a "Regional" and then a more focused "Managed Area" level. Once users get down to the "Managed Area" level, they want to export the line-level data feeding into the Insight chart so they can actually view, work, and action the data at a vehicle level.

 

Essentially an option to export the data feeding into the graphs. 
