Alteryx Designer Desktop Discussions

Find answers, ask questions, and share expertise about Alteryx Designer Desktop and Intelligence Suite.
SOLVED

Multiple rows created by one Unique ID - need to truncate

Colejweinman
7 - Meteor

Hello!

 

I am brand new to Alteryx, and just had a question about working with some ODBC data. 

 

There is a report on our server that has a field for a unique ID. However, sometimes there are multiple occurrences within each Unique ID, and each occurrence creates a new row. Essentially, each new row shows up mostly blank, only displaying data for one column. I was wondering if there is a way to consolidate the rows for each Unique ID, so that there are no extraneous rows relating to each Unique ID.

 

For Example:

 

Unique ID#1 (Data point 1) (Blank) (Blank)

Unique ID#1 (Blank) (Data Point 2) (Blank)

Unique ID#1 (Blank) (Blank) (Data Point 3)

Unique ID#2 (Data point 1) (Blank) (Blank)

 

and so on.

 

I was thinking the 'Union' tool would be the way to go, but I am a novice, so I just wanted to make sure I am doing this correctly, since each run of the tool takes a lot of time and puts a lot of stress on the server. I just want to consolidate all the data for each Unique ID into one row, reducing the size of the table and allowing for faster access to the data.
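
In case a reproducible version of the example helps, here is the same shape of data expressed as a small pandas DataFrame (the column names are just placeholders, not the real report fields):

```python
import pandas as pd

# Placeholder column names standing in for the real report fields
raw = pd.DataFrame({
    "UniqueID": ["ID#1", "ID#1", "ID#1", "ID#2"],
    "FieldA":   ["Data point 1", None, None, "Data point 1"],
    "FieldB":   [None, "Data point 2", None, None],
    "FieldC":   [None, None, "Data point 3", None],
})

# Goal: one row per UniqueID, e.g.
#   UniqueID        FieldA        FieldB        FieldC
#   ID#1      Data point 1  Data point 2  Data point 3
#   ID#2      Data point 1          None          None
```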

 

Thanks!!

3 REPLIES
jdunkerley79
ACE Emeritus

You can use a Multi-Field Formula tool to replace the (Blank) values with null.

 

You can then use a Summarize tool to group by the UniqueID and take the minimum value, which will select one of the values in the list.

 

I have attached a demo workflow.
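
For anyone reading along without Designer open, here is a rough pandas sketch of the same two steps (null out the blanks, then keep one value per column for each Unique ID). The column names and the "(Blank)" placeholder are assumptions taken from the example in the question, not the actual report:

```python
import numpy as np
import pandas as pd

# Sample data mirroring the question (placeholder column names)
raw = pd.DataFrame({
    "UniqueID": ["ID#1", "ID#1", "ID#1", "ID#2"],
    "FieldA":   ["Data point 1", "(Blank)", "(Blank)", "Data point 1"],
    "FieldB":   ["(Blank)", "Data point 2", "(Blank)", "(Blank)"],
    "FieldC":   ["(Blank)", "(Blank)", "Data point 3", "(Blank)"],
})

# Step 1 (Multi-Field Formula): turn the "(Blank)" markers into nulls
cleaned = raw.replace("(Blank)", np.nan)

# Step 2 (Summarize: Group By UniqueID, Min on the other fields):
# keep one non-null value per column for each UniqueID. With only one
# real value per column per ID, first() and min() give the same result.
consolidated = cleaned.groupby("UniqueID", as_index=False).first()

print(consolidated)
```

In Designer itself, the attached workflow's Summarize tool (Group By on the UniqueID field, Min on the remaining fields) performs the equivalent consolidation.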

AdamR_AYX
Alteryx Alumni (Retired)

"since each run of the tool takes a lot of time and puts a lot of stress on the server"

 

You might want to take a look at the "Cache data" option on the input tool for your ODBC connections:

 

[Screenshot: the Cache Data option on the Input Data tool]

 

If you check that option, it will make a local cache of your data the first time you run your workflow and then use that cached data for subsequent runs. It is ideal when you are developing a module which pulls data from a database that is slow to access, or that you don't want to put undue strain on while you are rerunning your workflow during the development process.

Adam Riley
https://www.linkedin.com/in/adriley/
Colejweinman
7 - Meteor

Worked perfectly. Thank you for your help!
