SOLVED

RegEx assistance

jay_chang
8 - Asteroid

Could I ask the community for help with what should be a simple regex? Here are my RegEx tool settings for Tokenize:

 

(.+?)(?:-|$)

 

I am tokenizing my input to split at the 1st hyphen.  However, the tool continues to split at subsequent hyphens, and since I'm parsing names, hyphenated last names are breaking.  Here's an example:

 

string:

Customer Experience-Jonas-Smith, Teresa

 

Split 1:

Customer Experience

 

Split 2:

Jonas

 

 

I'm a n00b with regex so I have no idea how to fix this.  Any assistance would be appreciated.
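To show what I mean, here is the same pattern run outside Alteryx (a quick sketch with Python's re module, purely for illustration):

import re

# The Tokenize-style pattern is applied repeatedly, so every hyphen acts as a split point.
s = "Customer Experience-Jonas-Smith, Teresa"
print(re.findall(r"(.+?)(?:-|$)", s))
# ['Customer Experience', 'Jonas', 'Smith, Teresa']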

 

 

7 REPLIES
estherb47
15 - Aurora

Hi @jay_chang 

 

If you need to split into two columns, you can try the Parse method instead of the Tokenize method.

 

Your statement would be: (.+?)-(.*)
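To see why, here is a quick sketch of that pattern outside Alteryx (Python's re module, just for illustration); with the Parse method, each marked group becomes its own output column in the same way:

import re

# One match, two marked groups -> two output values.
s = "Customer Experience-Jonas-Smith, Teresa"
m = re.match(r"(.+?)-(.*)", s)
print(m.group(1))  # Customer Experience
print(m.group(2))  # Jonas-Smith, Teresa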

Let me know if that helps.

 

Cheers!

Esther

tothd
8 - Asteroid

(^.*?)-(.*), (.*$)

 

Specifically, look for the first hyphen only (non-greedy):

Customer Experience

Jonas-Smith

Teresa

 

Second step: look for the comma to break up the names.

Jonas-Smith, Teresa

 

If a hyphen will always occur after the 'Title' (even though not every last name has one), you are set.
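Putting it together outside Alteryx (a Python sketch, just to show the three groups):

import re

# First hyphen only (non-greedy), then the comma separates the two name parts.
s = "Customer Experience-Jonas-Smith, Teresa"
m = re.match(r"(^.*?)-(.*), (.*$)", s)
print(m.groups())
# ('Customer Experience', 'Jonas-Smith', 'Teresa')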

jay_chang
8 - Asteroid

Sorry, maybe I'm not typing it correctly, but when I try your regex, I get "Error: RegEx (399): The Regular Expression in ParseSimple mode can have 0 or 1 Marked sections, no more."

 

 

Also, I perhaps did not state my problem correctly.  I want to split at the 1st hyphen and put the two tokens into 2 separate columns.  The "last name, first name" portion (whether hyphenated or not) should be in one column, while the descriptive text should be in a separate column.

estherb47
15 - Aurora
Hi,

The Tokenize method doesn't work with more than one marked group, which is why I suggested the Parse method as an easier solution. Parse will separate into two columns, as desired.

Tokenize works like Text to Columns, only you're delimiting on patterns rather than straight characters.

For your example, you could use Text to Columns, with a hyphen as the delimiter, and separate into two columns only.
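The non-regex idea, sketched in Python just to show the behaviour (assuming Text to Columns is set to leave any extra text in the last column):

# Split on the first hyphen only, i.e. two output columns.
s = "Customer Experience-Jonas-Smith, Teresa"
left, right = s.split("-", 1)
print(left)   # Customer Experience
print(right)  # Jonas-Smith, Teresa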

Let me know if that helps.

Cheers!
Esther
jarrod
ACE Emeritus

Hi @jay_chang,

As a leader in the Alteryx Community, I have the ability to identify & mark accepted solutions on behalf of community members - and recently did so on this thread. If you have any questions or concerns with the solution(s) I selected please let me know by replying to this post.

 

Both answers I marked are correct depending on how far you want to go with the RegEx Tool.


As the original author, you also have the ability to mark replies as solutions! Going forward, I’d encourage you to identify the solution or solutions that helped you solve your problem, as it's a big help to other community members. Learn more about Accepted Solutions here.

Thank you!

jay_chang
8 - Asteroid

Sorry, I got tied up in other work and could not get to this until today.  I missed the part of the original solution where the poster suggested using Parse rather than Tokenize.  Once I made that change, her regex worked perfectly.  I've marked the solution as correct.  Thank you.

jay_chang
8 - Asteroid

So the original reason I went with regex rather than Text to Columns was that I had situations with up to two hyphens before getting to the actual name (e.g., "long group-important-muckamuck department-smith, john"), so I was using the regex to count hyphens and parse differently based on that.  It seems my data has cleaned up, so I don't need the regex as much, but I think I'll keep it just in case.  I might need to come back for more help at some point.  🙂
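Roughly, the count-and-branch idea looked like this (a Python sketch with placeholder rules, not my exact workflow):

import re

def split_record(s):
    if s.count("-") <= 1:
        # Single hyphen: split at it.
        m = re.match(r"(.+?)-(.*)", s)
    else:
        # Several hyphens: as one possible rule, split at the last one.
        m = re.match(r"(.+)-(.*)", s)
    return (m.group(1), m.group(2)) if m else (s, "")

print(split_record("long group-important-muckamuck department-smith, john"))
# ('long group-important-muckamuck department', 'smith, john')

Of course, once hyphens can show up in both the department and the surname, no fixed rule can tell them apart, so branching on the count only works if the data follows consistent conventions.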

 

Thank you for your help!
