Zendesk JSON export - wrap in array and comma-separate each line
I have exported customer service ticket data from Zendesk in JSON format. The problem with the Zendesk JSON export is that it isn't formatted as standard JSON. Before I can use the Alteryx Input Data tool for JSON, I need to wrap the exported data in an array and comma-separate each line.
With small enough export files, I am able to do this in Notepad++ by doing the following:
- Add the following as a first line in the file
- {"tickets":[
- Add the following as a bottom line in the file
- ]}
- Separate all the objects enclosed in { } with commas using find and replace
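The manual steps above amount to a single transformation. Here is a minimal Python sketch of it (the function name and sample data are my own, and it assumes the export has exactly one JSON object per line):

```python
import json

def wrap_export(lines):
    """Wrap newline-delimited JSON objects in a {"tickets":[...]} array.

    Assumes each line of the export is one complete JSON object;
    wrap_export is a made-up name, not a Zendesk or Alteryx API.
    """
    objects = [line.strip() for line in lines if line.strip()]
    return '{"tickets":[\n' + ",\n".join(objects) + "\n]}"

# Example: two exported ticket objects, one per line
raw = ['{"id": 1, "status": "open"}', '{"id": 2, "status": "closed"}']
wrapped = wrap_export(raw)
json.loads(wrapped)  # now parses as valid JSON
```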
For a large file, is there a way for me to replicate the wrapping and comma-separating processes within Alteryx so that I'm able to read in the json file and work with it?
Thanks for the help!
You can use the Sample tool in Alteryx to take the first record and add the first line. Alternatively, you can create the first line with a Text Input tool and stack it on top of the data with a Union tool.
For the last line, the Sample tool can likewise be used to extract it, make changes to it, and a Union tool to stack it underneath the data.
For the body, i.e. between { and }, you can either use regex in the RegEx tool to search for the pattern and insert the commas, or you can filter this data out and use a simple string function to make the replacement.
Thank you, @RishiK! Really appreciate the quick and clear answer.
One follow-up - I am currently unable to import the data as json given the formatting issue. What configuration on the Input Data tool would I use to be able to load the data and subsequently use the Regex Parse tool?
If you are trying to format it as JSON just to parse it with regex, maybe you should consider skipping the JSON formatting and parsing the file as-is? Parsing is parsing... regex is awesome, and if you have recognizable patterns, just parse them.
The formatting you describe sounds unnecessary if you are just feeding the result into another tool to parse it apart again. If you need to send it into some other application that requires JSON, then that is another matter.
If you do want to force the JSON format, then rather than the Sample tool (which is a good idea), consider the following:
- If the data is in rows and your main task is modifying the first and last rows, you could use a RecordID tool to number each record, and a Count in a parallel stream to identify the last record. Then use a Formula tool with string functions or regex to make the changes: RecordID = 1 tells you it's the first row, so you make that modification; RecordID = Count tells you it's the last row, so you make that change. In between, use your choice of string functions or REGEX_Replace to insert commas or other symbols as needed.
Hi @laurany
You can use the following config to load any text-based file: XML, JSON, HTML, etc.
Format=csv
Delimiters=\0
First row contains field names = unchecked
With this config, the input tool will load all lines from the file, one per row exactly as they are. From there, you can process the data as you see fit.
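For readers following along outside Alteryx, the effect of this configuration can be mimicked in Python; `io.StringIO` stands in for the exported file here:

```python
import io

# Rough Python equivalent of the \0-delimited CSV trick: treat the
# file as undelimited text so every line arrives intact as one field.
fake_file = io.StringIO('{"id": 1}\n{"id": 2}\n')
rows = [line.rstrip("\n") for line in fake_file]
print(rows)
```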
Dan
Good shout @danilang
Thank you all for the pointers!
In the end, I did the following:
- Loaded the file as a CSV with no headers or delimiters
- Used the Select Records tool to pull out two individual rows and the middle chunk (the order of the rows didn't matter)
- Used a Formula tool to edit each type of row (first, last, and middle as needed)
- Used a Union tool to recombine them in the specified order
- Was then ready to use the JSON Parse tool
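Put together, the whole workflow can be sketched in plain Python (the sample data stands in for the real export, and the sketch assumes at least two rows):

```python
import json

# Split the rows into first, middle, and last; edit each group;
# recombine in order; then parse -- mirroring the Select Records,
# Formula, Union, and JSON Parse steps above.
lines = ['{"id": 1}', '{"id": 2}', '{"id": 3}']

first = '{"tickets":[' + "\n" + lines[0] + ","
middle = [row + "," for row in lines[1:-1]]
last = lines[-1] + "\n]}"

recombined = "\n".join([first, *middle, last])
tickets = json.loads(recombined)["tickets"]
print(len(tickets))  # 3
```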
