I have exported customer service ticket data from Zendesk in JSON format. The problem with the Zendesk JSON export is that it is not a single valid JSON document: the records are written one per line, with no enclosing array and no commas between them. Before I'm able to use the Alteryx Input Data tool's JSON option, I need to wrap the exported data in an array and comma-separate each line.
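To make the target format concrete, the transformation amounts to something like the sketch below (rough Python for illustration only; "tickets.json" and "tickets_array.json" are made-up file names, not part of my actual workflow):

# Illustration only: wrap one-object-per-line output in an array,
# comma-separating the lines ("tickets.json" is a placeholder name).
with open("tickets.json", "r", encoding="utf-8") as f:
    lines = [line.rstrip("\n") for line in f if line.strip()]

wrapped = "[" + ",".join(lines) + "]"   # [ {...},{...},... ]

with open("tickets_array.json", "w", encoding="utf-8") as out:
    out.write(wrapped)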
With small enough export files, I am able to do this in Notepad++ by doing the following -
For a large file, is there a way for me to replicate the wrapping and comma-separating process within Alteryx so that I can read in the JSON file and work with it?
Thanks for the help!
You can use the Sample tool in Alteryx to take the first record, modify it, and add it back as the first line. Or you can create the first line with a Text Input tool and stack it on top of the data with a Union tool.
For the last line, the Sample tool can also be used to extract it and make changes to it, with a Union tool to stack it underneath the data.
For the body, i.e. the { } object rows in between, you can either use the RegEx tool to search for the end of each row and insert the commas, or you can filter that data out and use a simple string function to append the commas.
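To show the comma-insertion pattern concretely, here is a minimal sketch (Python rather than an Alteryx expression, with made-up rows) of matching the closing brace at the end of each body row and appending a comma:

import re

# Hypothetical body rows, each holding one exported ticket object
rows = ['{"id": 1, "status": "open"}', '{"id": 2, "status": "closed"}']

# Append a comma after the closing brace at the end of each row
rows_with_commas = [re.sub(r"\}\s*$", "},", row) for row in rows]

# Note: the very last object row should not keep a trailing comma before the
# closing ] - strict JSON does not allow it.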
Thank you, @RishiK! Really appreciate the quick and clear answer.
One follow-up - I am currently unable to import the data as JSON because of the formatting issue. What configuration on the Input Data tool would I use to load the data so that I can then use the RegEx tool?
If you are trying to format it as JSON just to parse it with regex, maybe you should consider skipping the JSON formatting and parsing the file as is. Parsing is parsing: regex is powerful, and if you have recognizable patterns, just parse them.
The reformatting you describe sounds unnecessary if you are just feeding the result into another tool to parse it apart again. If you need to send it into some other application that requires JSON, that is another matter.
If you do want to force the JSON format rather than use the Sample tool approach (which is a good idea), consider...
Hi @laurany
You can use the following config to load any text-based file (XML, JSON, HTML, etc.):
Format=csv
Delimiters=\0
First row contains field names = unchecked
With this config, the Input Data tool will load all lines from the file, one per row, exactly as they are. From there, you can process the data as you see fit.
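In other words, the load is equivalent to reading the raw lines of the file with no parsing at all, roughly like this small sketch (illustrative only; "export.json" is a placeholder name):

# Read every line as-is, one record per line ("export.json" is a placeholder)
with open("export.json", "r", encoding="utf-8") as f:
    records = [line.rstrip("\n") for line in f]
# Each entry in records corresponds to one row in the Input Data tool's output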
Dan
Good shout @danilang
Thank you all for the pointers!
In the end, I did the following (a rough equivalent is sketched after the list) -
- Loaded the file as a CSV with no headers and no delimiter (\0)
- Used Select Records to pull out the first and last rows individually and the middle chunk (the order of the rows didn't matter at this point)
- Used the Formula tool to edit each type of row (first, last, and middle as needed)
- Used a Union tool to recombine them in a specified output order
- Then I was ready to start using the JSON Parse tool
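For anyone curious, the row edits boil down to something like this (a toy Python sketch with invented rows; in the workflow itself this was done with the Select Records, Formula, and Union tools, not code):

# Toy example of the first/middle/last row edits (invented data)
lines = ['{"id": 1}', '{"id": 2}', '{"id": 3}']

first = "[" + lines[0] + ","                 # first row: open the array
middle = [line + "," for line in lines[1:-1]]  # middle rows: append a comma
last = lines[-1] + "]"                       # last row: close the array

wrapped = [first] + middle + [last]
# wrapped is now ['[{"id": 1},', '{"id": 2},', '{"id": 3}]']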