JSON Parse
Hi Community!
It's been almost a month of me trying to parse this file, so I'm turning to you for assistance. Any help would be GREATLY appreciated.
I'm uploading this as a .csv file with no delimiters. The issue is that the input gives no unique identifiers.
Hi @mystasz ,
there's not a lot of data in that file, but using the JSON Parse tool results in the following:
The workflow uses the JSON Parse tool as follows:
What else are you trying to pull from the data?
Once you have the data in this format, you can use the Text to Columns tool to separate the JSON_Name column by the "." delimiter.
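To make the two steps above concrete, here is an illustrative Python sketch (not the Alteryx workflow itself) of what the JSON Parse tool does, using a small made-up sample record: it flattens nested JSON into (JSON_Name, JSON_ValueString) pairs with "."-joined key paths, and the final loop mirrors the Text to Columns split on the "." delimiter.

```python
import json

def flatten(obj, prefix=""):
    """Yield (JSON_Name, value) pairs with '.'-joined key paths,
    mirroring the JSON Parse tool's flattened output."""
    if isinstance(obj, dict):
        for key, value in obj.items():
            yield from flatten(value, f"{prefix}{key}.")
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            yield from flatten(value, f"{prefix}{i}.")
    else:
        yield prefix.rstrip("."), obj

# Hypothetical sample record, for illustration only.
sample = '{"Assignee": {"Given_Name": "Ann"}, "Status": "Open"}'
rows = list(flatten(json.loads(sample)))
print(rows)  # [('Assignee.Given_Name', 'Ann'), ('Status', 'Open')]

# The Text to Columns step: split each JSON_Name on the "." delimiter.
for name, value in rows:
    print(name.split("."), value)
```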
I've attached the workflow showing the use of the JSON Parse tool and how to configure it.
M.
Hello! There's definitely much more data than that, and that is what my initial input looked like as well until I changed the input settings (below). I'm trying to create an easily readable document that I can output to Excel. This document contains a whole lot of gibberish, and parsing out the rows, columns, and their values has been a nightmare.
I'm thinking the input settings are incorrect, which is why it's become so difficult to parse. But believe me when I say there's more data! lol
Hi @mystasz ,
Yes, I can see the text was truncated.
I've built this for you now.
The primary issue with this JSON file is that it was built with a varying number of delimiters, and the delimiters mean different things. This often happens with poorly constructed JSON files: one delimiter level marks a different section or column, while another marks a different variant of the same thing within a column.
A perfect example of this is the "Given_Name" field, which is denoted differently under "Assignee" and "Submitted by".
I've built it so it automatically determines where a field name is "duplicated". If it is, it takes the last two populated columns after the Text to Columns tool splits the field. These denote the last two sections, which are concatenated to create Assignee Given Name and Submitted By Given Name.
This is relatively complex, but in short: where a field name is unique, it is pulled directly from the parsed JSON after the split to columns; where it is duplicated, the section before it is appended.
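The duplicate-handling rule described above can be sketched in Python, assuming hypothetical field names like "Assignee.Given_Name" (the exact names in the attached file may differ): find the leaf names that occur more than once, and for those, join the last two sections to disambiguate.

```python
from collections import Counter

# Hypothetical field names as the split step might produce them;
# "Given_Name" appears under more than one parent.
names = ["Assignee.Given_Name", "Submitted_by.Given_Name", "Status"]

# Leaf names that occur more than once are ambiguous on their own.
leaves = [n.split(".")[-1] for n in names]
dupes = {leaf for leaf, count in Counter(leaves).items() if count > 1}

headers = []
for n in names:
    parts = n.split(".")
    if parts[-1] in dupes:
        # Duplicated name: concatenate the last two sections.
        headers.append("_".join(parts[-2:]))
    else:
        # Unique name: use it directly.
        headers.append(parts[-1])

print(headers)  # ['Assignee_Given_Name', 'Submitted_by_Given_Name', 'Status']
```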
Here is an example of the output:
I hope this helps,
M
This is great!!!!!!!!!
I attempted this on my larger data set, which should output about 164 rows, but this workflow and the one I've been struggling with both give 20. Any idea why? Document attached.
Hi @mystasz ,
That's because there are only 20 records; I see no reason why it would parse into any more. The record ID only goes up to 19, starting at zero.
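The counting logic here can be sketched in Python, assuming hypothetical flattened names where the leading token is the zero-based record index (e.g. "0.Status", "1.Status", ...): 20 distinct records give a highest index of 19.

```python
# Hypothetical flattened JSON names; the leading token is the
# zero-based record index.
names = [f"{i}.Status" for i in range(20)] + \
        [f"{i}.Assignee.Given_Name" for i in range(20)]

record_ids = {int(n.split(".")[0]) for n in names}
print(len(record_ids), max(record_ids))  # 20 distinct records, highest index 19
```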
M.
The same file was downloaded as Excel and it's 164 rows. When inputting it into Alteryx with the delimiters left blank, it gives about 33K rows to parse (see below). I believe there are more than 20 rows, so I'll keep playing around based on your workflow.
Appreciate your help!!
