I preserved the original [URL] field for reference and added the new one with a label to differentiate them. Having done that, I needed a FILTER to choose the right one.

Next, I jumped into the DOWNLOAD and JSON-PARSE tools, leveraging the provided [URL] field:
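For anyone thinking about this step outside of Alteryx, here is a minimal Python sketch of what JSON-PARSE effectively does: it flattens the downloaded payload into dotted-name/value pairs. The payload shape below is an assumption for illustration, borrowing the `datatable.data` structure mentioned later.

```python
import json

def flatten(obj, prefix=""):
    """Flatten nested JSON into (dotted_name, value) pairs,
    roughly mimicking the JSON Parse tool's output."""
    if isinstance(obj, dict):
        for key, value in obj.items():
            yield from flatten(value, prefix + str(key) + ".")
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            yield from flatten(value, prefix + str(i) + ".")
    else:
        yield prefix[:-1], obj  # drop the trailing period

# Hypothetical payload shaped like the response described in this post
payload = json.loads('{"datatable": {"data": [["AAPL", 170.1], ["MSFT", 300.2]]}}')
rows = list(flatten(payload))
# rows[0] == ("datatable.data.0.0", "AAPL")
```

Each pair corresponds to one JSON_Name/JSON_ValueString row coming out of the tool.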

The parsed JSON differentiates records and columns numerically, so plucking those values out of the JSON_Name string can be done in a few different ways. I simply replaced the ‘datatable.data.’ string, leaving me with the #.# pattern, and then used TEXT-TO-COLUMNS to split at the period.
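The same replace-then-split idea looks like this in Python (the prefix is the one from the post; the function name is mine):

```python
def split_name(json_name, prefix="datatable.data."):
    """Strip the constant prefix, leaving the "#.#" pattern
    (e.g. "12.3"), then split at the period -- the same idea
    as replace + Text To Columns."""
    record, column = json_name.replace(prefix, "").split(".")
    return int(record), int(column)

split_name("datatable.data.12.3")  # (12, 3)
```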

Next up was grabbing only the data fields, which turned out to be JSON ID values of 0-4. A simple CROSS-TAB later and the data is in a usable format. The lower stream isolates the Field Name values and uses those to positionally rename the fields from the actual values.
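The CROSS-TAB step is just a pivot: one row per record, one field per column ID. A small dependency-free sketch of that pivot (the sample values are made up):

```python
from collections import defaultdict

def cross_tab(rows):
    """Pivot (record, column, value) triples into one row per record,
    with columns ordered by column id -- the Cross Tab step above."""
    table = defaultdict(dict)
    for record, column, value in rows:
        table[record][column] = value
    return [[row[c] for c in sorted(row)] for _, row in sorted(table.items())]

rows = [(0, 0, "AAPL"), (0, 1, 170.1), (1, 0, "MSFT"), (1, 1, 300.2)]
cross_tab(rows)  # [["AAPL", 170.1], ["MSFT", 300.2]]
```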

In thinking about the renaming, I had the [record] field to contend with. It’s in the upper stream, and I could have used a formula to add it to the rename list, or added a union with a placeholder value in the lower stream to match. But by moving the [record] field to the end, past the *Unknown field, it still works in the DYNAMIC-RENAME. The tool throws a warning saying there are not enough values to fully rename (which is on purpose) and then simply retains the original name.
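The trick relies on positional renaming falling back to the original header when the name list runs short. A sketch of that behaviour (function and sample names are mine, not Alteryx's):

```python
def positional_rename(headers, new_names):
    """Rename headers positionally; when new_names runs short, keep the
    original header -- mirroring Dynamic Rename's fallback when it warns
    that there are not enough values to fully rename."""
    renamed = list(headers)
    for i, name in enumerate(new_names):
        renamed[i] = name
    return renamed

# [record] moved to the end, so the short name list never reaches it
positional_rename(["0", "1", "2", "record"], ["ticker", "price", "volume"])
# ["ticker", "price", "volume", "record"]
```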
Clever? Sure. Best practice? Maybe not.