This blog post refers to functionality that will be in our next major release. Beta users can preview this feature in the latest Beta release of Alteryx. Please send an email to email@example.com if you are interested in participating in our Beta program.
Reading XML files has probably been the single most requested Alteryx feature that we have never done. The issue is that XML is inherently not a data table format. XML describes hierarchical data, which is inherently incompatible with the way Alteryx streams data as records. Other tools (like Excel) require either an XML Schema or extensive user knowledge of XPath, which in many ways is as complicated as Regular Expressions. This kind of specialized XML know-how is not something a typical Alteryx user can be expected to have. We wanted to come up with a way of reading XML that would be intuitive to an Alteryx user.
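The feed discussed below is not reproduced in this copy of the post; the canonical Atom example from RFC 4287, which is the same feed that yields the "Atom-Powered Robots Run Amok" entry shown later, illustrates the structure in question:

```xml
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Feed</title>
  <link href="http://example.org/"/>
  <updated>2003-12-13T18:30:02Z</updated>
  <author>
    <name>John Doe</name>
  </author>
  <id>urn:uuid:60a76c80-d399-11d9-b93C-0003939e0af6</id>

  <entry>
    <title>Atom-Powered Robots Run Amok</title>
    <link href="http://example.org/2003/12/13/atom03"/>
    <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
    <updated>2003-12-13T18:30:02Z</updated>
    <summary>Some text.</summary>
  </entry>
</feed>
```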
You probably want to parse out the <entry> tags (there is typically more than one) and their information, but notice the <link> tags inside? The only way to make sense of this is with a relational table structure consisting of two tables (streams).
Again, to keep reading XML as simple as possible, there are only three configuration questions:
XML Element: This is the element we want to read from the XML file. If left blank, Alteryx will guess, and it often guesses correctly.
Return Child Values: If true, fields will be created from all children one level deep from the XML Element.
Return Outer XML: If true, the Outer XML of the XML Element will be returned as a string field. If Return Child Values is also true, the Outer XML of each child element will be returned separately as well. This is useful for further parsing.
Reading the Atom XML above, with XML Element set to "entry" and both Return Child Values and Return Outer XML set to true, we get the following table (some fields have been deselected):
title: Atom-Powered Robots Run Amok
Outer XML: <entry><title>Atom-Powered Robots Run Amok</title>...
With very little configuration, we have quickly parsed out the summary information we need from the Atom feed. Using the companion XML Parse tool, found in the Developer tools toolbox, we can just as quickly parse out the link tags and get another table of information about them (again, extra fields have been removed):
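Outside of Alteryx, the same two-step parse can be sketched with Python's standard library. This is only an illustration of the idea, not Alteryx's implementation; the single-entry feed and the field name "OuterXML" are assumptions for the sketch:

```python
# Sketch of the two-step parse described above, using Python's standard
# library. The feed and the "OuterXML" field name are illustrative only.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

FEED = """\
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <title>Atom-Powered Robots Run Amok</title>
    <link href="http://example.org/2003/12/13/atom03"/>
    <summary>Some text.</summary>
  </entry>
</feed>"""

root = ET.fromstring(FEED)

# Step 1 (like "XML Input" on element <entry>, with Return Child Values
# and Return Outer XML both true): one record per entry, with fields
# built from the children one level deep, plus the entry's outer XML.
entries = []
for entry in root.iter(ATOM + "entry"):
    record = {child.tag.replace(ATOM, ""): (child.text or "")
              for child in entry}
    record["OuterXML"] = ET.tostring(entry, encoding="unicode")
    entries.append(record)

# Step 2 (like "XML Parse" on the OuterXML field): a second table, one
# record per <link>, with fields taken from the link's attributes.
links = []
for rec in entries:
    for link in ET.fromstring(rec["OuterXML"]).iter(ATOM + "link"):
        links.append(dict(link.attrib))

print(entries[0]["title"])  # Atom-Powered Robots Run Amok
print(links[0]["href"])     # http://example.org/2003/12/13/atom03
```

The two result lists correspond to the two output tables (streams) described above: one for entries, one for their links.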
A very simple module with just a few tools was able to parse out this data very effectively. Better still, it is adaptable: if future files have different children or different numbers of elements, it should all just work.
Performance & size considerations
We also wanted to make sure there were no size limitations and that the speed was good. This was another thing that ruled out any sort of XPath: since XPath allows backward and forward references within the file, it essentially requires the entire file to be in memory.
We used a parser that handles the data as it is parsed (a SAX parser), processing each piece without ever needing the entire document in memory. Since we automatically work out what fields (and field sizes) to create based on the content, this requires two passes through the data: one to figure out the set of fields to return, and another to return the actual data. Even with two passes, this style of parsing is so efficient that it remains very fast.
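The two-pass scheme can be sketched with Python's `xml.etree.ElementTree.iterparse`, a streaming pull parser similar in spirit to SAX. Everything here (the `two_pass_parse` helper, the `open_source` callable, the sample document) is an assumption for illustration, not Alteryx's actual code:

```python
# Sketch of a two-pass streaming parse: pass 1 discovers the field set
# (and maximum field sizes) from the content, pass 2 emits the records.
# Illustrative only; this is not Alteryx's implementation.
import io
import xml.etree.ElementTree as ET

def two_pass_parse(open_source, target):
    """open_source: callable returning a fresh file-like object,
    since a streaming parser must read the input once per pass."""
    # Pass 1: find the union of child field names and their max widths,
    # without ever holding the whole document in memory.
    fields = {}
    for _, elem in ET.iterparse(open_source(), events=("end",)):
        if elem.tag.rsplit("}", 1)[-1] == target:
            for child in elem:
                name = child.tag.rsplit("}", 1)[-1]
                fields[name] = max(fields.get(name, 0),
                                   len(child.text or ""))
            elem.clear()  # release the subtree as we stream past it
    # Pass 2: emit one record per target element with the now-fixed
    # field set (children missing from an element become empty strings).
    records = []
    for _, elem in ET.iterparse(open_source(), events=("end",)):
        if elem.tag.rsplit("}", 1)[-1] == target:
            rec = {name: "" for name in fields}
            for child in elem:
                rec[child.tag.rsplit("}", 1)[-1]] = child.text or ""
            records.append(rec)
            elem.clear()
    return fields, records

doc = "<root><entry><a>hello</a><b>x</b></entry><entry><a>hi</a></entry></root>"
fields, records = two_pass_parse(lambda: io.StringIO(doc), "entry")
print(fields)   # {'a': 5, 'b': 1}
print(records)  # [{'a': 'hello', 'b': 'x'}, {'a': 'hi', 'b': ''}]
```

Note how the second entry, which lacks a <b> child, still gets a `b` field (empty) because pass 1 fixed the schema; this mirrors the adaptability described above.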
For testing purposes, though, we wanted some very large XML files. A search online turned up some good samples:
The default configuration found the top level of relevant data. Below that is a sub-level containing lists of links for each item. Adding a second-level parse to pull out all of the sub-links as well brings the parse time to 1 minute 9 seconds.