
In the first of my series of Hadoop tutorials, I wanted to share an interesting case that arose when I was experiencing poor performance running queries and computations on a set of XML data. These computations, mathematical as well as statistical, required the data to be ingested into a platform that could handle huge amounts of data and could be easily queried. The tool we had been using for this data was slow and expensive, so we needed to come up with a more cost-effective solution.

 

Since we are already in the Hadoop world, we decided to use either Hive or Pig. Either would be cost effective and yield good performance, since both benefit from Hadoop's distributed storage and processing. The end users were more comfortable with SQL, so we went with Hive. XML can be ingested directly into Hive using XPath, but a problem arises when you have a few hundred fields for which you need to generate XPath expressions: even though XPath is an excellent way to read from an XML file, the user still has to specify every tag to be read manually.

 

The solution was to have a piece of code that goes through a portion of the XML file containing a few records and spits out XPath expressions. Each XML tag can contain multiple tags within it, so we had to loop into tags and maintain counters on the parent as well as the child tags. Any XML file also has some parent tags that hold no value themselves but merely encapsulate child tags that do hold values; these need to be separated out.

 

Here is a sample of the input XML file:

 

<policy><policyLimit>1000</policyLimit><versionNumber>1</versionNumber><customerNumber>12345</customerNumber><vehicleCoverage>
<coverageLimit>500</coverageLimit><coverageCode>ABC</coverageCode></vehicleCoverage></policy>



 

Here policy is the parent tag, within which we have policyLimit through vehicleCoverage. vehicleCoverage is a child of policy but a parent to tags such as coverageLimit and coverageCode. We need to make sure that none of these parent tags shows up in our final XPath list. We also need to specify the base node tag for our input XML, which is policy in this case.
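To make the goal concrete: for the sample record above, the generator should emit ready-to-use Hive xpath_string expressions for the value-holding tags only, and flag vehicleCoverage as an unwanted parent. At the first level, covering the direct children of policy, the output would look something like this (the aliases and [1] indexes follow the format of the code below):

```
xpath_string(policyxml, '/policy[1]/policyLimit') as policylimit1
xpath_string(policyxml, '/policy[1]/versionNumber') as versionnumber1
xpath_string(policyxml, '/policy[1]/customerNumber') as customernumber1
```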

 

Here is sample Java code that iterates through the XML file (inputPath is the path to the input XML file and outputXPathTempPath is where the generated XPath list is written):

 

import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.util.HashSet;
import java.util.Hashtable;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

HashSet<String> unwantedParentTags = new HashSet<String>();
Hashtable<String, Integer> nodeCounters = new Hashtable<String, Integer>();
DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
org.w3c.dom.Document inFile = db.parse(new File(inputPath));
BufferedWriter outXPath = new BufferedWriter(new FileWriter(outputXPathTempPath));

// Level 1: walk the direct children of the base node
NodeList base = inFile.getElementsByTagName("policy");
Node baseNode = base.item(0);
NodeList children = baseNode.getChildNodes();
for (int i = 0; i < children.getLength(); i++) {
    Node item = children.item(i);
    if (item.getNodeType() != Node.ELEMENT_NODE) {
        continue;
    }
    // A tag that contains element children (e.g. vehicleCoverage) is a wrapper;
    // record it so it can be removed from the final XPath list later.
    boolean isParentTag = false;
    NodeList grandChildren = item.getChildNodes();
    for (int j = 0; j < grandChildren.getLength(); j++) {
        if (grandChildren.item(j).getNodeType() == Node.ELEMENT_NODE) {
            isParentTag = true;
            break;
        }
    }
    String tagPath = baseNode.getNodeName() + "/" + item.getNodeName();
    if (isParentTag) {
        unwantedParentTags.add(tagPath);
        continue;
    }
    // Count repeated tags so each occurrence gets an indexed XPath and a unique alias
    Integer seen = nodeCounters.get(tagPath);
    int occurrence = (seen == null) ? 1 : seen + 1;
    nodeCounters.put(tagPath, occurrence);
    outXPath.write("xpath_string(policyxml, '/" + baseNode.getNodeName()
            + "[" + occurrence + "]/" + item.getNodeName() + "') as "
            + item.getNodeName().toLowerCase() + occurrence);
    outXPath.newLine();
}
outXPath.close();

 

The code keeps track of the following:

  1. Counters on the parent and child tags
  2. A list of unwanted parent tags
  3. Fully qualified XPaths in the format basenode/childnode
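The first-level loop above can be generalized into a recursive walk so that nested wrappers such as vehicleCoverage are descended into rather than skipped. Below is a minimal, self-contained sketch of that idea; the class and method names are mine, results are collected in memory instead of a file, and repeated wrapper tags are always indexed as [1] to keep the example short:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class XPathGenerator {

    static List<String> xpaths = new ArrayList<String>();
    static Set<String> unwantedParentTags = new HashSet<String>();

    // True if the node encapsulates other elements rather than holding a value.
    static boolean hasElementChildren(Node node) {
        NodeList children = node.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            if (children.item(i).getNodeType() == Node.ELEMENT_NODE) {
                return true;
            }
        }
        return false;
    }

    // Recursively walk element children: wrapper tags are recorded as unwanted
    // parents and descended into; leaf tags become xpath_string expressions.
    static void walk(Node node, String path) {
        NodeList children = node.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Node item = children.item(i);
            if (item.getNodeType() != Node.ELEMENT_NODE) {
                continue;
            }
            String name = item.getNodeName();
            if (hasElementChildren(item)) {
                unwantedParentTags.add(name);
                walk(item, path + "/" + name + "[1]");
            } else {
                xpaths.add("xpath_string(policyxml, '" + path + "/" + name
                        + "') as " + name.toLowerCase());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        String sample = "<policy><policyLimit>1000</policyLimit>"
                + "<versionNumber>1</versionNumber>"
                + "<customerNumber>12345</customerNumber>"
                + "<vehicleCoverage><coverageLimit>500</coverageLimit>"
                + "<coverageCode>ABC</coverageCode></vehicleCoverage></policy>";
        DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = db.parse(new InputSource(new StringReader(sample)));
        walk(doc.getDocumentElement(), "/policy[1]");
        for (String xpath : xpaths) {
            System.out.println(xpath);
        }
        System.out.println("Unwanted parent tags: " + unwantedParentTags);
    }
}
```

Running this against the sample record yields one expression per value-holding tag, including the nested ones such as xpath_string(policyxml, '/policy[1]/vehicleCoverage[1]/coverageCode') as coveragecode, and reports vehicleCoverage as an unwanted parent tag.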

 

The code shown above only works at the first level while iterating through the XML, but can be modified to recursively loop through each level. Once we have a complete list of XPath expressions as well as unwanted parent tags, the next step is to iterate through the entire list and remove the unwanted parent tags. With the final list of XPath expressions in hand, the only step left is to write the Hive script that reads from our XML file. First we must pre-process the raw XML into a set of Hive-friendly, newline-terminated XML records, cleansing embedded newlines and other formatting. The source XML in our case contains whitespace and newlines for readability. We delete literal ampersands, convert all newlines and carriage returns to spaces, insert a newline after each record-level closing tag, and drop any blank lines:

 

cat <Input File Name>.xml | tr -d '&' | tr '\n' ' ' | tr '\r' ' ' | sed 's|</policy>|</policy>\n|g' | grep -v '^\s*$' > <Output File Name>.xml

 

Once we have the list of XPaths and a processed XML file, we can write the Hive script, which has the following parts:

 

  1. Create a Hive table on top of the XML flat file. Here each table record consists of a single string column that contains the XML text for one XML record.
  2. Create a view by reading from the above created table incorporating all the needed XPath expressions.
  3. Create a table of persisted data, since views may have performance issues.

 

Here is a sample of the Hive script:

 

CREATE EXTERNAL TABLE xpath_table (policyxml string)
STORED AS TEXTFILE
LOCATION '/hdfs/path/to/flat/processed/file';

CREATE VIEW xpath_view(policylimit, coveragecode)
AS SELECT
xpath_string(policyxml, '/policy[1]/policyLimit'),
xpath_string(policyxml, '/policy[1]/vehicleCoverage[1]/coverageCode')
FROM xpath_table;

CREATE TABLE xpath_table_final AS SELECT * FROM xpath_view;

SELECT * FROM xpath_table_final WHERE coveragecode = 'ABC';

 

 

Also, retaining the original source XML allows us to create specific XPath views to fulfill different requirements. We ran multiple queries on the resulting table, including joins, averages, sums, etc., and got the desired output. The Java code that generates the XPath expressions takes only a few seconds, since it works on a very small sample; table creation time depends on the cluster configuration.

 


Chandeep Singh is a Software Engineer working at Intel as part of Intel Distribution for Apache Hadoop, Professional Services. He has been working in the Hadoop world for a while, has gotten his hands dirty with MapReduce, Hive, Pig & Sqoop, and has enjoyed every bit of it.
