Manjunath Prabhu: I am a Software Engineer working at Intel as part of Intel Distribution for Apache Hadoop, Professional Services. I’ve been working in the Hadoop world for four years on MapReduce, HBase, Hive, Pig & Sqoop, and have enjoyed every bit of it.

 

During the RDBMS days, schema design was the backbone of every project in terms of performance, scalability and stability. In today’s NoSQL world, designing an HBase table is in itself an important task.

 

In order to design an HBase table successfully, there are multiple factors that come into play:


  • Row key: This is one of the most important factors to take into consideration while designing an HBase table. Depending on the use case, most users favor static business keys as row keys. Row keys can be divided into the following types:


    1. Sequential key: Used when data is to be read sequentially from HBase. Let’s assume that the key is (country + userID + productID). Here we can do a partial scan on country to get the products sold per country, or on (country + userID) to get all the products bought by users in that country.
    2. Random key: Sequentially written keys can be slower because all writes may go to one region server. This scenario is known as hot spotting, where one region gets busy under the extra load while there isn’t any load on the others. To overcome this we use random keys, so that the load is balanced over different region servers. Random keys can easily be generated by hashing the key.
    3. Salted key: In this case we pick a salt from a fixed range (say 0 to 9) and prefix the key with it, so that rows are distributed into n buckets. This also overcomes hot spotting, and at the same time we can use n scanners to read the data in parallel (see the shell sketch below).
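
As a rough sketch in the HBase shell (the table names, key values and split points here are made up for illustration, not from a real schema), a partial scan over a sequential key and a pre-split table for salted keys might look like this:

# partial scan on the country prefix of a (country + userID + productID) key
scan 'SALES', {STARTROW => 'IN', STOPROW => 'IO'}

# pre-split a table into 10 buckets for keys salted with a 0-9 prefix
create 'EVENTS', 'DATA', {SPLITS => ['1', '2', '3', '4', '5', '6', '7', '8', '9']}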

 

  • Number of column families: This is also an important factor that needs to be taken into consideration, and the number of column families should be kept as small as possible, for the following reason. HBase stores data in HDFS at ${hbase.rootdir}/TableName/regionId/ColumnFamily/HFiles.

If one column family in a table holds much more data than the others, the region will be split once that family grows past the configured maximum region size. When this happens, even the sparsely populated families are split along with it, and HBase has to open multiple handlers to read many small files even though their data volume is small.

 

  • All the table configurations like block size, compression, versions, etc. can be set per column family. If we need to store and manage data in different ways, we should split it into different column families (see the sketch below).
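
A minimal sketch in the HBase shell, with made-up table and family names, of two column families managed differently within the same table:

create 'USER_PROFILE',
  {NAME => 'INFO', VERSIONS => '3', COMPRESSION => 'NONE'},
  {NAME => 'HISTORY', VERSIONS => '1', COMPRESSION => 'GZ'}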

 

  • Block size: Over here we are not talking about the HDFS block size but the HFile block size, which is by default set to 64 KB. Each block within an HFile is indexed by its starting byte offset. The smaller the block size, the better it is for random gets. Block size should be determined based on the use case; for example, if the use case does random gets we should make the block size small, but not so small that one row spans multiple blocks, since there is also an overhead involved in creating and maintaining block indexes. If the use case involves scans instead of gets, it is advisable to use a larger block size so that we have as few blocks as possible.
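
For instance (the table name, family name and 16 KB value are illustrative only), a smaller block size for a random-get heavy family can be set per column family at creation time:

create 'ORDERS', {NAME => 'DETAIL', BLOCKSIZE => '16384'}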

 

  • Block cache: This is what keeps HFile blocks in memory after they are loaded, and is helpful when you know you will be scanning/accessing rows from the same blocks multiple times. It is not very helpful in the case of random gets, and is by default set to true.
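
For a family whose blocks are unlikely to be re-read (the names here are hypothetical), the cache can be turned off so it does not evict more useful blocks:

alter 'ORDERS', {NAME => 'AUDIT', BLOCKCACHE => 'false'}
# depending on the HBase version, the table may need to be disabled before alter and enabled afterwards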


  • Compression: Even though the HFile format stores data in a very smart manner and performs much better than the MapFile format, we can still compress these files per column family. The most commonly recommended compression algorithm is Snappy because of its low compression and decompression time. There is also the option to use different compression algorithms for different column families in the same table.
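
Assuming the Snappy native libraries are installed on the cluster (and using an illustrative table and family name), it is enabled per column family like this:

create 'LOGS', {NAME => 'RAW', COMPRESSION => 'SNAPPY'}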

 

  • Bloom filter: This helps in reducing I/O operations and is as simple as asking a question: does a given row (or row + column) exist in this file or not? The answer can be "no" or "maybe", based on the bloom data maintained in the HFile, and HBase will read the file only in the "maybe" case. This is helpful when you know the rows or columns you will most commonly look up.
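
A row-level bloom filter can be enabled per column family ('ROWCOL' also covers column qualifiers); the table and family names below are illustrative:

create 'USERS', {NAME => 'INFO', BLOOMFILTER => 'ROW'}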

 

  • In memory: This should not be confused with the block cache and is by default set to false. When it is enabled, blocks from this column family are given higher priority in the block cache, so HBase tries to keep them in memory as long as possible.
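
For a small, frequently read lookup family (hypothetical names again), it can be enabled at creation time:

create 'PRODUCTS', {NAME => 'META', IN_MEMORY => 'true'}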

 

  • Time to live (TTL): This specifies the time (in seconds) after which a record is deleted, and is by default set to forever. It should be changed if you want HBase to delete rows after a certain period of time, which is useful when you store data in HBase that should be removed after aggregation.
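
For example, to have cells expire after 30 days (2,592,000 seconds) on an illustrative table:

create 'METRICS', {NAME => 'RAW', TTL => '2592000'}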

 

  • Min version: As we know, HBase maintains multiple versions of every record as and when it is updated. Min version should always be used along with TTL. For example, if the TTL is set to one year and there is no update on a row for that period, the row will be deleted. If you don’t want the row to disappear completely, you can set a min version for it, so that at least that many versions are always kept and only the older versions that have passed the TTL are deleted.
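
Combining the two on a hypothetical table, at least one version of each cell survives even after the TTL has passed:

create 'METRICS', {NAME => 'RAW', TTL => '31536000', MIN_VERSIONS => '1'}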

 

  • Max version: This is the maximum number of versions of a row that will be stored in HBase and is by default set to three. It is useful when you are updating the same row over and over again and do not care about older versions. If you are replacing the whole row every single time and do not need row history at all, you can set this value to one.

Here is a simple example of table creation in the HBase shell:

 

create 'MANJU',
{NAME => 'BASIC', VERSIONS => '1', COMPRESSION => 'NONE', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'false'}

describe 'MANJU'

enable 'MANJU'   # only needed if the table has previously been disabled; a newly created table is already enabled

 

To find out more about Hadoop @ Intel, please check out the Intel(r) Distribution for Apache Hadoop* (IDH) at hadoop.intel.com.

 

Manjunath