
45 [Latest] Apache HBase Job Interview Questions and Answers PDF


Real Time Apache HBase Interview Questions and Answers PDF

•    Explain What Is Hbase?
HBase is a column-oriented database management system that runs on top of HDFS (Hadoop Distributed File System). HBase is not a relational data store, and it does not support a structured query language like SQL.
In HBase, a master node manages the cluster, while region servers store portions of the tables and perform the work on the data.

•    What Are The Different Commands Used In Hbase Operations?
There are five atomic commands which carry out different operations in HBase: Get, Put, Delete, Scan and Increment.
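As a rough illustration, the sketch below exercises all five operations through the HBase Java client API. It assumes an already opened Connection and an existing table named "users" with a column family "info"; both names are placeholders.

```java
import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BasicOperations {

    // Assumes an open Connection and an existing "users" table with a
    // column family "info" (both hypothetical names).
    static void run(Connection connection) throws IOException {
        try (Table table = connection.getTable(TableName.valueOf("users"))) {

            // Put: insert or update a cell
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
            table.put(put);

            // Get: read a single row
            Result result = table.get(new Get(Bytes.toBytes("row1")));
            System.out.println(Bytes.toString(
                    result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"))));

            // Increment: atomically increment a counter cell
            table.incrementColumnValue(Bytes.toBytes("row1"),
                    Bytes.toBytes("info"), Bytes.toBytes("visits"), 1L);

            // Scan: iterate over a range of rows
            try (ResultScanner scanner = table.getScanner(new Scan())) {
                for (Result r : scanner) {
                    System.out.println(Bytes.toString(r.getRow()));
                }
            }

            // Delete: remove a whole row (or specific columns/versions)
            table.delete(new Delete(Bytes.toBytes("row1")));
        }
    }
}
```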

•    Explain Why To Use Hbase?
o    High capacity storage system
o    Distributed design to cater to large tables
o    Column-oriented store
o    Horizontally Scalable
o    High performance & Availability
o    The base goal of HBase is to handle billions of rows, millions of columns and thousands of versions
o    Unlike HDFS (Hadoop Distributed File System), it supports random, real-time CRUD operations

•    How To Connect To Hbase?
A connection to HBase can be established either interactively through the HBase Shell (a JRuby-based command-line interface) or programmatically through the HBase client API, for example the Java ConnectionFactory class.
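A minimal sketch of a programmatic connection using the Java client API; the ZooKeeper quorum and the table name below are placeholders, and the cluster configuration would normally come from hbase-site.xml on the classpath.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

public class ConnectExample {
    public static void main(String[] args) throws Exception {
        // Picks up hbase-site.xml from the classpath; the quorum below is a placeholder.
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com,zk3.example.com");

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("users"))) {
            System.out.println("Connected; table: " + table.getName());
        }
    }
}
```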

•    Mention What Are The Key Components Of Hbase?
1.    ZooKeeper: does the coordination work between the client and the HBase Master
2.    HBase Master: monitors the Region Servers
3.    RegionServer: monitors the Regions
4.    Region: contains the in-memory data store (MemStore) and the HFile
5.    Catalog Tables: consist of ROOT and META (in recent HBase versions only the hbase:meta table remains)

•    What Is The Role Of Master Server In Hbase?
The Master server assigns regions to region servers and handles load balancing in the cluster.

•    Explain What Hbase Consists Of?
o    HBase consists of a set of tables
o    Each table contains rows and columns, as in a traditional database
o    Each table must contain an element defined as a primary key (the row key)
o    An HBase column denotes an attribute of an object

•    What Is The Role Of Zookeeper In Hbase?
ZooKeeper maintains configuration information, provides distributed synchronization, and maintains the communication between clients and region servers.

•    Mention How Many Operational Commands In Hbase?
There are five types of operational commands in HBase:
o    Get
o    Put
o    Delete
o    Scan
o    Increment

•    When Do We Need To Disable A Table In Hbase?
In HBase, a table is disabled to allow it to be modified or to change its settings. When a table is disabled, it cannot be accessed through the scan command.
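A minimal sketch of the disable/alter/enable cycle using the Java Admin API; the table name "users" is a placeholder.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public class DisableEnableExample {
    // "users" is a hypothetical table name.
    static void alterTableSettings(Connection connection) throws IOException {
        try (Admin admin = connection.getAdmin()) {
            TableName users = TableName.valueOf("users");
            admin.disableTable(users);   // scans and gets against the table now fail
            // ... modify the table or its column families here ...
            admin.enableTable(users);    // bring the table back online
        }
    }
}
```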

•    Explain What Is Wal And Hlog In Hbase?
WAL (Write Ahead Log) is similar to the MySQL BIN log; it records all the changes that occur to the data. It is a standard Hadoop sequence file and it stores HLogKeys. These keys consist of a sequence number as well as the actual data and are used to replay data that has not yet been persisted after a server crash. So, in case of a server failure, the WAL works as a lifeline and recovers the lost data.

•    What Are The Different Types Of Filters Used In Hbase?
Filters are used to get specific data from an HBase table rather than all the records.
They are of the following types (a short example follows the list):
o    Column value filters
o    Column value comparators
o    KeyValue metadata filters
o    RowKey filters
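As an illustration, the sketch below applies a column value filter and a row key filter to scans using the HBase 2.x Java client API; the table handle, the "info" column family and the "user_" key prefix are assumptions.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilterExample {
    // Assumes an open Table for a table that has a column family "info".
    static void scanWithFilters(Table table) throws IOException {
        // Column value filter: only rows where info:city equals "Pune".
        Scan valueScan = new Scan().setFilter(new SingleColumnValueFilter(
                Bytes.toBytes("info"), Bytes.toBytes("city"),
                CompareOperator.EQUAL, Bytes.toBytes("Pune")));
        try (ResultScanner results = table.getScanner(valueScan)) {
            results.forEach(r -> System.out.println(Bytes.toString(r.getRow())));
        }

        // Row key filter: only rows whose key starts with "user_".
        Scan prefixScan = new Scan().setFilter(new PrefixFilter(Bytes.toBytes("user_")));
        try (ResultScanner results = table.getScanner(prefixScan)) {
            results.forEach(r -> System.out.println(Bytes.toString(r.getRow())));
        }
    }
}
```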

•    In Hbase What Are Column Families?
Column families comprise the basic unit of physical storage in HBase, and features such as compression are applied at the column-family level.

•    Name Three Disadvantages Hbase Has As Compared To Rdbms?
1.    HBase does not have a built-in authentication/permission mechanism
2.    Indexes can be created only on the key (row key) column, whereas in an RDBMS they can be created on any column
3.    With one HMaster node there is a single point of failure

•    Explain What Is The Row Key?
The row key is defined by the application. Because the combined key is prefixed by the row key, the application can define the desired sort order. The row key also allows logical grouping of cells and ensures that all cells with the same row key are co-located on the same server.
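For illustration, a sketch of how an application might build a composite row key to control sort order and grouping; the "userId + reversed timestamp" layout and the "info" column family are assumptions, not something prescribed by HBase.

```java
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class RowKeyDesign {
    // Builds a composite row key of the form <userId>:<reversed timestamp> so that
    // all rows for a user sort together, newest event first. Names are illustrative.
    static byte[] eventRowKey(String userId, long eventTimeMillis) {
        long reversedTs = Long.MAX_VALUE - eventTimeMillis;  // newest-first ordering
        return Bytes.add(Bytes.toBytes(userId + ":"), Bytes.toBytes(reversedTs));
    }

    static Put newEvent(String userId, long eventTimeMillis, byte[] payload) {
        Put put = new Put(eventRowKey(userId, eventTimeMillis));
        put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("payload"), payload);
        return put;
    }
}
```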

•    Is Hbase A Scale Out Or Scale Up Process?
HBase runs on top of Hadoop, which is a distributed system. Hadoop can scale out as and when required by adding more machines on the fly. So HBase is a scale-out process.

•    Explain Deletion In Hbase? Mention What Are The Three Types Of Tombstone Markers In Hbase?
When you delete a cell in HBase, the data is not actually deleted; instead a tombstone marker is set, making the deleted cells invisible. HBase deletes are actually removed during compactions.
Three types of tombstone markers are there:
o    Version delete marker: For deletion, it marks a single version of a column
o    Column delete marker: For deletion, it marks all the versions of a column
o    Family delete marker: For deletion, it marks all the columns of a column family

•    What Are The Steps In Writing Something Into Hbase By A Client?
In HBase, the client does not write directly into the HFile. The client first writes to the WAL (Write Ahead Log), and the data then goes into the MemStore. The MemStore flushes the data to permanent storage (HFiles on disk) from time to time.

•    Explain How Does Hbase Actually Delete A Row?
In HBase, whatever you write is stored from RAM to disk, and these disk writes are immutable barring compaction. During the deletion process in HBase, major compactions process delete markers while minor compactions don't. A normal delete results in a delete tombstone marker; the deleted data it represents is removed during compaction.
Also, if you delete data and add more data, but with an earlier timestamp than the tombstone timestamp, further Gets may be masked by the delete/tombstone marker and hence you will not receive the inserted value until after the major compaction.

•    What Is Compaction In Hbase?
As more and more data is written to HBase, many HFiles get created. Compaction is the process of merging these HFiles into one file; after the merged file is created successfully, the old files are discarded.

•    Explain What Happens If You Alter The Block Size Of A Column Family On An Already Occupied Database?
When you alter the block size of a column family, the new data occupies the new block size while the old data remains within the old block size. During compaction, old data will take on the new block size. New files, as they are flushed, have the new block size, whereas existing data will continue to be read correctly. After the next major compaction, all data should have been transformed to the new block size.

•    What Are The Different Compaction Types In Hbase?
There are two types of compaction: major and minor. In minor compaction, adjacent small HFiles are merged to create a single HFile without removing deleted data (tombstones). Files to be merged are chosen randomly.
In major compaction, all the HFiles of a column family are merged and a single HFile is created. Deleted data is discarded, and major compaction is generally triggered manually.

•    What Is A Cell In Hbase?
A cell in HBase is the smallest unit of an HBase table; it holds a piece of data in the form of the tuple {row, column, version}.
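To make the {row, column, version} coordinates concrete, a small sketch that prints every cell of a fetched Result using the Java client API:

```java
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class CellExample {
    // Prints each cell of a Result as its {row, column, version(timestamp)} coordinates plus value.
    static void dumpCells(Result result) {
        for (Cell cell : result.rawCells()) {
            System.out.printf("{row=%s, column=%s:%s, version=%d} = %s%n",
                    Bytes.toString(CellUtil.cloneRow(cell)),
                    Bytes.toString(CellUtil.cloneFamily(cell)),
                    Bytes.toString(CellUtil.cloneQualifier(cell)),
                    cell.getTimestamp(),
                    Bytes.toString(CellUtil.cloneValue(cell)));
        }
    }
}
```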

•    What Is The Scope Of A Rowkey In Hbase?
Rowkeys are scoped to ColumnFamilies. The same rowkey could exist in each ColumnFamily that exists in a table without collision.

•    What Is The Role Of The Class Hcolumndescriptor In Hbase?
This class is used to store information about a column family such as the number of versions, compression settings, etc. It is used as input when creating a table or adding a column.
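A minimal sketch using HColumnDescriptor when creating a table through the Java Admin API. The table name "users", the column family "info", the version count and the compression choice are illustrative; in HBase 2.x this class is deprecated in favor of ColumnFamilyDescriptorBuilder, but the usage pattern is the same.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.io.compress.Compression;

public class CreateTableExample {
    // Creates a hypothetical "users" table with one column family "info"
    // that keeps up to 3 versions and uses GZ compression.
    static void createUsersTable(Connection connection) throws IOException {
        HColumnDescriptor info = new HColumnDescriptor("info");
        info.setMaxVersions(3);
        info.setCompressionType(Compression.Algorithm.GZ);

        HTableDescriptor users = new HTableDescriptor(TableName.valueOf("users"));
        users.addFamily(info);

        try (Admin admin = connection.getAdmin()) {
            admin.createTable(users);
        }
    }
}
```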

•    What Is A Namespace In Hbase?
A namespace is a logical grouping of tables. It is similar to a database object in a relational database system.
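A short sketch of creating a namespace with the Java Admin API; the namespace and table names are placeholders.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public class NamespaceExample {
    // Creates a hypothetical "analytics" namespace; tables inside it are then
    // addressed as "analytics:<table>", e.g. "analytics:events".
    static void createNamespace(Connection connection) throws IOException {
        try (Admin admin = connection.getAdmin()) {
            admin.createNamespace(NamespaceDescriptor.create("analytics").build());
            TableName events = TableName.valueOf("analytics:events");
            System.out.println("Fully qualified table name: " + events);
        }
    }
}
```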

•    What Is The Lower Bound Of Versions In Hbase?
The lower bound of versions indicates the minimum number of versions to be stored in HBase for a column. For example, if the value is set to 3, at least the three latest versions will always be kept while older ones may be removed (for example, once their TTL expires).

•    What Is Hotspotting In Hbase?
Hotspotting is a situation when a large amount of client traffic is directed at one node, or only a few nodes, of a cluster. This traffic may represent reads, writes, or other operations. This traffic overwhelms the single machine responsible for hosting that region, causing performance degradation and potentially leading to region unavailability.

•    What Is Ttl (time To Live) In Hbase?
TTL is a data retention technique with which the versions of a cell can be preserved for a specific time period. Once that time has elapsed, the specific version will be removed (typically during the next major compaction).
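A small sketch of setting TTL (together with the related minimum-versions lower bound mentioned above) on a column family using the HBase 2.x builder API; the family name "session" and the 7-day period are assumptions.

```java
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class TtlExample {
    // Column family that keeps cells for 7 days; setMinVersions(1) guarantees that
    // the latest version survives even after the TTL has expired.
    static ColumnFamilyDescriptor sessionFamily() {
        return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("session"))
                .setTimeToLive(7 * 24 * 60 * 60)  // TTL in seconds
                .setMinVersions(1)
                .build();
    }
}
```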

•    Why Do We Pre-create Empty Regions?
Tables in HBase are initially created with one region by default. For bulk imports, this means all clients will write to the same region until it is large enough to split and become distributed across the cluster. So empty regions are pre-created (the table is pre-split) to spread the write load across the cluster from the start and make this process faster.
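As a sketch, a table could be pre-split at creation time by passing explicit split keys to the Admin API (HBase 2.x builder classes); the "events" table, the "d" family and the split points are placeholders.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitExample {
    // Creates a hypothetical "events" table pre-split into four regions so a bulk
    // import spreads across region servers instead of hammering a single region.
    static void createPreSplitTable(Connection connection) throws IOException {
        byte[][] splitKeys = {
                Bytes.toBytes("25"), Bytes.toBytes("50"), Bytes.toBytes("75")
        };
        try (Admin admin = connection.getAdmin()) {
            admin.createTable(
                    TableDescriptorBuilder.newBuilder(TableName.valueOf("events"))
                            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("d"))
                            .build(),
                    splitKeys);
        }
    }
}
```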

•    Does Hbase Support Table Joins?
HBase does not support table joins. However, using a MapReduce job we can implement join-like queries to retrieve data from multiple HBase tables.

•    Which File In Hbase Is Designed After The Sstable File Of Bigtable?
The HFile in HBase, which stores the actual data (not metadata), is designed after the SSTable file of BigTable.

•    What Is A Hbase Store?
An HBase Store hosts a MemStore and zero or more StoreFiles (HFiles). A Store corresponds to a column family of a table for a given region.

•    What Are The Two Types Of Table Design Approach In Hbase?
They are:
1.    Short and Wide
2.    Tall and Thin

•    When Do We Do Manual Region Splitting?
Manual region splitting is done when there is an unexpected hotspot in your table because many clients are querying the same key range of the table.
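A minimal sketch of splitting a region manually at a chosen row key through the Java Admin API; the table name and the split point are placeholders.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.util.Bytes;

public class ManualSplitExample {
    // Manually splits the hot region of a hypothetical "events" table at a chosen row key.
    static void splitHotRegion(Connection connection) throws IOException {
        try (Admin admin = connection.getAdmin()) {
            admin.split(TableName.valueOf("events"), Bytes.toBytes("user_42"));
        }
    }
}
```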

•    In Which Scenario Should We Consider Creating A Short And Wide Hbase Table?
The short and wide table design is considered when:
o    There is a small number of columns
o    There is a large number of rows

•    In Hbase What Is Log Splitting?
When a region is opened (for example, during recovery after a RegionServer failure), the edits in the WAL file which belong to that region need to be replayed. Therefore, edits in the WAL file must be grouped by region so that particular sets can be replayed to regenerate the data in a particular region. The process of grouping the WAL edits by region is called log splitting.

•    How Does Hbase Support Bulk Data Loading?
There are two main steps to do a bulk data load in HBase:
1.    Generate the HBase data files (StoreFiles) from the data source using a custom MapReduce job. The StoreFiles are created in HBase's internal format, which can be loaded efficiently.
2.    The prepared files are then imported using a tool such as completebulkload, which loads the data into a running cluster. Each file gets loaded into one specific region.
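A rough sketch of step 1, configuring a MapReduce job to emit region-aligned HFiles with HFileOutputFormat2; the mapper and input setup are omitted, and the table name and output path are placeholders.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoadJobSetup {
    // Configures a MapReduce job (mapper/input setup omitted) to write HFiles for
    // a hypothetical "events" table; the HFiles are then imported with the
    // completebulkload tool. Paths and the table name are placeholders.
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "events-bulkload");
        // ... set mapper, input format, and input path for the source data here ...

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("events"));
             RegionLocator locator = connection.getRegionLocator(TableName.valueOf("events"))) {
            // Step 1: have the job emit HFiles partitioned to match the table's regions.
            HFileOutputFormat2.configureIncrementalLoad(job, table, locator);
            FileOutputFormat.setOutputPath(job, new Path("/tmp/events-hfiles"));
            job.waitForCompletion(true);
        }
        // Step 2 (outside this program): load the generated HFiles into the running
        // cluster with the completebulkload tool.
    }
}
```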

•    Why Multiwal Is Needed?
With a single WAL per RegionServer, the RegionServer must write to the WAL serially, because HDFS files must be sequential. This causes the WAL to be a performance bottleneck. MultiWAL allows a RegionServer to write multiple WAL streams in parallel by using multiple pipelines in the underlying HDFS instance, which increases total write throughput.

•    How Does Hbase Provide High Availability?
HBase uses a feature called region replication. With this feature, for each region of a table there are multiple replicas that are opened in different RegionServers. The load balancer ensures that the region replicas are not co-hosted on the same region servers.

•    How Does Wal Help When A Regionserver Crashes?
The Write Ahead Log (WAL) records all changes to data in HBase to file-based storage. If a RegionServer crashes or becomes unavailable before the MemStore is flushed, the WAL ensures that the changes to the data can be replayed.

•    What Is Hregionserver In Hbase?
HRegionServer is the RegionServer implementation. It is responsible for serving and managing regions. In a distributed cluster, a RegionServer runs on a DataNode.

•    What Are The Different Block Caches In Hbase?
HBase provides two different BlockCache implementations: the default on-heap LruBlockCache and the BucketCache, which is (usually) off-heap.

