HBase setup for Spark

The HBase Input and HBase Output steps can run on Spark with the Adaptive Execution Layer (AEL). These steps can be used with the supported versions of Cloudera Distribution for Hadoop (CDH) and Hortonworks Data Platform (HDP). To read data from or write data to HBase, you must have an HBase target table on the cluster. If one does not exist, you can create it using HBase shell commands.
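
For example, the following HBase shell session creates a minimal target table. The table name pentaho_output and the column family cf1 are hypothetical placeholders; substitute names that match your transformation:

    hbase shell
    create 'pentaho_output', 'cf1'
    exit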

Note: Due to Cloudera limitations, the HBase Input step fails in the specific configuration of Spark running in YARN mode with Kerberos.

This article explains how you can set up the Pentaho Server to run these steps.

Set up the application properties file

You must set up the application.properties file so that Spark jobs on AEL can access the hbase-site.xml file from the HDFS cluster, which enables the Spark executors to connect to HBase. You must also specify the location of the vendor-specific JARs described below so that they can be loaded on the classpath.

Perform the following steps to set up the application.properties file; a sample configuration follows the procedure:

Procedure

  1. Navigate to the design-tools/data-integration/adaptive-execution/config folder and open the application.properties file with any text editor.

  2. Set the value of the hbaseConfDir property to the location of your hbase-site.xml file.

  3. Set the value of the extraLib property to the location of the vendor-specific JARs.

    The default value is ./extra.

  4. Save and close the file.
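
The relevant entries in application.properties look like the following sketch. The property names come from this procedure, but the hbaseConfDir value is an illustrative assumption that you must replace with the actual location of hbase-site.xml for your cluster:

    # Location of the directory containing hbase-site.xml (example value; adjust for your cluster)
    hbaseConfDir=/etc/hbase/conf

    # Location of the vendor-specific JARs (./extra is the default)
    extraLib=./extra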

Set up the vendor-specific JARs

Each vendor implements byte conversion for HBase differently, so you must use the JAR files that match the Hadoop distribution you are using.

Note: Vendor-specific JARs for HBase are not shipped with Spark or HDFS.

Perform the following steps to set up the vendor-specific JARs; a sample command sequence follows the procedure:

Procedure

  1. Navigate to the design-tools/data-integration/adaptive-execution/extra directory and delete the three hbase JAR files.

  2. Navigate to the design-tools/data-integration/plugins/pentaho-big-data-plugin/hadoop-configurations directory and locate your CDH or HDP distribution folder.

  3. Locate the lib/pmr directory in your distribution folder.

  4. Copy the six hbase JAR files, along with the metrics-core JAR file, to the design-tools/data-integration/adaptive-execution/extra folder.

  5. To complete your setup, you must restart the AEL daemon.
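
As a sketch, assuming you start from the design-tools/data-integration directory and your distribution folder is named cdh61 (a hypothetical placeholder; use your own CDH or HDP folder name), the JAR swap looks like this:

    # Remove the stock hbase JARs shipped with AEL
    rm adaptive-execution/extra/hbase-*.jar

    # Copy the distribution's hbase JARs and the metrics-core JAR
    cp plugins/pentaho-big-data-plugin/hadoop-configurations/cdh61/lib/pmr/hbase-*.jar adaptive-execution/extra/
    cp plugins/pentaho-big-data-plugin/hadoop-configurations/cdh61/lib/pmr/metrics-core-*.jar adaptive-execution/extra/

Afterward, restart the AEL daemon, for example by stopping the running daemon.sh process in the adaptive-execution directory and starting it again.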