Using the HBase Input step on the Spark engine
You can set up the HBase Input step to run on the Spark engine. Spark processes null values differently than the Pentaho engine, so you may need to adjust your transformation to process null values following Spark's processing rules.
Before using the HBase Input step on the Spark engine, you must set up the application.properties file and vendor-specific JARs. See HBase setup for Spark for instructions.
Enter the following information in the transformation step name field.
- Step name: Specifies the unique name of the HBase Input step on the canvas. You can customize the name or leave it as the default.
The HBase Input step features several tabs with fields. Each tab is described below.
Configure query tab
Before a value can be read from HBase, you must specify its type and column family, as well as the type of the table key. A mapping must be defined before a source table can be used. You can output some or all of the fields defined in the mapping: delete rows from the fields table to select a subset of the fields, or clear all rows to output every field defined in the mapping.
This tab contains connection details and basic query information. You can configure a connection by using the Hadoop cluster properties, or by using an hbase-site.xml and (optionally) an hbase-default.xml configuration file.
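When pointing the step at configuration files directly rather than a cluster configuration, an hbase-site.xml similar to the following is typical. This is a minimal sketch; the ZooKeeper host names below are placeholders, not values from this document:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- ZooKeeper ensemble the HBase client should contact -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk-host-1,zk-host-2,zk-host-3</value>
  </property>
  <!-- Client port of the ZooKeeper ensemble (2181 is the default) -->
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```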
This tab includes the following fields:
|Hadoop Cluster||Click the drop-down menu to select an existing Hadoop cluster configuration.|
|URL to hbase-site.xml||Specify the address of the hbase-site.xml file by entering its path or clicking Browse.|
|URL to hbase-default.xml||Specify the address of the hbase-default.xml file by entering its path or clicking Browse.|
|HBase table name||The name of the source HBase table you want to read. Click Get mapped table names to populate the drop-down list of available table names.|
|Mapping name||A mapping you can use to decode and interpret column values. Click Get mappings for the specified table to populate the drop-down list of available mappings.|
|Store mapping info in step meta data||Select this option when using the Spark engine to store mapping information in the step's metadata instead of loading it from HBase at runtime. Note: This option must be selected for the HBase Input step to function correctly with the Spark engine.|
|Start key value (inclusive) for table scan||Specifies the starting key value of a partial scan, including the value entered.|
|Stop key value (exclusive) for table scan||Specifies the stopping key value of a partial scan, excluding the value entered. The start key and stop key fields may be left blank. If the stop key field is left blank, then all rows beginning with and including the start key will be returned.|
|Scanner row cache size||The number of rows to cache each time a fetch request is made. See the Performance considerations section below for more information.|
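The partial-scan semantics above (inclusive start key, exclusive stop key, and a blank stop key meaning scan to the end of the table) can be sketched in plain Python. The table and row keys here are invented for illustration:

```python
def range_scan(rows, start_key=None, stop_key=None):
    """Yield (key, value) pairs the way an HBase partial scan would:
    the start key is inclusive, the stop key is exclusive, and a
    blank (None) stop key means scan to the end of the table."""
    for key, value in sorted(rows.items()):
        if start_key is not None and key < start_key:
            continue
        if stop_key is not None and key >= stop_key:
            break
        yield key, value

# Hypothetical table keyed by string row keys.
table = {"row01": "a", "row02": "b", "row03": "c", "row04": "d"}

# Start key inclusive, stop key exclusive: returns row02 and row03 only.
print(list(range_scan(table, "row02", "row04")))

# Blank stop key: everything from the start key onward is returned.
print(list(range_scan(table, "row03")))
```

On a real cluster the scanner row cache size controls how many of these rows are fetched per round trip, which affects throughput rather than which rows are returned.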
Key fields table
This table displays the metadata for the selected table.
|#||The order of query limitation fields.|
|Alias||The name that the field will be given in the output stream.|
|Key||Indicates whether a field is the table's key field or not.|
|Column family||The column family in the HBase source table that the field belongs to.|
|Column name||The name of the column in the HBase table. The column family plus the column name uniquely identifies a column in an HBase table.|
|Type||The PDI data type for the field.|
|Format||Applies a formatting mask to the field. A formatting string must be provided for date values involved in a range scan (and optionally for numbers). There are two ways to provide this information in the dialog box.|
|Indexed values||An optional set of values you can define for string columns by entering comma-separated data in this field.|
|Get Key/Fields Info||Populates the field list and displays the name of the key as defined in the mapping when the connection information is complete and valid.|
Create/Edit mappings tab
Use the fields on this tab to create or edit mappings for an HBase table. The mapping defines metadata about the values that are stored in the table. Since data is stored as raw bytes in HBase, PDI can decode values and execute comparisons for column-based result set filtering. The fields area of the tab is used to enter information about the columns in the HBase table that the user wants to map. Selecting the name of an existing mapping loads the fields defined in that mapping into the fields area of the display.
A valid mapping must define metadata for the key of the source HBase table. The key must have a value specified in the Alias column because a name is not given to the key of an HBase table. Non-key columns must specify the Column family and the Column name that they belong to. Non-key columns can have an optional alias; if one is not supplied, then the column name is used as an alias. All fields must have type information supplied.
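The validity rules above can be expressed as a small check. The field layout below is a hypothetical stand-in for how PDI stores a mapping, used only to make the rules concrete:

```python
def validate_mapping(fields):
    """Check the mapping rules: exactly one key field, which needs an
    Alias and a Type; non-key columns need a column family, a column
    name, and a Type (their alias defaults to the column name)."""
    errors = []
    keys = [f for f in fields if f.get("key")]
    if len(keys) != 1:
        errors.append("a mapping must define exactly one key field")
    for f in fields:
        if not f.get("type"):
            errors.append("missing type for %r" % f.get("alias"))
        if f.get("key"):
            if not f.get("alias"):
                errors.append("the key field requires an alias")
        else:
            if not (f.get("family") and f.get("column")):
                errors.append("non-key column %r needs a family and name"
                              % f.get("alias"))
            f.setdefault("alias", f.get("column"))
    return errors

# Hypothetical mapping: one key field plus one non-key column.
mapping = [
    {"key": True, "alias": "row_id", "type": "String"},
    {"key": False, "family": "cf1", "column": "amount", "type": "Long"},
]
print(validate_mapping(mapping))  # [] means the mapping is valid
```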
This tab includes the following fields:
|HBase table name||Displays a list of table names. Connection information in the previous tab must be valid and complete for this drop-down list to populate. Selecting a table here populates the Mapping name drop-down box with the names of available mappings for that table. Click Get table names to retrieve a list of existing table names.|
|Mapping name||Names of any mappings that exist for the table. This box is empty when there are no mappings defined for the selected table. Note: You can define multiple mappings on the same HBase table using different subsets of columns.|
Use these fields to specify values for the fields.
|#||The order of the mapping operation.|
|Alias||The name you want to assign to the HBase table key. This value is required for the table key column, but optional for non-key columns.|
|Key||Indicates whether the field is the table's key. The values are Y and N.|
|Column family||The column family in the HBase source table that the field belongs to. Non-key columns must specify a column family and column name.|
|Column name||The name of the column in the HBase table.|
|Type||The data type of the column. When Key is set to Y, the drop-down list displays the available key column types; when Key is set to N, it displays the available non-key column types.|
|Indexed values||Enter comma-separated data in this field to define values for string columns.|
|Save mapping (button)||Saves the mapping. If there is any missing information in the mapping definition, you will be prompted to correct the mapping definition before the mapping is saved.|
|Delete mapping (button)||Deletes the current named mapping in the current named table from the mapping table. Note that this does not delete the actual HBase table.|
|Create a tuple template (button)||Creates a mapping template for extracting tuples from HBase.|
Additional notes on data types
For keys to sort properly in HBase, you must note the distinction between signed and unsigned numbers. Because of the way that HBase stores integer and long data internally, the sign bit must be flipped before storing the signed number so that positive numbers will sort after negative numbers. Unsigned integer and unsigned long data can be stored directly without inverting the sign.
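The sign-bit flip described above can be illustrated directly. This is a generic sketch of the technique for 32-bit integers, not PDI's actual serialization code:

```python
import struct

def encode_sortable_int(n):
    """Encode a signed 32-bit integer so its big-endian bytes sort in
    numeric order: flip the sign bit so that negative values order
    before positive ones."""
    return struct.pack(">I", (n & 0xFFFFFFFF) ^ 0x80000000)

def decode_sortable_int(b):
    """Reverse the encoding: flip the sign bit back, then reinterpret
    the unsigned value as a signed 32-bit integer."""
    u = struct.unpack(">I", b)[0] ^ 0x80000000
    return u - 0x100000000 if u >= 0x80000000 else u

values = [-2, -1, 0, 1, 2]
encoded = sorted(encode_sortable_int(v) for v in values)
# Byte-wise sort order now matches numeric order.
print([decode_sortable_int(b) for b in encoded])  # [-2, -1, 0, 1, 2]
```

Without the flip, the raw two's-complement bytes of -1 (0xFFFFFFFF) would sort after those of 1 (0x00000001), which is why unsigned data can be stored directly but signed data cannot.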
- String columns: May optionally have a set of legal values defined for them by entering comma-separated data into the Indexed values column in the fields table.
- Date columns: Can be stored as either signed or unsigned long data types, with epoch-based timestamps. If you have a date key mapped as a String type, PDI can change the type to Date for manipulation in the transformation. No distinction is made between signed and unsigned numbers for the Date type because HBase only sorts on the key.
- Boolean columns: May be stored in HBase as 0/1 integer/long or as strings (Y/N, yes/no, true/false, T/F).
- BigNumber columns: May be stored as either a serialized BigDecimal object or in string form (that is, a string that can be parsed by BigDecimal's constructor).
- Serializable columns: Any serialized Java object.
- Binary columns: A raw array of bytes.
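The Boolean and BigNumber storage forms described above can be sketched as a small decoder. This is an illustrative Python sketch, with decimal.Decimal standing in for Java's BigDecimal; it is not PDI's actual decoding code:

```python
from decimal import Decimal

TRUE_WORDS = {"y", "yes", "true", "t"}
FALSE_WORDS = {"n", "no", "false", "f"}

def decode_boolean(raw):
    """Decode a boolean stored as a 0/1 integer or as a string such as
    Y/N, yes/no, true/false, or T/F (case-insensitive)."""
    if isinstance(raw, int):
        return raw != 0
    s = raw.lower()
    if s in TRUE_WORDS:
        return True
    if s in FALSE_WORDS:
        return False
    raise ValueError("unrecognized boolean form: %r" % raw)

def decode_bignumber(raw):
    """Decode a BigNumber stored in string form, i.e. any string that
    BigDecimal's constructor (here, Decimal) can parse."""
    return Decimal(raw)

print(decode_boolean("Y"), decode_boolean(0), decode_bignumber("123.456"))
```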
Filter result set tab
These fields are not used by the Spark engine.
Metadata injection support
All fields of this step support metadata injection except for the Hadoop Cluster field. You can use this step with ETL metadata injection to pass metadata to your transformation at runtime.