Hitachi Vantara Lumada and Pentaho Documentation

Using Parquet Input on the Pentaho engine


If you are running your transformation on the Pentaho engine, use the following instructions to set up the Parquet Input step.

General

The following fields are general to this transformation step:

Step name: Specify the unique name of the Parquet input step on the canvas. You can customize the name or use the provided default.

Folder/File name: Specify the fully qualified URL of the source file or folder name for the input fields. Click Browse to display the Open File window and navigate to the file or folder. For the supported file system types, see Connecting to Virtual File Systems. The Pentaho engine reads a single Parquet file as an input.

Ignore empty folder: Select to allow the transformation to proceed when the specified source file is not found in the designated location. If not selected, the specified source file must be present in that location for the transformation to proceed.
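As a rough illustration of the fully qualified URLs the Folder/File name field expects, the following sketch splits a few hypothetical source URLs into their scheme (the file system type) and path. The URLs and schemes are examples only; the schemes actually available depend on the VFS connections configured in your environment.

```python
from urllib.parse import urlparse

# Hypothetical fully qualified URLs of the kind entered in
# the Folder/File name field.
examples = [
    "hdfs://namenode:8020/data/sales.parquet",
    "s3://my-bucket/input/sales.parquet",
    "file:///tmp/sales.parquet",
]

for url in examples:
    parts = urlparse(url)
    # The scheme identifies the file system type;
    # the path locates the file within it.
    print(parts.scheme, parts.path)
```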

Fields

The Fields section contains the following items:

Parquet input step:
  • The Pass through fields from the previous step option reads the fields from the input file without redefining any of them.
  • The table defines the columns to read from the Parquet file.

The table in the Fields section defines the fields to read as input from the Parquet file, the associated PDI field name, and the data type of the field.

Enter the information for the Parquet input step fields, as shown in the following table:

Path: Specify the name of the field as it appears in the Parquet data file or files, together with its Parquet data type.

Name: Specify the name of the input field.

Type: Specify the data type of the input field.

Format: Specify the date format when Type is Date.
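To illustrate what the Format mask does when Type is Date, the following sketch parses a raw date string with Python's strptime. Note that this is only an analogy: PDI date masks use Java-style pattern letters (for example, yyyy/MM/dd), not Python's directives, and the sample value is hypothetical.

```python
from datetime import datetime

# A date mask such as "yyyy/MM/dd" in PDI corresponds roughly to the
# strptime directives "%Y/%m/%d"; the mask tells the step how to
# interpret the raw value as a Date.
raw = "2024/06/15"  # hypothetical field value
parsed = datetime.strptime(raw, "%Y/%m/%d")
print(parsed.date())
```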

Provide a path to a Parquet data file and click Get Fields. When the fields are retrieved, the Parquet type is converted to an appropriate PDI type, as shown in the table below. You can preview the data in the Parquet file by clicking Preview. You can change the Type by using the Type drop-down or by entering the type manually.

Note: Get Fields does not support partitioned Parquet datasets. See Using Get Fields with Parquet partitioned datasets to use Spark on AEL as a tool to generate partitioned fields in this table.

PDI types

Parquet data types are converted to PDI data types as shown in the following table:

Parquet Type            PDI Type
ByteArray               Binary
Boolean                 Boolean
Double                  Number
Float                   Number
FixedLengthByteArray    Binary
Decimal                 BigNumber
Date                    Date
Enum                    String
Int8                    Integer
Int16                   Integer
Int32                   Integer
Int64                   Integer
Int96                   Timestamp
UInt8                   Integer
UInt16                  Integer
UInt32                  Integer
UInt64                  Integer
UTF8                    String
TimeMillis              Timestamp
TimestampMillis         Timestamp
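The conversion above can be sketched as a simple lookup table. This is a minimal illustration of the mapping Get Fields applies, not PDI's actual conversion code:

```python
# Parquet-to-PDI type mapping, taken from the table above.
PARQUET_TO_PDI = {
    "ByteArray": "Binary",
    "Boolean": "Boolean",
    "Double": "Number",
    "Float": "Number",
    "FixedLengthByteArray": "Binary",
    "Decimal": "BigNumber",
    "Date": "Date",
    "Enum": "String",
    "Int8": "Integer",
    "Int16": "Integer",
    "Int32": "Integer",
    "Int64": "Integer",
    "Int96": "Timestamp",
    "UInt8": "Integer",
    "UInt16": "Integer",
    "UInt32": "Integer",
    "UInt64": "Integer",
    "UTF8": "String",
    "TimeMillis": "Timestamp",
    "TimestampMillis": "Timestamp",
}

def pdi_type(parquet_type: str) -> str:
    """Return the PDI type assigned to a given Parquet type."""
    return PARQUET_TO_PDI[parquet_type]

print(pdi_type("Int96"))  # Timestamp
```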

Metadata injection support

All fields of this step support metadata injection. You can use this step with ETL metadata injection to pass metadata to your transformation at runtime.