Hitachi Vantara Lumada and Pentaho Documentation

Using the ORC Input step on the Spark engine


You can set up the ORC Input step to run on the Spark engine. Spark processes null values differently than the Pentaho engine, so you may need to adjust your transformation to successfully process null values according to Spark's processing rules.
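The difference in null handling can be sketched in plain Python. This is an illustrative approximation only: in Spark SQL, an expression involving a null operand generally evaluates to null, whereas the Pentaho engine applies its own null-handling settings.

```python
# Illustrative sketch of Spark-style null propagation (not the step's actual
# implementation): any operation involving a null (None) yields null.
def spark_style_add(a, b):
    """Return None if either operand is None, mirroring Spark SQL semantics."""
    if a is None or b is None:
        return None
    return a + b
```

A transformation that assumes a null plus a number produces a number under the Pentaho engine would instead receive a null under Spark, which is the kind of adjustment this section refers to.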

Because of Cloudera Distribution of Spark (CDS) limitations, this step does not support reading Hive tables containing ORC-format data files through AEL when the Spark application runs in YARN mode. As an alternative, you can use the Parquet data format for columnar data with Impala.


Enter the following information in the ORC Input step fields:

Step name: Specify the unique name of the ORC Input step on the canvas. You can customize the name or use the provided default.
Folder/File Name: Specify the fully qualified URL of the source file or folder name for the input fields. Click Browse to display the Open File window and navigate to the file or folder. For the supported file system types, see Connecting to Virtual File Systems. The Spark engine reads all the ORC files in a specified folder as inputs.
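The folder behavior described above can be sketched as follows. This is a hypothetical illustration, not the step's implementation: when the given path is a folder, every ORC file inside it is treated as an input.

```python
# Sketch only: expand a folder path to all of its .orc files, mirroring how
# the Spark engine treats a folder URL as a set of ORC inputs.
from pathlib import Path

def list_orc_inputs(path_str):
    """Return all .orc files if the path is a folder, else the single file."""
    p = Path(path_str)
    if p.is_dir():
        return sorted(str(f) for f in p.glob("*.orc"))
    return [str(p)]
```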


The Fields section contains the following items:

  • A Pass through fields from the previous step option, which reads the fields from the input file without requiring you to redefine any of them.
  • A table defining data about the columns to read from the ORC file.
ORC Input step

The table in the Fields section defines the fields to read as input from the ORC file, the associated PDI field name, and the data type of the field. Enter the information for the ORC Input step fields as shown in the following table:

ORC path (ORC type): Specify the name of the field as it appears in the ORC data file or files, and the ORC data type.
Name: Specify the name of the input field.
Type: Specify the data type of the input field.
Format: Specify the date format when the Type specified is Date.
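Date format strings in PDI typically follow Java's SimpleDateFormat conventions (for example, "yyyy-MM-dd"). As a hedged sketch, the following translates a few common Java pattern tokens into Python strptime directives to show what such a format string describes; the token table is an illustrative subset, not a complete mapping.

```python
# Hedged sketch: translate a few common Java SimpleDateFormat tokens
# (assumed to be what the Format field accepts) into Python strptime
# directives, then parse a sample value.
from datetime import datetime

# Order matters: "MM" must be replaced before "mm".
JAVA_TO_PYTHON = {
    "yyyy": "%Y",
    "MM": "%m",
    "dd": "%d",
    "HH": "%H",
    "mm": "%M",
    "ss": "%S",
}

def parse_with_java_format(value, java_fmt):
    """Parse a date string using a Java-style format pattern."""
    py_fmt = java_fmt
    for java_token, py_token in JAVA_TO_PYTHON.items():
        py_fmt = py_fmt.replace(java_token, py_token)
    return datetime.strptime(value, py_fmt)
```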

You can define the fields manually, or you can provide a path to an ORC data file and click Get Fields to populate all the fields. When the fields are retrieved, the ORC type is converted into an appropriate PDI type. You can preview the data in the ORC file by clicking Preview. You can change the PDI type by using the Type drop-down or by entering the type manually.
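The ORC-to-PDI conversion performed by Get Fields can be pictured as a lookup table. The mapping below is an assumed, illustrative sketch of plausible defaults, not the step's authoritative conversion rules, which are determined by the step itself.

```python
# Hypothetical sketch of an ORC-to-PDI type mapping, approximating what
# Get Fields might produce. The actual mapping is defined by the step.
ORC_TO_PDI = {
    "string": "String",
    "int": "Integer",
    "bigint": "Integer",
    "float": "Number",
    "double": "Number",
    "boolean": "Boolean",
    "date": "Date",
    "timestamp": "Timestamp",
    "decimal": "BigNumber",
    "binary": "Binary",
}

def pdi_type_for(orc_type):
    """Return the assumed default PDI type for an ORC type; String as fallback."""
    return ORC_TO_PDI.get(orc_type, "String")
```

After retrieval, any of these defaults can still be overridden with the Type drop-down, as described above.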

AEL types

In AEL, the ORC step automatically converts ORC rows to Spark SQL rows. The following table lists the conversion types.

ORC Type → Spark SQL Type
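The rows of the conversion table are not reproduced above. As a hedged approximation, Spark's built-in ORC reader conventionally maps ORC primitive types to Spark SQL types as sketched below; treat this as illustrative, not as the step's authoritative table.

```python
# Hedged sketch of the conventional ORC-to-Spark-SQL type conversion,
# approximating (not reproducing) the step's conversion table.
ORC_TO_SPARK_SQL = {
    "boolean": "BooleanType",
    "tinyint": "ByteType",
    "smallint": "ShortType",
    "int": "IntegerType",
    "bigint": "LongType",
    "float": "FloatType",
    "double": "DoubleType",
    "string": "StringType",
    "binary": "BinaryType",
    "timestamp": "TimestampType",
    "date": "DateType",
}
```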

Metadata injection support

All fields of this step support metadata injection. You can use this step with ETL metadata injection to pass metadata to your transformation at runtime.