Hitachi Vantara Lumada and Pentaho Documentation

Avro Output


The Avro Output step serializes data from the PDI data stream into the Avro binary or JSON format, then writes it to a file. Apache Avro is a data serialization system; Avro relies on schemas to decode binary data and extract fields.

This output step creates the following files:

  • A file containing output data in the Avro format
  • An Avro schema file defined by the fields in this step

Fields can be defined manually or extracted from incoming steps.

AEL Considerations

When using the Avro Output step with the Adaptive Execution Layer, the following factor affects performance and results:

  • Spark processes null values differently than the Pentaho engine. You will need to adjust your transformation to successfully process null values according to Spark's processing rules.


Enter the following information in the transformation step fields:

Step name: Specifies the unique name of the Avro Output step on the canvas. You can customize the name or leave it as the default.

Location: Indicates the file system type or specific cluster where the item you want to output is located. For the supported file system types, see Using the virtual file system browser in PDI.

Folder/File name: Specifies the location and/or name of the file or folder to write to. Click Browse to display the Open File window and navigate to the file or folder.

  • When running on the Pentaho engine, the Avro files are created.
  • When running on the Spark engine, a folder containing the Avro files is created.

Overwrite existing output file: Select to overwrite an existing file that has the same file name and extension.


The Avro Output transformation step features several tabs with fields. Each tab is described below.

Fields tab

Avro Output Fields tab
Note: The table in the Fields tab defines the fields that make up the Avro schema created by this step:

Avro path: The name of the field as it will appear in the Avro data and schema files.

Name: The name of the PDI field.

Avro type: The Avro data type of the field.

Precision: Applies only to the Decimal Avro type; the total number of digits in the number. The default is 10.

Scale: Applies only to the Decimal Avro type; the number of digits after the decimal point. The default is 0.

Default value: The default value of the field if it is null or empty.

Null: Specifies whether the field can contain null values.

Note: To avoid a transformation failure, make sure the Default value field contains values for all fields where Null is set to No.
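The Null and Default value settings above correspond to how fields are declared in an Avro schema: nullable fields are unions with "null", and non-null fields carry a concrete default. A minimal sketch (illustrative only; the `field_schema` helper and field names are hypothetical, not PDI's actual generator):

```python
import json

def field_schema(avro_path, avro_type, nullable, default=None):
    """Build one Avro record-field entry from Fields-tab-style settings.

    Hypothetical helper: mirrors how Null / Default value settings
    typically appear in a generated Avro schema.
    """
    if nullable:
        # Nullable fields are expressed as a union with "null"; the default
        # must match the first union branch, so it is null here.
        return {"name": avro_path, "type": ["null", avro_type], "default": None}
    # Non-null fields should carry a concrete default so empty input
    # does not cause a failure.
    entry = {"name": avro_path, "type": avro_type}
    if default is not None:
        entry["default"] = default
    return entry

fields = [
    field_schema("id", "long", nullable=False, default=0),
    field_schema("email", "string", nullable=True),
]
print(json.dumps(fields, indent=2))
```

The union form is why a non-null field without a default can fail at write time: there is no fallback value the writer can substitute.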
Note: As shown in the table below, you can click Get Fields to populate the fields from the incoming PDI stream, or you can define them manually. During retrieval, each PDI type is converted to an appropriate Avro type. If desired, you can change the converted field type to another Avro type.

PDI Type     Avro Type (non AEL)     Avro Type (AEL)
BigNumber    Decimal                 Not supported

Note: Get Fields converts the PDI BigNumber type to the Avro Decimal type. However, Decimal types are not supported when running a transformation in AEL, so you must convert the field to another appropriate Avro type.
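In the Avro specification, Decimal is a logical type that annotates a bytes or fixed type, and that annotation is where the Precision and Scale settings from the Fields tab end up. A sketch using the defaults described above (precision 10, scale 0); the field name is hypothetical:

```python
import json

# How an Avro Decimal field carries the Precision and Scale settings,
# per the Avro spec's "decimal" logical type. The name "amount" is
# just an example.
decimal_field = {
    "name": "amount",
    "type": {
        "type": "bytes",
        "logicalType": "decimal",
        "precision": 10,  # total number of digits (Fields tab default: 10)
        "scale": 0,       # digits after the decimal point (default: 0)
    },
}
print(json.dumps(decimal_field, indent=2))
```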

Schema tab

Avro Output Schema tab

The following options in the Schema tab define how the Avro schema file will be created:

File name: Specifies the fully qualified URL where the Avro schema file will be written. The URL format may differ depending on the file system type (the Location field). If a schema file already exists, it is overwritten. If you do not specify a separate schema file for your output, PDI writes an embedded schema in your Avro data file.

Namespace: Specifies the name that, together with the Record name field, defines the "full name" of the schema (for example, example.avro).

Record name: Specifies the name of the Avro record (for example, User).

Doc value: Specifies the documentation provided for the schema.
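Using the example values above, the Namespace, Record name, and Doc value settings map directly onto top-level attributes of the generated Avro schema, and the schema's "full name" is the namespace joined to the record name. A minimal sketch (the doc string and field list are hypothetical):

```python
import json

# Minimal Avro schema built from the Schema tab examples:
# Namespace "example.avro", Record name "User".
schema = {
    "type": "record",
    "namespace": "example.avro",
    "name": "User",
    "doc": "A basic user record.",  # hypothetical Doc value
    "fields": [
        {"name": "name", "type": "string"},
    ],
}

# The schema's "full name" combines namespace and record name.
full_name = f'{schema["namespace"]}.{schema["name"]}'
print(full_name)  # example.avro.User
print(json.dumps(schema, indent=2))
```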

Options tab

Avro Output Step Options tab

Specifies which of the following codecs is used to compress data blocks in the Avro output file:

  • None: No compression is used (default).
  • Deflate: The data blocks are written using the deflate algorithm as specified in RFC 1951, and typically implemented using the zlib library.
  • Snappy: The data blocks are written using Google's Snappy compression library, and are followed by the 4-byte, big-endian CRC32 checksum of the uncompressed data in each block.

See the Apache Avro specification for additional information on these codecs.
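The codec details above can be sketched with the Python standard library: raw DEFLATE (RFC 1951, no zlib header or trailer) and the big-endian CRC32 checksum that follows each Snappy block. This is an illustration of the formats, not PDI's implementation, and the Snappy compression itself is omitted (it needs a third-party library); sample data is made up:

```python
import struct
import zlib

block = b"avro data block " * 4  # made-up uncompressed block

# Deflate codec: RFC 1951 raw DEFLATE, which zlib produces with
# a negative wbits value (no zlib header/trailer).
comp = zlib.compressobj(level=6, wbits=-15)
deflated = comp.compress(block) + comp.flush()
assert zlib.decompress(deflated, wbits=-15) == block

# Snappy codec: each compressed block is followed by the 4-byte,
# big-endian CRC32 of the *uncompressed* data. Only the checksum
# is shown here.
crc = struct.pack(">I", zlib.crc32(block) & 0xFFFFFFFF)
print(len(crc))  # 4
```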

Include date in filename: Adds the system date on which the file was generated to the output file name, with the default format yyyyMMdd (for example, 20181231).

Include time in filename: Adds the system time at which the file was generated to the output file name, with the default format HHmmss (for example, 235959).

Specify date time format: Adds a different date time format to the output file name, chosen from the options in the drop-down list.
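The default stamps above use Java-style patterns (yyyyMMdd, HHmmss), which correspond to the strftime codes below. A sketch of how such a filename suffix is built, using the example values from the descriptions; the base name "output" is hypothetical:

```python
from datetime import datetime

# Example timestamp matching the sample values above (20181231, 235959).
ts = datetime(2018, 12, 31, 23, 59, 59)

date_part = ts.strftime("%Y%m%d")  # yyyyMMdd -> 20181231
time_part = ts.strftime("%H%M%S")  # HHmmss   -> 235959
filename = f"output_{date_part}_{time_part}.avro"
print(filename)  # output_20181231_235959.avro
```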

Metadata injection support

All fields of this step support metadata injection. You can use this step with ETL metadata injection to pass metadata to your transformation at runtime.