
PDI and Lumada Data Catalog


If you are a Lumada Data Catalog user, you can now work with your Data Catalog metadata and data resources using PDI transformations.

Lumada Data Catalog lets data engineers, data scientists, and business users accelerate metadata discovery and data categorization, and permits data stewards to manage sensitive data. Data Catalog collects metadata for various types of data assets and points to the asset's location in storage. Data assets registered in Data Catalog are known as data resources.

For example, you might create a PDI transformation that reads the location of a data resource from Data Catalog, retrieves the data and transforms it, and then writes a data file back to the cluster as a new or existing file. You can then register that file’s location in Data Catalog as a new data resource along with descriptive metadata tags to describe the transformed contents of the file.
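
A transformation like this is normally designed and run in the PDI client, but it can also be executed from Java through PDI's embedding API. The following is a minimal sketch, assuming the transformation has already been built and saved; the file path and class name are placeholders.

    import org.pentaho.di.core.KettleEnvironment;
    import org.pentaho.di.core.exception.KettleException;
    import org.pentaho.di.trans.Trans;
    import org.pentaho.di.trans.TransMeta;

    public class RunCatalogTransformation {
        public static void main(String[] args) throws KettleException {
            // Initialize the PDI engine and plugin registry
            KettleEnvironment.init();

            // Load the saved transformation (path is a placeholder)
            TransMeta transMeta = new TransMeta("/path/to/catalog_example.ktr");

            // Run it on the Pentaho engine and wait for completion
            Trans trans = new Trans(transMeta);
            trans.execute(null);
            trans.waitUntilFinished();

            if (trans.getErrors() > 0) {
                throw new KettleException("Transformation finished with errors.");
            }
        }
    }

The same saved transformation can also be run from the command line with the Pan tool, for example: pan.sh -file=/path/to/catalog_example.ktr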

There are four PDI steps for building transformations that work with Data Catalog metadata and data resources (one possible arrangement of these steps is sketched after this list):

  • Read Metadata

    This step provides a way to search Data Catalog’s existing metadata for specific data resources, including their storage location. The metadata associated with an identified data resource can then be passed along to another step within a PDI transformation.

  • Write Metadata

    With this step, you can revise the Data Catalog tags associated with an existing data resource. In a transformation that includes the Catalog Output step, you can also create the metadata for a new data resource that the transformation creates and registers in Data Catalog.

  • Catalog Input

    This step reads a Data Catalog data resource stored as a CSV text file or in Parquet format in a Hadoop or S3 ecosystem and outputs the data payload as rows to use in a transformation. You can also use Catalog Input with the Catalog Output step to gather data from Data Catalog data resources and move that data into Hadoop or S3 storage.

  • Catalog Output

    This step encodes data as a CSV text file or in Parquet format, using the schema defined in PDI, to create a new data resource or to replace or update an existing data resource in Data Catalog. You can also add metadata. The data is saved in the selected Hadoop or S3 ecosystem and registered as a data resource in Data Catalog.
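
Taken together, the example workflow described earlier could be arranged in a single transformation roughly as follows. This is only one possible arrangement; the steps in the middle are whatever your ETL logic requires:

    Read Metadata  ->  Catalog Input  ->  (your transform steps)  ->  Catalog Output  ->  Write Metadata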

All four steps support the Pentaho engine. Neither the Pentaho Adaptive Execution Layer (AEL) nor metadata injection (MDI) is currently supported in these steps.

Prerequisites

These steps require VFS connections.

To use the Read Metadata or Write Metadata steps:

  • Set up a VFS connection to a stand-alone instance of Data Catalog and provide your role access credentials. For more information, see Access to Lumada Data Catalog.

To use the Catalog Input and Catalog Output steps:

  • Set up a VFS connection to a stand-alone instance of Data Catalog and provide your role access credentials. For more information, see Access to Lumada Data Catalog.
  • Configure S3 as the Default S3 Connection in VFS Connections to access S3 storage. For details, see Connecting to Virtual File Systems.
  • Establish a PDI connection to the cluster (or clusters) you plan to use. For example, a Hadoop driver must be configured as a named connection for your distribution to access HDFS. For information on named connections, see Connecting to a Hadoop cluster with the PDI client.
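
Once a VFS connection is defined, files accessed through it are generally addressed in PDI with the pvfs:// scheme, using the connection name in place of a host. The path below is purely illustrative, and MyS3Connection is a hypothetical connection name:

    pvfs://MyS3Connection/my-bucket/sales/transactions.csv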

Supported file types

Use the Read Metadata and Write Metadata steps to modify the Data Catalog metadata for any resource in the catalog, regardless of the file format.

The Catalog Input and Catalog Output steps currently support retrieving and transforming data in these file types, stored on HDFS or Amazon S3:

  • CSV files
  • Parquet files
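
For reference, the storage locations of such resources typically look like standard HDFS or S3 URIs. The host, port, bucket, and paths below are placeholders:

    hdfs://namenode-host:8020/data/sales/transactions.csv
    s3://my-bucket/data/sales/transactions.parquet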