The Lumada Data Catalog software builds a metadata catalog from data assets residing in tools and databases. It profiles the data assets to produce field-level data quality statistics and to identify representative data so users can efficiently analyze the content and quality of the data.
Data Catalog requires specific external components and applications to operate optimally. This article provides a list of those components and applications along with details of their use and the versions Data Catalog supports. If you have questions about your particular computing environment, contact Hitachi Vantara Lumada and Pentaho Support.
You need a Kubernetes cluster to host the Data Catalog components. The following table lists the requirements for a Kubernetes installation.
|Minimum hardware requirements
|Kubernetes version 1.23
|Software for cluster
For installation instructions, see Installation on Kubernetes.
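Before installing, it can help to confirm that the cluster meets the 1.23 minimum version listed above. The following shell sketch is illustrative only and is not part of the product tooling; the `version_ok` helper is hypothetical, and extracting the server version from `kubectl` is left as a comment.

```shell
# version_ok MAJOR.MINOR[.PATCH] -- succeeds if the given version meets the
# Kubernetes 1.23 minimum. Hypothetical helper, not part of Data Catalog.
version_ok() {
  major=${1%%.*}
  rest=${1#*.}
  minor=${rest%%.*}
  [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 23 ]; }
}

# In practice, compare against the version your cluster reports, for example
# the major/minor fields of:  kubectl version -o json
version_ok "1.23" && echo "1.23: ok"
version_ok "1.22" || echo "1.22: too old"
```

The helper only compares major and minor components, which is sufficient for checking a `1.23` floor.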
Data Catalog supports major versions of web browsers that are publicly available.
|Google Chrome* (Recommended)
|105.0.5195.102 (Official Build) (x86_64)
* Backward version compatibility depends on the changes made in browser libraries. Contact Hitachi Vantara Lumada and Pentaho Support for any specific version compatibility.
For Data Catalog, Keycloak 20.0.1 is the default identity and access management tool, which allows the creation of a user database with custom roles and groups. It is installed during the Helm deployment of Data Catalog.
Port and firewall requirements
If Data Catalog users, including service users, need to access Data Catalog across a firewall, allow access to the following ports at the cluster IP address.
|Secure Data Catalog browser application (HTTPS)
|Grants users access to the Data Catalog application
|Authentication and access management
|Manages user authentication and access through Keycloak
|Metadata repository (MongoDB)
|Stores metadata collected from processing functions in a MongoDB repository
|Object storage (MinIO)
|Serves as object storage for debugging purposes
|Metadata repository REST API endpoint (HTTP)
|Used internally by the Lumada Data Catalog Application Server and agent components to communicate with the repository
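The exact port numbers depend on how the services are exposed in your deployment. As an illustration only, the sketch below loops over hypothetical port assignments (MongoDB's and MinIO's well-known defaults are used as placeholders) and shows in a comment how each port might be opened with `firewalld`:

```shell
# Hypothetical port assignments for the services listed above; substitute the
# values actually used by your deployment.
for entry in "443:Data Catalog browser application (HTTPS)" \
             "8080:Keycloak authentication" \
             "27017:MongoDB metadata repository" \
             "9000:MinIO object storage"; do
  port=${entry%%:*}
  desc=${entry#*:}
  echo "allow ${port}/tcp  (${desc})"
  # With firewalld, for example:
  #   firewall-cmd --permanent --add-port="${port}/tcp"
done
# Apply the changes afterwards with: firewall-cmd --reload
```

Consult your deployment's service definitions for the ports actually in use before opening anything.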
Data Catalog can process data from different data sources. The following table lists the different types of supported data sources.
|12, 19c, 21c
|3.13 (as per JDBC jar)
|Hitachi Content Platform (HCP)
|Azure Data Lake Storage (ADLS)
|Hadoop Distributed File System (HDFS)
|3.2.1 (for EMR)
|3.1.1 (for CDP and HDP)
|3.1.3 (for EMR and CDP)
|3.1.0 (for HDP)
A Lumada Data Catalog Agent is responsible for initiating, executing, and monitoring the jobs that communicate with data sources, process the data, and create fingerprints. Refer to the following sections for the requirements for remote agent installation, supported distributions, and Kerberos environments, respectively.
The following table lists the requirements for installation:
Remote agent setup supports several distributions, each with its own requirements. The following table lists the requirements for each supported distribution:
|Amazon Elastic MapReduce (EMR)
Note: When prompted by the remote agent script, set up the remote agent using the Data Catalog service user for Hadoop.
|Cloudera Data Platform (CDP)
See Configure Data Catalog for CDP for configuration information.
Additionally, you can enable Kerberos on your remote agent server. Because Kerberos-enabled environments add extra security between your remote agent and the Data Catalog cluster, additional configuration is required.
Kerberos has the following requirements:
- Your Hadoop administrator has created a service user on your environment.
- A Kerberos keytab file has already been set up for your service user on the Kerberos machine.
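As a sketch of what that setup typically looks like, the snippet below builds the service principal name from its parts; the service user, realm, and keytab path shown are hypothetical placeholders, not values defined by Data Catalog. Verifying the keytab with `kinit` is shown as a comment because it requires a live KDC.

```shell
# Hypothetical values -- substitute your Hadoop service user, Kerberos realm,
# and the keytab path your administrator provided.
SERVICE_USER="ldc-service"
REALM="EXAMPLE.COM"
KEYTAB="/etc/security/keytabs/${SERVICE_USER}.keytab"
PRINCIPAL="${SERVICE_USER}@${REALM}"
echo "principal: ${PRINCIPAL}"

# On the agent host, the keytab can be verified before installing the agent:
#   kinit -kt "${KEYTAB}" "${PRINCIPAL}" && klist
```

A successful `kinit` followed by `klist` showing a valid ticket confirms the service user and keytab are usable before the remote agent setup script runs.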
See Remote agent for more information.