Hitachi Vantara Lumada and Pentaho Documentation

Manage data sources

With Lumada Data Catalog, you can process data from file systems and relational databases. Data sources can contain structured or unstructured data.

Data Catalog supports the following unstructured document types:

  • Adobe PDF (.pdf)
  • Email (.eml without attachments)
  • Microsoft Excel (.xls and .xlsx)
  • Microsoft PowerPoint (.ppt and .pptx)
  • Microsoft Rich Text Format (.rtf)
  • Microsoft Word (.doc and .docx)
  • OpenOffice (.odf and .odg)
  • Text (.txt)

You can scan and profile these unstructured documents to determine document properties, identify sensitive data, and detect the language of the content. To add business terms to unstructured data, see Tag unstructured data.

Note: Since language detection scans only the first page of data, Data Catalog may not be able to detect the language if the first page of a document contains only special characters or other non-word data.

All properties displayed in the Data Canvas are then available for tagging and evaluation using business rules.

You can also use JSON documents in a MongoDB NoSQL database as a data source. The following data sources are supported:

  • Azure Data Lake Storage Gen 1, Gen 2
  • AWS S3
  • DB2 11.5.7
  • Denodo 8
  • HCP
  • HDFS
  • Hive 3.1.2
  • Minio (S3)
  • MongoDB 5.0
  • MSSQL 2019
  • MySQL 8
  • Oracle 11g, 12, 19c
  • PostgreSQL 12.4
  • Server Message Block (SMB) (Windows)

Additionally, you can process data from the following JDBC sources using the Other data source type:

  • Snowflake 3.13
  • Vertica 10.1ce, 11.1ce

To process data from these systems, Data Catalog uses a data source definition. This definition stores the connection information for your sources of data, including their access URLs and credentials for the service user.

To ignore selected MongoDB databases in scan or schema jobs, use the MongoDB databases to be restricted configuration setting to specify the databases to ignore.

You can connect to an Apache Atlas data source. See Apache Atlas integration.

Note: For the latest supported versions, refer to the release notes.

Adding a data source

If your role has the Manage Data Sources privilege, perform the following steps to create data source definitions.

Specify data source identifiers

Perform the following steps to identify your data source within Data Catalog:

Procedure

  1. Click Go to Management in the Welcome page or Management in the left toolbar of the navigation pane.

    The Manage Your Environment page opens.
  2. Click Data Source then Add Data Source, or Add New then Add Data Source.

    The Create Data Source page opens.
  3. Specify the following basic information for the connection to your data source:

    • Data Source Name: Specify the name of your data source. This name is used in the Data Catalog interface. It should be something your Data Catalog users recognize.
      Note: Names must start with a letter and must contain only letters, digits, and underscores. White spaces in names are not supported.
    • Description (Optional): Specify a description of your data source.
    • Agent: Select the Data Catalog agent that will service your data source. This agent is responsible for triggering and managing profiling jobs in Data Catalog for this data source.
    • Data Source Type: Select the database type of your source. You are then prompted to specify additional connection information based on the file system or database type you are trying to access.
  4. Specify additional connection information based on the file system or database type you are trying to access.

    See the following sections for details:

ADLS data source

You can connect to an instance of Microsoft’s Azure Data Lake Storage (ADLS) system through the shared key, OAuth, or Other configuration method. Regardless of the method you choose, specify the following base fields:

  • Source Path: Directory where this data source is included. It can be the root of the file system or a specific high-level directory. To include all data, use "/".
    Note: Make sure the specified user can access the data in the ADLS account. Data Catalog can only process the required data if the user has access to the data within the data source.
  • File System: The parent location that holds the files and folders.
  • Account Name: The name given to your storage account during creation.

If you are using the OAuth 2.0 configuration method, you must also specify the client credentials, such as ClientID, Client Secret, and Client Endpoint.
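For illustration, a completed ADLS definition using the OAuth 2.0 method might look like the following sketch. All values are hypothetical placeholders; substitute your own storage account and application registration details:

```
Source Path     : /
File System     : myfilesystem
Account Name    : mystorageaccount
ClientID        : 00000000-0000-0000-0000-000000000000
Client Secret   : <application client secret>
Client Endpoint : https://login.microsoftonline.com/<tenant-id>/oauth2/token
```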

AWS S3 data source

You can connect to an Amazon Web Services (AWS) Simple Storage Service (S3) bucket with your data source URL containing the Elastic MapReduce (EMR) file system name of the S3 bucket, for example, s3://acme-impressions-data/. Access requirements differ depending on whether you are running Lumada Data Catalog on an EMR instance or on another instance type.

Specify the following additional fields for AWS access:

  • Source Path: Directory where this data source is included.
  • Endpoint: Location of the bucket. For example, s3.<region containing S3 bucket>.amazonaws.com.
  • Access Key: User credential to access data on the bucket.
  • Secret Key: Password credential to access data on the bucket.
  • Bucket Name: The name of the S3 bucket in which the data resides. For S3 access from non-EMR file systems, Data Catalog uses the AWS command line interface to access S3 data. These commands send requests using access keys, which consist of an access key ID and a secret access key. You must specify the logical name for the cluster root. This value is defined by dfs.nameservices in the hdfs-site.xml configuration file. For S3 access from AWS S3 and MapR file systems, you must identify the root of the MapR file system with maprfs:///.
  • URI Scheme: Version of S3 used for the bucket. You can select either S3 or S3A.
  • Assume Role: For S3 access from EMR file systems, the EMR role must include the s3:GetObject and s3:ListBucket actions for the bucket. By default, the EMR_DefaultRole includes s3:Get* and s3:List* for all buckets. The bucket must allow access for the EMR role principal to perform at least the s3:GetObject and s3:ListBucket actions.
  • Additional Properties: Any additional properties needed to connect, using the syntax property = value. For S3 access with Kerberos, you must specify the connection URL and the keytab and principal created for the Data Catalog service user. The Kerberos user name in the Data Catalog configuration, the cluster proxy settings, and the KDC principal are all case-sensitive. Kerberos principal names are case-sensitive, but operating system names can be case-insensitive.
    Note: A mismatch can cause problems that are difficult to troubleshoot.
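As a sketch, a minimal S3 definition for the bucket from the earlier example might resemble the following. The keys shown are placeholders, not real credentials, and the region is an assumption:

```
Source Path : /
Endpoint    : s3.us-east-1.amazonaws.com
Access Key  : AKIAXXXXXXXXXXXXXXXX
Secret Key  : <secret access key>
Bucket Name : acme-impressions-data
URI Scheme  : S3A
```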

HCP data source

You can add data to Data Catalog from Hitachi Content Platform (HCP) by specifying the following additional fields:

  • Source Path: Directory where this data source is included.
  • Endpoint: Location of the bucket (hostname or IP address).
  • Access Key: The access key of the S3 credentials to access the bucket.
  • Secret Key: The secret key of the S3 credentials to access the bucket.
  • Bucket Name: The name of the bucket in which the data resides.
  • URI Scheme: The version of S3 used for the bucket.
  • Additional Properties: Any additional properties needed to connect.

HDFS data source

You can add data to Data Catalog from files in Hadoop Distributed File System (HDFS) file systems by specifying the following additional fields:

  • Configuration Method: How to configure the connection. For example, to configure the connection using a URL, select URI.
  • Source Path: An HDFS directory that this data source includes. It can be the root of HDFS, or it can be a specific high-level directory. Enter a directory based on your needs for access control. To indicate the root of the file system, use the slash "/".
  • URL: Location of the HDFS root. If the cluster is configured for high availability (HA), this URL may be a variable name without a specific port number, for example, hdfs://<name node>:8020, where the <name node> address can be a variable name for high availability. Other examples include:
      • s3://<bucket-name>
      • gs://<bucket-name>
      • wasb://<container-name>
      • adl://<data-lake-storage-path>
      • maprfs:///
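As an illustration, an HDFS definition scoped to a single landing directory might look like the following sketch. The host name and directory are hypothetical:

```
Configuration Method : URI
Source Path          : /data/landing
URL                  : hdfs://namenode.example.com:8020
```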

Hive data source

You can add data to Data Catalog from a Hive database by specifying the following additional fields:

  • Configuration Method: How to configure the connection. For example, to configure the connection using a URL, select URI.
  • Source Path: The Hive database that this data source includes. It can be the Hive root, or it can be a specific database. Enter a database based on your needs for access control. To indicate the Hive root, use the slash "/". To indicate a specific database, use a slash "/" followed by the database name, for example, /default, where default is the name of the Hive database.
  • URL: Location of the Hive root. For example, jdbc:hive2://localhost:10000.

JDBC data source

You can add a Data Catalog data source connection to the following relational databases using JDBC connectors:

  • MSSQL
  • MySQL
  • Oracle
  • PostgreSQL

Other JDBC sources include:

  • Denodo
  • Snowflake
  • Vertica

Specify the following additional fields:

  • Configuration Method: How to configure the connection. For example, to configure the connection using a URL, select URI.
  • Source Path: Directory where this data source is included. It can be the root of JDBC or it can be a specific high-level directory. To include all databases, use the slash "/".
    Note: Make sure the specified user can access the data in the JDBC database. Data Catalog can only process the required data if the user has access to the data within the JDBC data source.
  • URL: Connection URL of the database. For example, a MySQL URL would look like jdbc:mysql://localhost:<port_no>/.
  • Driver Name: Driver class for the database type. To connect Data Catalog to a database, you need the driver class of the database. Data Catalog auto-fills the Driver Class field for the type of database selected from the drop-down list.
    Note: When you select Other JDBC as the database type, you must provide the Driver Class and import the corresponding JDBC JARs, which restarts the agent used to run the data source's profiling jobs.
  • Username: Name of the default user in the database.
  • Password: Password for the default user in the database.
  • Database Name: Name of the related database.

After a JDBC data source connection has been successfully created by a Data Catalog service user, any other user must provide their security credentials to connect to and access this JDBC database.

Note: If you encounter errors such as ClassNotFoundException or NoClassDefFoundError, your JDBC driver is not available on the class path.
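Putting these fields together, a MySQL definition might look like the following sketch. The host, user, and database names are hypothetical; the driver class shown is the standard MySQL Connector/J driver class:

```
Configuration Method : URI
Source Path          : /
URL                  : jdbc:mysql://dbhost.example.com:3306/
Driver Name          : com.mysql.cj.jdbc.Driver
Username             : catalog_svc
Password             : <service user password>
Database Name        : sales
```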

MongoDB data source

You can add data to Data Catalog from a MongoDB database by specifying the following additional fields:

  • Configuration Method: Select URI as the configuration method.
  • Source Path: Enter the MongoDB database path. For example, the default database path for MongoDB is /data/db.
  • URL: Enter the MongoDB server URL, for example, mongodb://localhost:27017.
  • Username and Password: Enter the username and password to connect to the MongoDB server.
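For example, a MongoDB definition using the defaults mentioned above might resemble the following sketch; the user name is hypothetical:

```
Configuration Method : URI
Source Path          : /data/db
URL                  : mongodb://localhost:27017
Username             : catalog_svc
Password             : <password>
```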

SMB data source

You can add data to Data Catalog from the Server Message Block (SMB) network file-sharing protocol by using HDFS as the data source type.

Before you begin

Mount the SMB shared folder to a node within a cluster and install the remote agent in the same cluster. This lets you mount the data as a local file system for the remote agent, so that you can create the data source as HDFS with the local file system path.

The following is an example of creating a mount point using cifs-utils. You can use any supported tool to manage CIFS network file system mounts:

  1. Install the remote agent. For more information, see Remote Agent.
  2. Install CIFS-utils as sudo user with access to the remote agent.
    sudo yum -y install cifs-utils
    Note: To execute jobs on this data source, add the --master local parameter before execution.
  3. Create a mount point.
    sudo mount -t cifs -o user=example1,password=badpass //samba/public/ /tmp/mnt/
    The mount point is created at the /tmp/mnt/ location identified in the command.
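A mount created this way does not persist across reboots. If you need it to, a typical cifs-utils entry in /etc/fstab might look like the following sketch. The share, mount point, and credentials file are hypothetical, and this is standard CIFS usage rather than a Data Catalog requirement:

```
//samba/public  /tmp/mnt  cifs  credentials=/etc/samba/creds,ro  0  0
```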

You can add SMB as a data source by specifying the following additional fields:

  • Data Source Name: Specify the name of your data source. This name is used in the Data Catalog interface. It should be something your Data Catalog users recognize.
    Note: Names must start with a letter and must contain only letters, digits, and underscores. White spaces in names are not supported.
  • Description: Specify a description of your data source.
  • Agent: Select the Data Catalog agent that will service your data source. This agent is responsible for triggering and managing profiling jobs in Data Catalog for this data source.
  • Data Source Type: Select HDFS as the data source type. You are then prompted to specify additional connection information.
  • Configuration Method: Select URI as the configuration method.
  • Source Path: Specify the path where the mount point is created. For example, /tmp/mnt/sample.
  • URL: Specify the location of the HDFS root. For example, file:///.

Test and add your data source

After you have specified the detailed information according to your data source type, test the connection to the data source and add the data source.
Note: Every time you add a data source, Data Catalog automatically creates its corresponding root virtual folder in the repository.

Procedure

  1. Click Test Connection to test your connection to the specified data source.

    If you are testing a MySQL connector and you get the following error, it means you need a more recent MySQL connector library:
    java.sql.SQLException: Client does not support authentication protocol requested by server. plugin type was = 'caching_sha2_password'

    1. Go to MySQL :: Download Connector/J and select the "Platform Independent" option.
    2. Download the compressed (.zip) file, copy it to /opt/ldc/agent/ext (where /opt/ldc/agent is your agent install directory), and unpack the file.
  2. (Optional) Enter a Note for any information you need to share with others who might access this data source.

  3. Click Create Data Source to establish your data source connection.

Next steps

You can also update the settings for existing data sources, create virtual folders, update existing virtual folders for data sources, and delete data sources.

Add an external data source

Through an external data source, you can integrate Apache® Atlas with Data Catalog. For a given resource in Data Catalog, you can use this integration to perform the following actions:
  • Push business terms to Atlas.
  • Pull lineage information from Atlas.

If your role has the Manage Data Sources privilege, perform the following steps to create an external data source for Apache Atlas:

Procedure

  1. Click Management in the left toolbar of the navigation pane.

    The Manage Your Environment page opens.
  2. Click Add New in the Data Sources card then Add External Data Source.

    The Create External Data Source page opens.
  3. Specify the following information for the connection to your external data source:

    • External Data Source Name: Specify the name of your data source. This name is used in Data Catalog, so it should be something your Data Catalog users recognize.
      Note: Names must start with a letter and must contain only letters, digits, and underscores. White spaces in names are not supported.
    • Description: Specify a description of your data source.
    • External Data Source Type: Select Atlas to establish a connection with your data source and the Atlas service.
    • URL: Connection URL for the Atlas service. This URL should include the host name and port for the Atlas service.
    • Atlas Username: Name of the Atlas user with the applicable permissions to perform the import and export operations.
    • Atlas Password: Password for the Atlas user.
    • Atlas Cluster Name: Name of the cluster containing Atlas.
  4. Click Test Connection to test your connection to the specified data source.

  5. (Optional) Enter a Note for any information you need to share with others who might access this data source.

  6. Click Create Data Source to establish your data source connection.

Results

The data source is created and the count of external data sources is incremented on the Data Sources card.

Edit a data source

You can edit a data source as needed.

Two data sources can have overlapping source paths, using the same Source Path and URL, but they must have different names. For example, if a data source ds1 has the path "/" and the URL hdfs://aa:2000, you can create another data source named ds2 with the same path and URL.

Note: Some details about an existing data source, such as source path and data source type, cannot be changed and are unavailable for editing.

Perform the following steps to edit a data source:

Procedure

  1. Navigate to Management and click Data Sources.

  2. Locate the data source that you want to edit and then click the View Details (>) icon at the right end of the row for the data source.

    The Data source page opens.
  3. Edit the fields, then click Test Connection to verify your connection to the specified data source.

  4. Click Save Data Source.

Remove a data source

You remove a data source by removing its related root virtual folder. A data source in Data Catalog holds the connection information for an external database or HDFS system, while a virtual folder is the logical mapping of the connection. Removing the root virtual folder of a data source deletes all of its dependencies, including, but not limited to, any related virtual folder representations and their children, asset associations in job templates, and term associations.

Perform the following steps to remove the root virtual folder of a data source:

Procedure

  1. Navigate to Management, then click Data Sources.

  2. Locate the data source that you want to remove, then click the View Details (>) icon at the right end of the row for the data source.

    The Data source page opens.
  3. Click Remove Data Source.

    The Delete dialog box opens, listing the detected dependencies of the data source.
  4. Review the dependencies, enter the name of the data source to be removed, and click Confirm.

Results

A confirmation message appears after the data source is removed.
Caution: Allow time between removing a data source and the actual removal of all of its dependencies. This time depends on the size of the data source and the number of dependencies; removal continues as a background job while Data Catalog documents are updated. Plan carefully before reusing the name of a removed data source: if you reuse the name of a recently removed data source for a new data source, an error may occur, especially if the removed data source was large. If you encounter this situation, try again later.