
Installing and configuring Lumada DataOps Suite


To install the Lumada DataOps Suite (LDOS), ensure that your Hitachi Vantara Customer Success representative has confirmed that all the system requirements and prerequisites are in place on your system. See Install for system requirement and prerequisite details.

After you have your requirements and prerequisites in place, perform the following tasks to install and configure LDOS:

  1. Configure the properties file
  2. Run the install script
  3. Configure DNS entries
  4. Add licenses
  5. Test the installation

Configure the properties file

For LDOS to perform properly, you must configure its environment settings to match your cluster.

Perform the following steps to configure LDOS environment settings for your cluster:

Procedure

  1. Navigate to the installer directory in your LDOS package, which was unpacked when the LDOS solutions were uploaded.

  2. Open the env.properties file with any text editor.

    All these properties are case-sensitive.
    Note: The env.properties file also includes other properties that control the installation. For example, you can partially install LDOS by setting the install_mode property to LDOS, Lumada Data Integration (LDI), or Lumada Data Catalog (LDC). For more advanced settings, see the README.md file included in the installer directory of the LDOS package.
  3. Specify the following cluster properties (a sample env.properties file follows this procedure):

    Cluster property | Description | Example
    hostname  | Where the instance of the Kubernetes container is running. | dogfood.trylumada.com
    registry  | Where the Docker images are stored. | registry.dogfood.trylumada.com
    namespace | Name of the cluster namespace, if it is different from the default value. | hitachi-solutions
    realm     | Realm used by Keycloak, if it is different from the default value. | default
  4. Perform the following steps to obtain the credentials for the Kubernetes container:

    Note: Contact your Hitachi Vantara Customer Success representative if you need further information about the container.
    1. Either retrieve the value of the foundry_client_secret credential from Keycloak, or use the echo command from the command line with the container's client name.

      For example, if you are using the default client name of solution-control-plane-sso-client, you would run the following code from the command line:

      # get client secret for solution-control-plane-sso-client

      echo $(kubectl get secrets/keycloak-client-secret-solution-control-plane-sso-client -n hitachi-solutions --template={{.data.CLIENT_SECRET}} | base64 --decode)

    2. Use the echo command on the keycloakusers resource to retrieve the password for the user of the Kubernetes container.

      For example, if you are using the default user of foundry, you would run the following code from the command line:

      # get password for foundry user:

      echo $(kubectl get keycloakusers -n hitachi-solutions keycloak-user -o jsonpath='{.spec.user.credentials[0].value}')

  5. Specify the following credentials for the Kubernetes container:

    Kubernetes credential | Description
    foundry_client_name   | Client ID of the Kubernetes container in Keycloak, if it is different from the default value of solution-control-plane-sso-client.
    foundry_client_secret | Client secret in Keycloak that you obtained in the previous step.
    username              | Username of the account with administrative permissions to the Kubernetes container (for example, foundry).
    password              | Password of the account with administrative permissions to the Kubernetes container that you obtained in the previous step.
  6. Specify the following Network File System (NFS) volume properties:

    NFS volume property | Description | Example
    volume_hostname     | NFS server host. | my-nfs-server.example.com
    volume_path         | Path for the volume root directory in the NFS server. | /ldos-volume

    Lumada DataOps Suite must point to an NFS server to store files for the Data Transformation Editor, Dataflow Importer, and Dataflow Engine solutions. See Administer for more information on these solutions.

  7. Save and close the env.properties file.
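For reference, the following is a minimal sketch of a completed env.properties file using the example values from the tables above. Property names and defaults can vary between LDOS package versions, so verify them against the README.md file in the installer directory:

    # Cluster properties (case-sensitive)
    hostname=dogfood.trylumada.com
    registry=registry.dogfood.trylumada.com
    namespace=hitachi-solutions
    realm=default

    # Kubernetes container credentials
    foundry_client_name=solution-control-plane-sso-client
    foundry_client_secret=<client secret retrieved from Keycloak>
    username=foundry
    password=<password retrieved from keycloakusers>

    # NFS volume properties
    volume_hostname=my-nfs-server.example.com
    volume_path=/ldos-volume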

Next steps

You are now ready to run the install script.

Run the install script

Executing the install.sh file installs LDOS and applies the configuration properties you set for your cluster. The script also sets up the following default roles and users:
Default role  | Sample username* | Description
Administrator | cmoore  | Full access to LDOS, including Solution management and Keycloak.
Data Engineer | bwayne  | Access to dataflow operations, including the Data Transformation Editor, and access to Data Catalog as an Analyst.
Data Steward  | mpayton | Limited access to dataflow operations, and access to Data Catalog as a Steward.
Analyst       | cparker | Limited access to dataflow operations, and access to Data Catalog as an Analyst.
Guest         | jdoe    | View-only access.

*For each role provided in the table above, the sample username is also the password.

Perform the following steps to run the install script:

Procedure

  1. Navigate to the installer directory in your LDOS package, which was unpacked when the LDOS solutions were uploaded.

  2. Run the following script from the command line:

    ./install.sh

Results

The script creates and adds the default roles and users, and installs the LDOS solutions. See Administer for more information on these solutions.
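As an informal check, not part of the documented procedure, you can confirm that the installed solutions are starting up by listing the pods in the solutions namespace. This sketch assumes the default hitachi-solutions namespace:

    # list the LDOS pods and confirm they reach the Running state
    kubectl get pods -n hitachi-solutions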

Configure DNS entries

You need to establish Domain Name System (DNS) aliases so that the Lumada Data Catalog and LDOS Data Transformation Editor solutions can be identified on the cluster. Create the following DNS aliases for your cluster, replacing <HOSTNAME> with the hostname of your cluster, as shown in the example after this list:

  • catalog-<HOSTNAME>
  • dte-<HOSTNAME>
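For example, using the dogfood.trylumada.com hostname from the earlier examples, equivalent BIND-style zone-file records might look like the following sketch; the exact record syntax depends on your DNS provider:

    catalog-dogfood.trylumada.com.  IN  CNAME  dogfood.trylumada.com.
    dte-dogfood.trylumada.com.      IN  CNAME  dogfood.trylumada.com.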

Add licenses

Before you begin, contact your Hitachi Vantara Customer Success representative for Pentaho Data Integration (PDI) and Lumada Data Catalog (LDC) licenses. The LDOS package does not include these licenses.

Apply PDI license

With the PDI license, you can execute PDI transformations and jobs using the Dataflow Engine and edit the transformations and jobs with the Data Transformation Editor.

Perform the following steps to apply a PDI license:

Procedure

  1. Contact your Hitachi Vantara Customer Success representative for the PDI license file.

  2. Make sure the license file is named .installedLicenses.xml.

  3. Upload the .installedLicenses.xml file to the NFS volume under the <volume_path>/licenses directory, as shown in the sketch after this procedure.
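A minimal sketch of the upload, assuming the NFS volume root from the earlier example (/ldos-volume) is mounted locally at the hypothetical mount point /mnt/ldos-volume:

    # create the licenses directory on the NFS volume and copy the license file into it
    mkdir -p /mnt/ldos-volume/licenses
    cp .installedLicenses.xml /mnt/ldos-volume/licenses/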

Results

After you have applied the PDI license, you can execute and edit your dataflows.

Apply Lumada Data Catalog license

Lumada DataOps Suite ships with a limited version of Lumada Data Catalog (LDC).

Perform the following steps to turn on all the features in LDC:

Procedure

  1. Contact your Hitachi Vantara Customer Success representative for a full LDC version license.

  2. Run the following code from the command line, in the directory containing the license files from your representative, to upgrade the license to the full version:

    kubectl create secret generic ldc-license --from-file=license-features.yaml --from-file=ldc-license-public-keystore.p12 -n hitachi-solutions
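To confirm that the secret was created, an optional check that is not part of the documented procedure, you can query it with kubectl:

    # verify that the ldc-license secret exists in the solutions namespace
    kubectl get secret ldc-license -n hitachi-solutions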

Results

After you have applied the full version license, you can access all the features in LDC.

Test the installation

After you have installed and configured LDOS and applied the licenses, you can test the setup by logging in to LDOS.

Perform the following steps to test your installation:

Procedure

  1. Enter the following address into a browser with <HOSTNAME> replaced by the hostname of your cluster:

    https://<HOSTNAME>/hitachi-solutions/control-plane/control-plane-lcp-app/

    The browser opens the LDOS Solution Login page.

  2. Log in with any of the default users listed in Run the install script.
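If the login page does not load, an informal way to check that the endpoint is reachable from the command line is with curl; the -k flag skips certificate verification, which may be needed with self-signed certificates:

    curl -k -I https://<HOSTNAME>/hitachi-solutions/control-plane/control-plane-lcp-app/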

Results

The Lumada DataOps Suite home page opens. See Tour the Lumada DataOps Suite for further instructions.