Install and configure IIoT Core Services

This chapter describes important preparatory steps before installing IIoT Core Services, the installation process itself, and the necessary post-installation tasks.

Preparing for IIoT Core Services installation

Before you begin installing IIoT Core Services, you should review all of the information in this chapter, gather the required information, and complete any necessary installations.

Installation node requirements

As a best practice, install IIoT Core Services from a small VM node outside the cluster but within the same network environment.

The following minimum requirements apply to the VM installation node.

Hardware      Specifications
CPU           Intel Atom or equivalent processor, 4 cores
Memory        16 GB
Disk space    200 GB

IIoT Core Services supports the following operating systems on the VM installation node:

Software                           Version
Red Hat Enterprise Linux (RHEL)    8.4

IIoT Core Services system requirements

IIoT Core Services is designed to be installed in a cluster with a minimum of three nodes.

Note: To run Machine learning service, five nodes are recommended.

The following table lists the minimum requirements for each of the cluster nodes (without Machine learning service):

Hardware           Specifications
Number of nodes    3
CPU                16 vCore CPU per node
                   Example: 2 Intel Xeon Scalable E5-2600 v5 or equivalent AMD processors, 64-bit, 8 cores
Memory             16 GB per node
Disk space         512 GB per node

For IIoT Core with Machine learning service, the following minimum requirements apply:

Hardware           Specifications
Number of nodes    5
CPU                32 vCore CPU per node
                   Example: 2 Intel Xeon Silver 4110 CPUs, 8 cores @ 2.10 GHz, 16 threads, or a higher-performance CPU
Memory             128 GB per node
Disk space         2 TB per node
                   Minimum PVC size for the whole cluster: 4 TB

IIoT Core Services supports the following operating system for cluster nodes:

Software                           Version
Red Hat Enterprise Linux (RHEL)    8.4

Installation prerequisites

Observe the following prerequisites before installing IIoT Core Services. These are global prerequisites that apply to all IIoT Core Services components, including the platform component.

IIoT Core Services prerequisites

To install IIoT Core Services, complete the following prerequisites.

High-level prerequisites
  • Restart the nodes before installing Kubernetes. See the Kubernetes documentation for more information.
  • Install and configure Kubernetes in a three-node cluster (five nodes recommended for Machine learning service).
  • During installation, set all nodes as master nodes for redundancy in case a master node fails.
  • Use the command option -n hiota when installing the IIoT Core Services platform components. The platform components must be installed in the same hiota namespace where IIoT Core Services is installed.
  • You must have an FQDN for the cluster.

Installation VM prerequisites

The following software must be installed on the installation VM that is used when installing IIoT Core Services:

Software    Version
Python      3.6.8
Kubectl     v1.23
OpenSSL     N/A
Helm        3.6.3
Docker      20.10.21
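As a quick sanity check on the installation VM, you can verify each version from the terminal (assuming default installations on the PATH):

    python3 --version          # expect Python 3.6.8
    kubectl version --client   # expect v1.23.x
    openssl version
    helm version --short       # expect v3.6.3
    docker --version           # expect 20.10.21
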
Component prerequisites

Observe the following specific prerequisites before installing the IIoT Core Services components.

Component          Requirement
Kubernetes

A secured Kubernetes system (v1.23) with a kubeconfig file for API server access.

In addition to installing Kubernetes, optionally install and configure a Kubernetes dashboard.

Default storage class

To maximize solution portability, the Kubernetes system must declare a default storage class for dynamic provision of storage resources.

To verify that your Kubernetes cluster declares a default storage class (or you enable one), follow the instructions in the Kubernetes documentation on changing the default StorageClass.
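For example, the following commands from the Kubernetes documentation list the storage classes and mark one as the default; <storage-class-name> is a placeholder for your class:

    kubectl get storageclass
    kubectl patch storageclass <storage-class-name> \
      -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'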

Storage plugin

Install a storage plugin with the following specifications:

Google Kubernetes Engine (GKE)

Storage class: GKE standard

Follow the instructions for creating a Kubernetes cluster using GKE in the Google documentation on the Kubernetes engine.

On-premises

Based on what best suits your hardware environment, choose one of the following options. Both options have been tested with IIoT Core Services 5.1.0:

  • HSPC
  • VMware CNS

Load balancer

Set up a load balancer to forward requests to the Kubernetes cluster nodes for the following ports:

  • 30084
  • 30086
  • 30090
  • 30091
  • 30092
  • 30223
  • 30224
  • 30228
  • 30303
  • 30443
  • 30671
  • 30884
  • 30888
  • 30998
  • 30999
  • 31000
  • 31671
  • 32500
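As an illustration only, a TCP pass-through for one of these ports might look like the following HAProxy sketch; the node addresses are placeholders, and any load balancer that forwards TCP works equally well:

    frontend iiot_30443
        bind *:30443
        mode tcp
        default_backend iiot_nodes_30443

    backend iiot_nodes_30443
        mode tcp
        server node1 10.0.0.11:30443 check
        server node2 10.0.0.12:30443 check
        server node3 10.0.0.13:30443 check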
Registry requirements

See Registry requirements and Example of how to set up a Docker registry.
nfs-utils

Only applies to on-premises installations of IIoT Core Services when ML Services is selected as an optional installation:

Either install an nfs-utils package on all nodes for NFS server installation or use an external NFS server. Different OS versions require different versions of nfs-utils.

Databases

The following database versions are supported with the current version of IIoT Core:

  • CouchDB: 2.3.1
  • InfluxDB: 2.1.1
  • Kafka: 2.13-2.3.0
  • Knative-eventing: v0.18.3
  • Knative-serving: v0.18.1
  • MinIO: RELEASE.2021-04-06T23-11-00Z.hotfix.e46577116
  • Postgres
Kafka

You must have a wildcard DNS record configured for accessing kafka-cluster-0-external.kafka.FQDN and kafka-cluster-1-external.kafka.FQDN.
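For example, in a BIND-style zone file, a single wildcard record can cover both Kafka endpoints; the domain and address below are placeholders:

    *.kafka.cluster.example.com.    IN    A    203.0.113.10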

Registry requirements

IIoT Core Services requires a registry for container images that is OCI-compliant and has an SSL certificate.

Using FQDN from deployment

When deploying both cluster services and the control plane, specify the fully qualified domain name (FQDN) of this registry, either using the -r argument or the installer configuration file. The value you specify needs to include both the host and port for your registry. For example:

-r myregistry.example.com:6000

If your registry is available on port 443, you don't need to specify the port number.

Multitenancy

If you are using a registry that supports multitenancy, you also need to include the specific location within the registry that you want to use. For example, if you are using Harbor, include the name of the Harbor project you want to use:

-r myharbor.example.com:6000/my_project

Using an insecure registry

If the registry you are using is insecure (that is, it has a self-signed or otherwise not trusted SSL certificate), you must configure your Docker daemon on the installation node to allow the insecure registry.

This configuration is typically done by adding the registry to the insecure-registries section of /etc/docker/daemon.json and restarting the Docker service. Also configure the container runtime on the cluster nodes to allow the insecure registry, and specify the -I flag when running install-control-plane.sh and install-cluster-services.sh.

As a best practice, use a trusted, CA-signed certificate.

For information on setting up a non-production registry that meets requirements, see the following example.

Example of how to set up a Docker registry

This section walks you through the process of setting up a non-production, insecure registry that IIoT Core Services can use with Docker Registry.

A non-production (development-only) environment requires an OCI-compliant registry that uses HTTPS. To authenticate with the registry, use a username and password, not an auth plugin or credential helper.

Important: This registry configuration is not for production purposes. For production use, configure a registry with a trusted certificate and enforced authentication. As a best practice, configure the registry to be read-only for runtime use.

Before you begin

The following must be set up before running the procedure:

  • Docker
  • OpenSSL command line

Procedure

  1. Generate a self-signed OpenSSL certificate.

    mkdir -p certs
    openssl req \
      -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
      -x509 -days 365 -out certs/domain.crt -subj "/CN=$(hostname -f)"
  2. Start the Docker registry on port 5000 by passing the self-signed OpenSSL certificate to the registry.

    docker run -d -v "$(pwd)"/certs:/certs \
      -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 \
      -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
      -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
      -p 5000:5000 \
      --name registry \
      --restart unless-stopped \
      registry:2
    Note: For additional options, see https://docs.docker.com/registry/deploying/.
  3. Ensure that the registry is included in the list of insecure registries in your container runtime.

    Make sure to include <registry_hostname>:5000 in insecure-registries in /etc/docker/daemon.json.
    1. Create a /etc/docker/daemon.json file, if you don't already have one.

    2. In this file, add <registry_hostname>:5000 to a list of insecure registries:

      {  
          "insecure-registries" : [
              "<registry_hostname>:5000"
          ]  
      }
    3. Restart Docker for the configuration changes to take effect:

      systemctl restart docker
    4. Run docker info and verify that the list of Insecure Registries is correct.

      For example:
      Client:
       Debug Mode: false
      Server:
       Containers: 150
        Running: 67
        Paused: 0
        Stopped: 83
       Images: 217
       Server Version: 19.03.5-ce
       ...
       Insecure Registries:
        <registry_hostname>:5000
        127.0.0.0/8
       Live Restore Enabled: false
    For more information on daemon.json, see the official Docker documentation.
  4. Test the registry by pulling, tagging, and pushing an image:

    docker pull ubuntu
    docker image tag ubuntu $(hostname -f):5000/my-ubuntu
    docker push $(hostname -f):5000/my-ubuntu

    If you see output similar to the following, your registry is working correctly:

    Using default tag: latest
    latest: Pulling from library/ubuntu
    423ae2b273f4: Pull complete 
    de83a2304fa1: Pull complete 
    f9a83bce3af0: Pull complete 
    b6b53be908de: Pull complete 
    Digest: sha256:04d48df82c938587820d7b6006f5071dbbffceb7ca01d2814f81857c631d44df
    Status: Downloaded newer image for ubuntu:latest
    docker.io/library/ubuntu:latest
    The push refers to repository [<registry_hostname>:5000/my-ubuntu]
    1852b2300972: Pushed 
    03c9b9f537a4: Pushed 
    8c98131d2d1d: Pushed 
    cc4590d6a718: Pushed 
    latest: digest: sha256:0925d086715714114c1988f7c947db94064fd385e171a63c07730f1fa014e6f9 size: 1152

    You can also list the contents of the registry using the following commands:

    $ curl https://$(hostname -f):5000/v2/_catalog -k
    
    {"repositories":["my-ubuntu"]}

Results

The registry is now set up and ready for use.

Next steps

To remove this registry, run the following command:

docker stop registry
docker rm registry

Port configuration requirements (Core Services)

To use IIoT Core Services, you must provide access to the ports used by the system services and databases.

Message brokers, databases, and external-facing services

IIoT Core Services uses a combination of message brokers and RESTful services. Message brokers establish communication between applications and infrastructure for message queues and topics.

Check the following message brokers, databases, and other services to determine if the corresponding ports need to be open to run your IIoT Core Services. The needed ports must be open both on the load balancer and on each node.
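If a firewall is running on the nodes, you can open a required port with firewalld on RHEL, as in this sketch for port 30443 (repeat for each port you need):

    firewall-cmd --permanent --add-port=30443/tcp
    firewall-cmd --reload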

Message brokers
AMQP - RabbitMQ
  Description: Messaging over AMQP
  Default port: 30671
  Optional install: No
  Default login: admin
  Documentation: https://www.rabbitmq.com/documentation.html

Databases
CouchDB
  Description: Unstructured data database access
  Default port: 30084
  Optional install: No
  Default login: admin
  Documentation: http://docs.couchdb.org/en/stable/
  UI: https://<IP-address>:30084, https://<load-balancer-address>:30084, or https://<cluster_fqdn>:30084

MongoDB
  Description: Unstructured data database access
  Default port: 30017
  Optional install: Yes
  Default login: admin
  Documentation: https://www.mongodb.com/docs/

InfluxDB
  Description: Time-series data (historical data) database access
  Default port: 30086
  Optional install: No
  Default login: admin
  Documentation: https://docs.influxdata.com/influxdb

MinIO
  Description: Object storage database access
  Default port: 31000
  Optional install: No
  Default login: admin
  Documentation: https://docs.min.io/docs/
  UI: https://<IP-address>:31000/minio/login, https://<load-balancer-address>:31000/minio/login, or https://<cluster_fqdn>:31000/minio/login

Services
Hiota Ingress REST
  Description: HTTPS data ingestion to IIoT Core
  Default port: 30443
  Optional install: No
  Default login: N/A

Hiota Passport
  CouchDB
    Description: Passport API access to Couch data
    Default port: 30224
    Optional install: No
  InfluxDB
    Description: Time-series data (historical data) access
    Default port: 30223
    Optional install: No
  PostgreSQL
    Description: Structured data access
    Default port: 30228
    Optional install: No

Hiota Product APIs
  Description: Access to management plane APIs
  Default port: 30443
  Optional install: No
  Documentation: See Management plane REST API

OAuth-Helper
  Description: Simple OAuth handling
  Default port: 30303
  Optional install: No
  Default login: N/A

Internal core services

Because the following ports are used by internal IIoT Core Services applications, verify that these ports are open to external access for the assigned IIoT Core Services to work properly.

Message brokers
Kafka
  Description: Kafka messaging support
  Default ports: 30091, 30092
  Documentation: https://kafka.apache.org/intro

RabbitMQ (https-UI)
  Description: UI for troubleshooting
  Default port: 31671
  Documentation: https://www.rabbitmq.com/documentation.html

MQTT - RabbitMQ
  Description: Messaging over MQTT for gateway devices
  Default port: 30884
  Documentation: https://www.rabbitmq.com/documentation.html

Databases
ArangoDB
  Description: ArangoDB multi-model database system
  Default port: 30529
  Default login: admin
  Documentation: https://www.arangodb.com/documentation/

CouchDB (https-UI)
  Description: UI for troubleshooting
  Default port: 30984
  Default login: admin
  Documentation: http://docs.couchdb.org/en/stable/
  UI: https://<IP-address>:30984/_utils/#login, https://<load-balancer-address>:30984/_utils/#login, or https://<cluster_fqdn>:30984/_utils/#login

Services
Docker Trusted Registry
  Description: Private Docker trusted registry that stores and manages Docker images for gateway services or user applications that run on gateways
  Default port: 32500
  Documentation: https://docs.docker.com/ee/dtr/

Hiota Alert Manager
  Description: Enables alert management
  Default port: 30443

Hiota Asset
  Description: Enables asset and gateway management
  Default port: 30443

Hiota Kube Resource
  Description: Management wrapper API for Kubernetes resources for activities such as deploying software and configurations to gateways
  Default port: 30443

Hiota Manager (gRPC server)
  Description: gRPC server for internal connections
  Default port: 30999

Hiota Manager (REST server)
  Description: REST server for hiota-agent
  Default port: 30998

Hiota OI Manager
  Description: Open Image Manager enables upload of software on the user interface and provides statuses
  Default port: 30800

OAuth-Helper
  Description: Simple OAuth handling
  Default port: 30303

Hiota Registry
  Description: Access to core and gateway route endpoints and statuses, as well as core service configurations
  Default port: 30443

Hiota User Preferences
  Description: User preferences for notifications
  Default port: 30231

Useful commands for installation node VM

When installing IIoT Core Services on GKE using the recommended separate installation node, the following commands are helpful:

Run these commands from the terminal:
  • gcloud auth login --no-launch-browser
  • gcloud config set project <project-name>
  • gcloud auth print-access-token

Commands to access the VM instance:
  • gcloud compute ssh --zone "us-central1-c" "<test-cluster-vm-name>" --tunnel-through-iap --project "<project-name>"
  • gcloud container clusters get-credentials <cluster-name> --zone us-central1-c
  • kubectl config get-contexts

Commands to copy files to and from Google Cloud Storage:

gsutil is a Python application that lets you access Google Cloud Storage from the command line. For example, you can use gsutil for:
  • creating and deleting buckets
  • uploading, downloading, and deleting objects
  • listing buckets and objects
  • moving, copying, and renaming objects

Useful gsutil commands:

  • gsutil ls -al gs://hiota-install/
  • gsutil cp lei-installer-release-<release_number: x.y.z>.tgz  gs://hiota-install/lei-<release_number: x.y>/
  • gsutil cp gs://hiota-install/gcr-token.json gcr-token.json

Machine learning service resource requirements

Machine learning service is a set of services that offers machine learning tools as part of cloud computing services.

You can activate Machine learning service during the IIoT Core Services installation process to get started with machine learning.

For all cluster nodes, if you elect to enable Machine learning service during the IIoT Core Services installation process, note the following resource requirements:

Requirements                                 Specifications
Minimum memory and processor requirements
  • 16 vCPU
  • 48 GB RAM
Disk space requirements
  • 10 GB storage for each user on Jupyter
  • 1 GB for JupyterHub
  • 2 GB for Seldon Core Analytics Prometheus Alert Manager
  • 15 GB for MLflow database

Installation worksheets

Use the following tools to assist you in the installation process.

Installation checklist

Use this installation checklist prior to the installation of the platform components and IIoT Core Services.

  • Installer machine or VM: OS RHEL 8.4; Docker v20.10.12; Helm v3.6.3; Kubernetes v1.23.9 (Yes/No)
  • Kubernetes nodes (on premises): 3 or more nodes; OS RHEL 8.4; multipathd enabled (if HSPC is used as a storage plugin); firewall disabled (Yes/No)
  • A valid FQDN (Yes/No)
  • A load balancer (Yes/No)
  • A private or public Docker registry (Core-DTR): verify that the Docker login from the installer VM, the URL, and the credentials work (Yes/No)

Access the IIoT Core Services software

To download the IIoT Core Services software, go to https://support.pentaho.com and log in. The software includes the following TAR files:

  • IIoT Core Services platform installation package.
  • IIoT Core Services main installer script.
  • (Optional) Modbus. Install the Modbus protocol after the core installation is complete.
  • IIoT Core Services Docker images.
  • (Optional) Machine learning service Docker images.
  • (Optional) Digital Twin Beta Docker images.
  • (Optional) Command Line Interface (CLI) application.

Setting up the Kubernetes cluster

IIoT Core Services can be deployed on different types of Kubernetes clusters: a GKE cluster with a user-hosted Google Container Registry (GCR), or an on-premises cluster with your own Docker Trusted Registry (DTR). The following topics cover both options.

IIoT Core comes with a Docker Trusted Registry (DTR) that is used to store Docker images of gateway services and user applications. See information about the Docker Trusted Registry service in Internal core services.

Note: This DTR is different from the IIoT Core DTR or Google Container Registry (GCR) that is used to store the Docker images for IIoT Core Services.

For memory and disk space requirements for the installation node, see Installation node requirements.

Configure a GKE cluster with user hosted GCR

You can install a Google Kubernetes Engine (GKE) cluster by creating your own hosted Google Container Registry (GCR).

As a best practice, use an installer VM that is outside the cluster but can access the cluster nodes.

To log in to the GCR repository, use the JSON file reference to get the token:

cat <read-write-json-token> | docker login -u _json_key --password-stdin <gcr-url>
cat <read-write-json-token> | HELM_EXPERIMENTAL_OCI=1 helm registry login <gcr-url> -u _json_key --password-stdin

Procedure

  1. Download the following tarballs to a node that can connect to the hosted DTR and has Docker installed:

    • iiot-docker-images-release-5.1.0.tgz
    • iiot-installer-release-5.1.0.tgz
    • (Optional) mlservice-docker-images-1.2.0.tgz
    • (Optional) aaf-docker-images-5.1.0.tgz (Digital Twin Beta)
    On this node, the partition with /var/lib/docker (Docker default local repository) should have at least 120 GB free space. You can check how much space is available using the command df -h /var/lib/docker. The partition is needed later for loading and pushing Docker images to the registry.
  2. Open the core software package by executing the following command:

    tar xvf iiot-installer-release-5.1.0.tgz
    A new directory is created: iiot-installer-release-5.1.0
  3. Open the Docker images TAR file:

    tar xvf iiot-docker-images-release-5.1.0.tgz
  4. (Optional) Open the Machine learning service images TAR file and move the untarred folder to iiot-installer-release-5.1.0/mlaas/images:

    tar xvf mlservice-docker-images-1.2.0.tgz
    mv mlaas/images iiot-installer-release-5.1.0/mlaas/
  5. (Optional) Open the Digital Twin Beta Docker image file and move the untarred folder to iiot-installer-release-5.1.0/aaf/images:

    tar xvf aaf-docker-images-5.1.0.tgz
    mv aaf/images iiot-installer-release-5.1.0/aaf/
  6. Push the hiota-solutions Helm chart and corresponding solution package to the IIoT Core Services DTR.

    This push only needs to be done once before starting the installation process so that the Solution Control Plane can manage hiota-solutions. Otherwise, hiota-solutions will not appear on the control plane user interface even though IIoT Core Services is running.
    export HELM_EXPERIMENTAL_OCI=1
    helm chart save <path to IIoT Core Services 5.1.0 image>/iiot-installer-release-5.1.0/Module-4/hiota-solutions-5.1.0.tgz <core-dtr url>/hiota-solutions:5.1.0
    helm registry login <core-dtr url> -u <username> -p <password>
    helm chart push <core-dtr url>/hiota-solutions:5.1.0
    kubectl apply -f <path to IIoT Core Services 5.1.0 image>/iiot-installer-release-5.1.0/Module-4/roles/core-services/install/files/hiota_solution_package.yaml
  7. Obtain the hosted GCR login information (json_key file with read-write permission, for example).

  8. Tag the Docker images and push them to the hosted registry. This push needs to be done only once.

    Use the script provided in the IIoT Core Services installer folder: iiot-installer-release-5.1.0/tag-push-docker-images.sh.
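    In essence, such a script tags each loaded image for your registry and pushes it; done by hand for a single image (the image name here is hypothetical), the equivalent commands would be:

    docker tag hiota/example-service:5.1.0 <gcr-url>/hiota/example-service:5.1.0
    docker push <gcr-url>/hiota/example-service:5.1.0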
  9. Create the GKE three-node cluster.

    Note: To run Machine learning service, five nodes are recommended.
  10. Create the installer VM on GCP with Python 3, Helm 3.6.3, and kubectl installed.

    Check if you can connect to the Kubernetes cluster with the kubectl command. The rest of the IIoT Core Services installation process is performed on the installer VM.
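    For example, a quick connectivity check is to list the cluster nodes:

    kubectl get nodes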

Results

Your cluster is now prepared to install IIoT Core Services using your own hosted GCR. To proceed with the installation of IIoT Core Services, go to Installing IIoT Core Services.

Configure a Kubernetes cluster on premises

You can run IIoT Core Services on your own Kubernetes cluster using your own Docker Trusted Registry (DTR).

As a best practice, use an installer VM that is outside the cluster but can still access the cluster nodes.

Procedure

  1. Download the following tarballs to a node that can connect to the hosted DTR and has Docker installed.

    • iiot-docker-images-release-5.1.0.tgz
    • iiot-installer-release-5.1.0.tgz
    • (Optional) mlservice-docker-images-1.2.0.tgz
    • (Optional) aaf-docker-images-5.1.0.tgz (Digital Twin Beta)
    On this node, the partition with /var/lib/docker (Docker default local repository) should have at least 120 GB free space. You can check how much space is available using the command df -h /var/lib/docker. The partition is needed later for loading and pushing Docker images to the registry.
  2. Open the core software package by executing the following command:

    tar xvf iiot-installer-release-5.1.0.tgz
    A new directory is created: iiot-installer-release-5.1.0
  3. Open the Docker images TAR file:

    tar xvf iiot-docker-images-release-5.1.0.tgz
  4. (Optional) Open the Machine learning service images TAR file and move the untarred folder to iiot-installer-release-5.1.0/mlaas/images:

    tar xvf mlservice-docker-images-1.2.0.tgz
    mv mlaas/images iiot-installer-release-5.1.0/mlaas/
  5. (Optional) Open the Digital Twin Beta Docker image file and move the untarred folder to iiot-installer-release-5.1.0/aaf/images:

    tar xvf aaf-docker-images-5.1.0.tgz
    mv aaf/images iiot-installer-release-5.1.0/aaf/
  6. Obtain the hosted core DTR login information, such as username and password.

  7. Tag the Docker images and push them to the hosted registry. This push needs to be done only once.

    Use the script provided in the IIoT Core Services installer folder: iiot-installer-release-5.1.0/tag-push-docker-images.sh.
  8. Create an on-prem three-node Kubernetes cluster with a storage plugin.

    Note: To run Machine learning service, five nodes are recommended.
  9. Verify that Python 3, Helm 3.6.3, and kubectl are installed on the installation node.

Results

Your cluster is now prepared to install IIoT Core Services. To proceed with the installation of IIoT Core Services, go to Installing IIoT Core Services.

Installing IIoT Core Services

After the software prerequisites are met and your hosted environment is set up, you can install IIoT Core Services.

Install the platform components

This section describes how to install the IIoT Core Services platform components, also referred to as Foundry, for the purpose of installing IIoT Core Services.

Before you begin

If you are reinstalling the platform components, first uninstall any current instance of the software on your system to conserve system resources. See Uninstall the platform components.

Note: Before uninstalling the platform components, make sure to first uninstall IIoT Core Services. See Uninstall IIoT Core Services.

Procedure

  1. Log in to the installation node.

  2. Download the platform installation package to the installation node:

    Foundry-Control-Plane-2.4.1.tgz
  3. Open the platform installation package by executing the following command:

    mkdir Foundry-Control-Plane-2.4.1
    tar xvf Foundry-Control-Plane-2.4.1.tgz -C ./Foundry-Control-Plane-2.4.1
    Note: You must perform the installation from a directory that is at least two levels from the root level, as shown above.
  4. Navigate to the platform software directory:

    cd Foundry-Control-Plane-2.4.1
  5. Get an access token:

    gcloud auth print-access-token
  6. To install Keycloak, manually create a file called foundry-control-plane-values.yaml, which is used with the control plane installation command, with the following contents:

    keycloakoperator:
      publicPath: /auth
    configuration:
      keycloak:
        enableDockerAuthentication: true
        instances: 3
    logging:
      enabled: false
  7. If you have credentials for the IIoT Core Services Docker Trusted Registry (DTR), try logging into the DTR from the cluster nodes to verify access.
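    For example (the registry URL and credentials are placeholders):

    docker login <core-dtr-url> -u <username> -p <password>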

  8. Navigate to the bin directory:

    cd foundry-control-plane-2.4.1/bin
    Note: You must perform the installation from a directory that is at least two levels from the root level, as in /root/foundry-control-plane-2.4.1/bin.
  9. Install the platform cluster service using the following commands:

    For GKE:

    ./install-cluster-services.sh -r <core-dtr-url> -w service-type=NodePort -u oauth2accesstoken -p $(gcloud auth print-access-token)

    For on-premises:

    ./install-cluster-services.sh -r <core-dtr-url> -w service-type=NodePort -u <username> -p <password>
  10. Set the number of replicas of istiod, istio-ingressgateway, and istio-egressgateway you want to run by scaling as follows:

    Example using three replicas:
    kubectl scale deployment -n istio-system --replicas=3 istiod
    kubectl scale deployment -n istio-system --replicas=3 istio-ingressgateway
    kubectl scale deployment -n istio-system --replicas=3 istio-egressgateway
  11. Apply Kubernetes custom resource definitions.

    For GKE:

    cd Foundry-Control-Plane/bin/
    ./apply-crds.sh -r <gcr-url> -e -u oauth2accesstoken -p $(gcloud auth print-access-token)

    For on-premises:

    cd Foundry-Control-Plane/bin/
    ./apply-crds.sh -r <core-dtr-fqdn>:[<port>] -e -u <username> -p <password> --insecure
  12. Install the control plane.

    For GKE:

    cd Foundry-Control-Plane/bin/
    ./install-control-plane.sh -r <gcr-url> -c https://<cluster-fqdn>:30443 -n hiota -u oauth2accesstoken -p $(gcloud auth print-access-token) -v <path-to>/foundry-control-plane-values.yaml --skip_cluster_url_check

    For on-premises:

    cd Foundry-Control-Plane/bin/
    ./install-control-plane.sh -I -r <core-dtr-url> -c https://<cluster-fqdn>:30443 -n hiota -u <username> -p <password> -v <path-to>/foundry-control-plane-values.yaml

Results

To verify that the platform components have been properly installed, go to Verify IIoT Core platform installation.

Verify IIoT Core platform installation

Log in to the Solution Management UI, an administrative console for the installation, and check the platform software version.

Procedure

  1. From the command line on the installation node, get the username and password for the Solution Management UI:

    • Username:
      echo $(kubectl get keycloakusers -n hiota keycloak-user -o jsonpath='{.spec.user.username}')
    • Password:
      echo $(kubectl get keycloakusers -n hiota keycloak-user -o jsonpath='{.spec.user.credentials[0].value}')
  2. Log in to the Solution Management UI using the acquired credentials:

    https://<cluster-fqdn>:30443/hiota/hscp-hiota/solution-control-plane/
    where <cluster-fqdn> is the location where IIoT Core Services is installed.
  3. Navigate to Configuration Registry and replace the JSON password as follows:

    1. Get the JSON token and save it to the file gcr-token.json.
    2. Insert _json_key in the Username field on the Registry tab.
    3. Open the gcr-token.json file to obtain the token.
    4. Copy the token to the Password field on the Registry tab by pasting the password in a single line, and click to save.
    5. Click the Solutions tab, then the Installed sub-tab, to verify the platform software version.
    If the Version field shows the correct platform version number, the control plane is correctly installed.
  4. Check for any port number conflicts for ports 30086, 30529, 30671, 30983, 30984, 30998, 31000, 32400, and 32500 by running the following command:

    kubectl get service -n istio-system istio-ingressgateway
    Alternatively, check one or more specific ports using the following command:
    kubectl get service -A | grep -e <port number A> -e <port number B>
  5. If a record is found, edit istio-ingressgateway and alter the port number accordingly:

    kubectl edit service -n istio-system istio-ingressgateway

Results

When all port number conflicts are resolved, you are ready to install IIoT Core Services.

Installing IIoT Core Services in a cluster

You can install and configure IIoT Core Services in your cluster using the following instructions.

Start IIoT Core Services installation

This procedure walks you through the first part of the IIoT Core Services installation process.

Before you begin

  • Configure the load balancer to forward traffic to IIoT Core Services.
  • If you are reinstalling IIoT Core Services, first uninstall any current instance of the software on your system to conserve system resources.
  • Complete all prerequisites as described in IIoT Core Services prerequisites.
  • Configure a Kubernetes cluster using one of the options provided in Setting up the Kubernetes cluster. This includes downloading the installer and Docker images and extracting them on the installer node.

Procedure

  1. Log in to the installation node.

  2. Navigate to iiot-installer-release-5.1.0.

  3. Install the necessary libraries:

    export PATH=$PATH:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
    ln -sf /usr/bin/python3 /usr/bin/python
    pip3 install -r requirements.txt
  4. Start the IIoT Core Services preinstallation and installation process by running the following script:

    ./iiot_install.sh

    The software license terms appear.

  5. Read through the license, pressing Enter to scroll down. At the end of the license, enter y(es) at the prompt to agree to install.

    The main installer menu appears. Use this menu to Configure IIoT Core Services.

Configure IIoT Core Services

You can configure the cluster, add users, and configure storage and other settings using the IIoT Core Services installer.

Procedure

  1. From the main installer menu, enter 1 for Node Configuration.

  2. From the Nodes menu, enter 1 for Add Nodes to configure the cluster.

    Enter the requested information:
    Field         Description
    Hostname      Enter the hostname of the node being added.
    IP address    Enter the IP address for the node.
    Role
      • On-premises: Type "master" for all nodes.
      • Cloud: Type "master" for at least one node. Label the remaining nodes "worker."
    Select the option to Return to the Main Menu.
  3. Enter 2 for Load Balancer Configuration to configure the load balancer.

    Enter 1 to Add / Edit Load Balancer.
    Field         Description
    FQDN          Enter the fully qualified domain name (FQDN) of the load balancer.
    Hostname      Enter the hostname of the load balancer.
    IP address    Enter the IP address of the load balancer.
    Select the option to Return to the Main Menu.
  4. Enter 3 for Profile Configuration to configure user, deployment, and storage profiles and other settings.

  5. In the Profile Configuration menu, enter 1 for User Profiles to create users.

    Enter 1 to Add User(s) and specify the following information for each new user:
    Field         Description
    Username      Enter a username for the new user.

                  You can use, modify, or delete the following pre-configured users as needed:

                  • Admin username: hiota
                    Password: Change11me
                  • Technician username: hiota_read
                    Password: Change11me

    Password      Enter a password for the new user.
    Email         Enter the email address for the new user.
    Role          Enter the user role: admin, technician, or operator.
    You must add at least one admin user and one user in another role to gain access to the IIoT Core Services user interface after installation.

    Select the option to Return to the Main Menu.

  6. Enter 2 for Service Passwords.

    For each service, either press Enter to accept the generated password, or type your own password and then press Enter. The passwords you create must have at least eight characters, one lowercase letter, one uppercase letter, and one number. When you create your passwords, copy and store them for easy access. If you forget the generated passwords, you can find them in the Kubernetes dashboard.

    Select the option to Return to the Main Menu.
  7. Enter 3 for Deployment Profiles and 1 for Edit Profile.

    In the Edit Profile menu, select a deployment profile.
    Profile                  Description
    development (default)    Enter 1 to select the development environment and safely test your deployment without affecting the production environment.
    production               Enter 2 to select the production environment.

    Whether you select development or production in this menu, the minimum requirements in Preparing for IIoT Core Services installation apply. Your selection affects the number of replicas, and the CPU and memory requests and limits.

    Select the option to Return to the Main Menu.

  8. Enter 4 for Storage Profiles.

    Enter 1 to Edit / Review PVC ReadWriteMode or keep the default setting:
    PVC ReadWriteMode          Description
    ReadWriteOnce (default)    Mount the volume as read-write by a single node. Use this option for VMware CNS or GCR.
    ReadWriteMany              Mount the volume as read-write by many nodes. Use this option for HSPC.

    For option 2, Edit / Review Service Storage Partitions, enter the storage size for each service needed for your applications.

    Select the option to Return to the Main Menu.

  9. Enter 5 for Cloud / On Prem & DTR Configuration.

    Field                                                      Description
    Enable / Disable - Cloud / On Prem Install                 Enter true to enable on-premises install or false to enable cloud install.
    View / Edit Core Services Image Registry Configuration     Enter the FQDN URL, port number, and current username and password for the core registry. This information is required even if you select Cloud.

    Select the option to Return to the Main Menu.

  10. To enable RBAC, enter 6 for Optional Features, then Edit / Review RBAC.

    The setting to install RBAC is disabled by default. Toggle it on or off by entering y(es) or n(o).

    Select the Return to Main Menu option to return to the Profile Configurations menu, then select Return to Main Menu again to return to the start menu.
  11. Enter 4 for Optional Installations.

    The setting to install optional services is turned off by default. Toggle on or off by entering true or false for each service:
    Profile              Description
    Knative              Deploy, run, and manage serverless, cloud-native applications with Kubernetes.
    ML Service           Deploy cloud-based machine learning tools. See the table below for Machine learning service configuration options. For more information about Machine learning service and resource requirements, see Machine learning service resource requirements.
    Digital Twin Beta    Activate Digital Twin mode and the ability to add digital twin objects that are installed with IIoT Core.
                         Note: To activate this option, you must also select the ML Service option.
                         Note: Digital Twin Beta is currently an experimental feature in IIoT Core Services.

    The Machine learning service options include the following:

    ML Service Option    Description
    NFS_Server           Specify storage size for file sharing using a Network File System (NFS) server. The default size is 9 GB.
    Model_Lifecycle      Enter true or false to enable or disable model lifecycle management.
    Model_Management     Enter true or false to enable or disable model management.
    Model_Server         Enter true or false to enable or disable a model server.
    Notebook             Enter true or false to enable or disable Jupyter Notebook.
    Ray                  Enter true or false to enable or disable Ray for Machine learning service scaling.
    Select Return to Optional Installations Menu.
  12. Enter 10 to Validate Configuration Parameters.

  13. Return to the Main Menu and enter 10 to Exit Installer Menu.

    The installer checks if all the parameters are correctly set.

Results

The installation process begins. Let it complete before performing any other action. After successful installation, the information entered in the menu is saved as a Kubernetes secret named hiota-installation-configuration-values-secret in the hiota namespace.
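If you later need to review the saved configuration, you can inspect this secret; the values are base64-encoded:

kubectl get secret -n hiota hiota-installation-configuration-values-secret -o yaml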

Perform core post installation tasks

You can perform the following post-installation tasks to verify that the IIoT Core Services installation is successful.

Sign in to the IIoT Core Services UI and verify that the IIoT Core Services are running.

Push solutions to Solution Management UI (GKE only)

For the solutions that you install with IIoT Core to display in the Solution Management UI, the corresponding Helm charts must be pushed to the GCR registry when the software is running on GKE.

Use the following template instructions.

Procedure

  1. Log in to the VM installation node as root.

  2. Save each Helm chart to the registry:

    helm chart save <image> <registry>/<solution-name>:<tag>
  3. Push each Helm chart using the following template:

    helm chart push <registry>/<solution-name>:<tag>
The following examples show how to apply the templates for various solutions:
helm chart save hiota-solutions-5.1.0 us.gcr.io/<registry>/hiota-solutions:5.1.0
helm chart push us.gcr.io/<registry>/hiota-solutions:5.1.0

helm chart save lumada-ml-model-lifecycle-0.1.0-22.tgz us.gcr.io/<registry>/lumada-ml-model-lifecycle:0.1.0-22
helm chart push us.gcr.io/<registry>/lumada-ml-model-lifecycle:0.1.0-22

helm chart save lumada-ml-model-server-0.1.0-36.tgz us.gcr.io/<registry>/lumada-ml-model-server:0.1.0-36
helm chart push us.gcr.io/<registry>/lumada-ml-model-server:0.1.0-36

helm chart save lumada-ml-model-management-1.0.0-b7.tgz us.gcr.io/<registry>/lumada-ml-model-management:1.0.0-b7
helm chart push us.gcr.io/<registry>/lumada-ml-model-management:1.0.0-b7

helm chart save lumada-ml-notebook-0.2.0-76.tgz us.gcr.io/<registry>/lumada-ml-notebook:0.2.0-76
helm chart push us.gcr.io/<registry>/lumada-ml-notebook:0.2.0-76

helm chart save mlaas-ray-0.2.0-10.tgz us.gcr.io/<registry>/mlaas-ray:0.2.0-10
helm chart push us.gcr.io/<registry>/mlaas-ray:0.2.0-10

Managing certificates in IIoT Core

Obtain a CA certificate for IIoT Core Services

IIoT Core Services uses a self-signed certificate that is not trusted by operating systems, browsers, and other applications. As a result, client services that connect to IIoT Core Services do not automatically trust its certificates. They must be configured with the IIoT Core Services certificate.

This section describes how to obtain the self-signed CA certificate used by IIoT Core Services. You can add this certificate to the truststore of your operating system, browser, or application to make it trusted.

Before you begin

The following is required to run the script:

Element    Description
bash       Unix shell (or similar)
curl       HTTP(S) utility
jq         JSON string processing utility
sed        Utility to perform basic text transformations

Procedure

  1. Log in to the installation node.

  2. Navigate to the iiot-installer-release-5.1.0 directory.

  3. Run the script with the following arguments:

    appliance-ca.sh <appliance host address> <user name> [<certificate file path>]
    Argument                  Description
    appliance host address    The platform IP address.
    user name                 The name of the admin user authorized to access the certificate.
    certificate file path     (Optional) The file path where the certificate should be saved.
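    For example, a hypothetical invocation that saves the certificate to the current directory might look like the following; the address and user name are placeholders:

    ./appliance-ca.sh 203.0.113.20 admin ./iiot-core-ca.crt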

Enhance HTTPS security by using a hybrid certificate solution

When you access IIoT Core Services, the cluster uses self-signed certificates by default, and you may see a “Not Secure” warning message from your web browser. To get rid of this warning message, you can opt for a hybrid certificate solution and convert the self-signed certificates to public-signed certificates.

Note: An alternative solution is to add the CA certificates to your browser truststore (as mentioned in Obtain a CA certificate for IIoT Core Services).

Limitations of the hybrid certificate solution:

  • If you have an IIoT Gateway enrolled, the gateway will use the Hitachi Vantara self-signed CA certificate.
  • If you have Kafka installed, Kafka will use the Hitachi Vantara self-signed CA certificate.
  • The certificates are not renewed automatically. You need to renew them manually before they expire.

Update the cluster using a hybrid certificate solution

To update the cluster using a hybrid certificate solution and convert the self-signed certificates to public-signed certificates, perform the following steps.

Note: Skip this process if you do not want to use the hybrid certificate solution and prefer to use the cluster's default self-signed certificates.

Procedure

  1. Get the following public-signed certificates from your Certificate Authority (CA).

    First certificate: Has the cluster's FQDN as the CN name. Add lumada-edge.hiota.{FQDN} in the SAN.
    Second certificate: Has lumada-edge.hiota.{FQDN} as the CN name.
  2. Perform the following steps after you receive the public-signed certificates from your CA.

    First certificate:
    1. Save the root CA certificate to the ca.crt file.
    2. Save the private key to the tls.key file.
    3. Save the signed certificate to the tls.crt file.

    Second certificate:
    1. Save the private key to the hiota.tls.key file.
    2. Save the signed certificate to the hiota.tls.crt file.
  3. Copy the following files to the cluster.

    • ca.crt
    • tls.key
    • tls.crt
    • hiota.tls.key
    • hiota.tls.crt

    It is assumed that:

    • Both certificates are signed by the same CA.
    • Both tls.crt and tls.key files must be in PEM format. If the files are not in PEM format, you need to convert them.

      If you get a file in the PKCS#12 format (.pfx), you can run the following command to convert it to PEM format and then rename it as specified above.

      openssl pkcs12 -in filename.pfx -out cert.pem -nodes
  4. Copy all the scripts from this directory to the same directory where you copied the ca.crt, tls.key, and tls.crt files.

  5. Update the istio-system certificates by running the following script with the cluster FQDN as the parameter. For example, if the FQDN of your cluster is my.example.fqdn, run:

    ./update_istio_certificate.sh my.example.fqdn
  6. Back up the certificates on your cluster.

    Note: This step is only needed when you update the certificate for the first time. Skip this step during the certificate renewal process.
    1. Run backup_and_delete_certificate.sh to back up the certificate files on the cluster and delete the old certificates managed by cert-manager.

    2. Copy all the YAML files as a back up. These files will be required during the certificate renewal process.

  7. Update the certificates for all services by running the following script using the cluster FQDN as the parameter. For example, if your cluster FQDN is my.example.fqdn, you can run the following script:

    ./update_certificate.sh my.example.fqdn

Renew certificates using a hybrid certificate solution

In the hybrid certificate solution, the public-signed certificates are not renewed automatically by cert-manager. You need to renew them manually before the expiration date.

Note: The self-signed certificates are renewed automatically.

Perform the following steps to renew the public-signed certificates.

Procedure

  1. Copy the YAML files that were backed up with backup_and_delete_certificate.sh when you first obtained the hybrid certificates, as stated in step 6 of Update the cluster using a hybrid certificate solution, into the directory you are going to update.

  2. Get the following public-signed certificates from your Certificate Authority (CA).

    First certificate: Has the cluster's FQDN as the CN name. Add lumada-edge.hiota.{FQDN} in the SAN.
    Second certificate: Has lumada-edge.hiota.{FQDN} as the CN name.
  3. Perform the following steps after you receive the public-signed certificates from your CA.

    First certificate:
    1. Save the root CA certificate to the ca.crt file.
    2. Save the private key to the tls.key file.
    3. Save the signed certificate to the tls.crt file.

    Second certificate:
    1. Save the private key to the hiota.tls.key file.
    2. Save the signed certificate to the hiota.tls.crt file.
  4. Copy the following files to the cluster.

    • ca.crt
    • tls.key
    • tls.crt
    • hiota.tls.key
    • hiota.tls.crt

    It is assumed that:

    • Both certificates are signed by the same CA.
    • Both tls.crt and tls.key must be in PEM format. If the files are not in PEM format, they must be converted.

      If you get a file in the PKCS#12 format (.pfx), you can run the following command to convert it to PEM format and then rename it as specified above.

      openssl pkcs12 -in filename.pfx -out cert.pem -nodes
  5. Copy all the scripts from this directory to the same directory where you copied the ca.crt, tls.key, and tls.crt files.

  6. Update the istio-system certificates by running the following script using the cluster FQDN as the parameter. For example, if the cluster FQDN is my.example.fqdn, you can run the following script:

    ./update_istio_certificate.sh my.example.fqdn
  7. Update the certificates for all services by running the following script using the cluster FQDN as the parameter. For example, if the cluster FQDN is my.example.fqdn, you can run the following script:

    ./update_certificate.sh my.example.fqdn

Obtain SSL certificate

You can obtain an SSL certificate for debugging purposes. To get an SSL certificate, use the following instructions.

Function        Commands
Get the ROOT certificate
  • #Get Root certificate (root.crt):

    kubectl get secrets -n istio-system ca-key-pair -o jsonpath='{ .data.tls\.crt }' | base64 -d > ./root.crt

  • #Get Root key (root.key):

    kubectl get secrets -n istio-system ca-key-pair -o jsonpath='{ .data.tls\.key }' | base64 -d > ./root.key

Get the GW certificate (MQTT)
  • #Export:

    export GW_IP='<IP Address>'

  • #Echo:

    echo $GW_IP

  • #OpenSSL:

    openssl s_client -connect $GW_IP:30883 -showcerts </dev/null 2>/dev/null|openssl x509 -outform PEM > ./GW_${GW_IP}_MQTT-cert30883.pem

  • #Explore certificate:

    openssl s_client -connect $GW_IP:30883 -showcerts </dev/null 2>/dev/null|openssl x509 -outform PEM | openssl x509 -text

Get the Kafka certificate
  • #Get Kafka user credentials:

    kubectl -n kafka get secret kafka-user-credentials -o yaml > kafka-user-credentials.yaml

  • #Get TLS:

    cat kafka-user-credentials.yaml | grep tls.jks | awk '{print $2}' | base64 -d > tls.jks

  • #Get password:

    cat kafka-user-credentials.yaml | grep password | awk '{print $2}' | base64 -d > password

Get the AMQP JKS certificate
  • #Get Root certificate (root.crt):

    kubectl get secrets -n istio-system ca-key-pair -o jsonpath='{ .data.tls\.crt }' | base64 -d > root_cert_lumada.cer

  • #Import keytool:

    keytool -import -alias root_cert_lumada -file root_cert_lumada.cer -keystore mykeystore.jks -keypass changeit -storepass changeit

Check SSL certificate expiration

Check the SSL certificate expiration status to ensure that the certificate is correctly installed, valid, and trusted, and does not produce errors. Use the following process to check the SSL certificate expiration status.

Procedure

  1. Retrieve the certificates list saved as a Kubernetes secret in the specified namespace. Run the following command to get the list:

    kubectl get -n ${namespace} secret|grep tls

    For example, to get certificate list saved as a Kubernetes secret in the hiota namespace, run the following command:

    kubectl get -n hiota secret|grep tls
  2. After you find out the secret name of the certificate from the list, run the following command to check the certificate expiration date:

    kubectl -n hiota get secret <secret name> -o=jsonpath='{.data.tls\.crt}'|base64 -d |openssl x509 -noout -dates

    For example, if you want to check the influxdb certificate expiration, run the following command:

    kubectl -n hiota get secret hiota-influxdb-secrets -o=jsonpath='{.data.tls\.crt}'|base64 -d |openssl x509 -noout -dates

    The certificate expiration date will be displayed in the following format:

    notBefore=Dec 9 23:39:21 2022 GMT

    notAfter=Dec 4 23:39:21 2023 GMT

    You can check whether the certificate is expired by using the notBefore and notAfter dates.
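    To scan every TLS secret in the hiota namespace at once, a small loop such as the following sketch can help; it assumes each secret stores its certificate under tls.crt, as in the examples above:

    for s in $(kubectl get secret -n hiota -o name | grep tls); do
      echo "== ${s}"
      kubectl -n hiota get "${s}" -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -dates
    done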

Sign into the IIoT Core Services UI

You can access the IIoT Core Services UI by signing into its web-based user interface.

The first time you sign in, you need the credentials provided by your administrator.

Procedure

  1. Navigate to https://lumada-edge.hiota.<cluster_fqdn>:30443

    where: cluster_fqdn is the fully qualified domain name (FQDN) for the cluster (usually that of the load balancer).
  2. Enter Username and Password.

    If you get an Access Denied message, you do not have the required permissions. Contact your administrator to set up access.

Results

The IIoT Core Services UI opens.

Configuring Modbus

IIoT Core Services includes a Modbus adapter that can be configured for TCP communication between an IIoT Gateway and IIoT Core Services.

Install Modbus

You can perform the following steps to install the Modbus adapter with IIoT Core to connect to your devices.

Procedure

  1. Navigate to the directory where the Modbus installation file is located and extract the contents of the TAR file.

    tar xvf hiota-modbus-lib-5.1.0.tgz

    A new directory is created: Hiota-Modbus-Lib-Installer-5.1.0.

  2. Navigate to the new directory:

    cd Hiota-Modbus-Lib-Installer-5.1.0
  3. Use the following command to enable file executable permissions for the installer script:

    chmod +x Installer.sh
  4. Run the installer:

    ./Installer.sh <cluster-fqdn>

Results

Modbus is installed and ready to configure.

Configure a Modbus adapter

You can perform the following steps to configure the installed Modbus adapter.

Procedure

  1. Enroll an IIoT Gateway on IIoT Core Services as described in Registering and provisioning IIoT Gateway using CLI.

  2. Add a Modbus datamap to the enrolled device.

    The information is stored in the subordinate Modbus server in four different tables. Two tables store on/off discrete values (coils) and two store numerical values (registers). The coils and registers each have a read-only table and read-write table.
    Primary table        Data type    Address range    Number of records    Type          Notes
    Coil                 1 bit        00000-09999      10000 (0x270F)       Read-Write    This type of data can be changed by an application.
    Discrete input       1 bit        10000-19999      10000 (0x270F)       Read-Only     This type of data can be provided by an I/O system.
    Input registers      16 bits      30000-39999      10000 (0x270F)       Read-Only     This type of data can be provided by an I/O system.
    Holding registers    16 bits      40000-49999      10000 (0x270F)       Read-Write    This type of data can be changed by an application.
  3. From the Data Route tab in the IIoT Core Services UI, create a Modbus route.

    When creating the data route, select HIOTA for the Data Type in the Data Profile section. HIOTA is the IIoT Core Services Common Data Model (CDM). For more information about CDM, see Common Data Model.

Results

You can now receive Modbus data using IIoT Core Services.

Example: Configure multiple Modbus connections

With IIoT Core Services, you can configure multiple Modbus servers for each gateway. The following example shows how to configure two gateways with multiple Modbus servers.


Procedure

  1. Enroll two gateways on IIoT Core Services as described in Install and provision an IIoT Gateway.

  2. In the IIoT Core Services UI, select the Device tab.

  3. Deploy the Modbus adapter on each gateway device.

    Follow the instructions in Deploy protocol adapter configurations on a gateway.
    1. Select one of the gateways to be deployed.
      Note: You can only select devices that are online and in Ready state.
    2. Select Deploy > Deploy Protocol Adapter to view the Adapter Configuration page.
    3. In the Adapter Type list, select Modbus.
    4. Paste your Modbus adapter configuration YAML into the Insert Adapter Configuration File text box.

      Example:

      - name: "custom-application-1"
        tags:
          - name: "velocity"
            type: "int16"
            slaveAddr: 0x00
            tableName: HoldingRegister
            startAddr: 0x00
            isBigEndian: true
          - name: "temperature"
            type: "int16"
            slaveAddr: 0x00
            tableName: HoldingRegister
            startAddr: 0x01
            isBigEndian: true
          - name: "a_03_int8"
            type: "int8"
            slaveAddr: 0x00
            tableName: HoldingRegister
            startAddr: 0x02
            isBigEndian: true
          - name: "a_04_uint8"
            type: "uint8"
            slaveAddr: 0x00
            tableName: HoldingRegister
            startAddr: 0x03
            isBigEndian: true
          - name: "a_05_int32"
            type: "int32"
            slaveAddr: 0x00
            tableName: HoldingRegister
            startAddr: 0x04
            isBigEndian: true
          - name: "a_06_float"
            type: "float"
            slaveAddr: 0x00
            tableName: HoldingRegister
            startAddr: 0x06
            isBigEndian: true

      Field          Description
      name           Tag name.
      type           Data type: bool, int8, uint8, int16, uint16, int32, uint32, int64, uint64, float, double, string.
      slaveAddr      The Modbus device addresses a specific subordinate device by placing the 8-bit subordinate address in the address field of the message (RTU mode). The address field of the message frame contains two characters (in ASCII mode) or 8 binary bits (in RTU mode). Valid addresses are 1-247. Subordinate address 0 is used for broadcasts.
      tableName      Name of the primary table. The value is one of Coils, DiscreteInput, HoldingRegisters, or InputRegisters.
      startAddr      Start address for tableName. The range is 0-9999.
      isBigEndian    Boolean. Modbus is a "big-endian" protocol; that is, the more significant byte of a 16-bit value is sent before the less significant byte. In some cases, however, 32-bit and 64-bit values are treated as being composed of 16-bit words, and the words are transferred in "little-endian" order. For example, the 32-bit value 0x12345678 would be transferred as 0x56 0x78 0x12 0x34. You can choose little-endian mode by setting isBigEndian to false.
    5. Click Deploy.
  4. Select the Data Route tab to create a data route for each Modbus connection.

    Note: General instructions for creating a data route are found in Creating a data route.
    1. Click Create Data Route.

      The Create Data Route page opens.

    2. Enter a Name for the data route.
    3. Select the Asset that you are collecting data from.
    4. For Device Type, select Gateway.
    5. For Device, select the name of the gateway.
    6. Enter a Trace ID or keep the default Trace ID that is based on the asset name.
    7. For Data Type, select HIOTA.
    8. In the Data Source section, select Modbus from the Protocol field and the corresponding Hostname / IP Address and Port.
    9. In the Data Destinations section, add the data destination details for one or more data destinations.
    10. Click Save and Deploy.
    Repeat for the remaining data routes.

Results

When all of the data routes have been successfully deployed, the Status column in the data route inventory shows Deployed (Connected) for each one.

Configuring Data Catalog integration

Data Catalog is an optional software component that can be used with IIoT Core to perform data profiling and analyze the content and quality of the data.

To use Data Catalog, contact your Hitachi Vantara sales representative and purchase a separate license.

Prerequisites for configuring Data Catalog integration

Before the Data Catalog integration can be configured, you must complete the following prerequisites:

  • Determine which databases to use as data destinations in the data route. Supported databases are Postgres, MinIO, and MongoDB, for both default and external databases. For default databases, choose a Lumada Default Setting database as a data destination.
    Note: The Lumada Default Setting is currently not supported for MongoDB. To use an external MongoDB, select the non-default MongoDB as a data destination.

    For information on how to create data routes in IIoT Core Services, see Manage data routes.

  • Install and deploy IIoT Core Services v5.1. For instructions, see Install and configure IIoT Core Services.
  • Install and deploy Data Catalog v7.3. Refer to the Data Catalog user documentation.
  • On the host where IIoT Core Services is installed, verify that you have access to the <IIoT Core Services installation location>/ldc directory, which contains integration-related configuration and a setup script.
  • In addition to alphanumeric characters, only spaces, hyphens, and underscores are supported in column names. Data Catalog jobs will fail if a column name contains any other special characters. A quick validation sketch follows this list.
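
The following minimal sketch (the column name is hypothetical; assumes bash) checks a column name against the supported character set before you create a data route:

    # Supported characters: alphanumerics, spaces, hyphens, underscores.
    col="sensor temp_01-a"    # hypothetical column name
    if [[ "$col" =~ ^[[:alnum:][:space:]_-]+$ ]]; then
      echo "OK: supported column name"
    else
      echo "FAIL: column name contains unsupported characters"
    fi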

Set database certificates in Data Catalog

If the databases that store IIoT Core data use publicly signed certificates or if no certificates are required for access, there is no need to update the Data Catalog deployment, as long as the CA certificates are in the Java trusted store where Data Catalog is running.

If you are using self-signed certificates or private CA-signed certificates for access, you need to export any certificates to Data Catalog as described in the following instructions.

Procedure

  1. Export the self-signed certificate of the corresponding database and save it in PEM format. Multiple certificates can be chained as a single text value; a sketch for capturing the full chain follows this procedure.

    Use the following command to extract the certificate from a MinIO database in IIoT Core:
    openssl s_client -showcerts -connect <FQDN>:<port> </dev/null 2>/dev/null | openssl x509 -outform PEM
    Variable  Setting
    FQDN      For an internal database: the fully qualified domain name (FQDN) for the cluster. For an external database: the FQDN of the database server host.
    port      For an internal database: the default port number in the databases table at Message brokers, databases, and external-facing services. For an external database: the database server port.

  2. Navigate to the path where the custom_values.yaml file is located in the Data Catalog deployment and add the remote server certificate obtained in the previous step.

    Example:
    agent:
      extraCerts: |+
        -----BEGIN CERTIFICATE-----
        MIIDqDCCApCgAwIBAgIEYYVSOTANBgkqhkiG9w0BAQsFADBbMScwJQYDVQQDDB5SZWdlcnkgU2Vs
        ****************************************************************************
        ********************************cut*****************************************
        ****************************************************************************
        27Su+O458c91NiUcATpaTgHEnYcbh8dhHhZVwg==
        -----END CERTIFICATE-----
        -----BEGIN CERTIFICATE-----
        Z2NwY2RuLmd2dDEuY29tggoqLmd2dDIuY29tgg4qLmdjcC5ndnQyLmNvbYIQKi51
        ****************************************************************
        ********************************cut*****************************
        ****************************************************************
        cQNSKiNbm5XLjx5Rcgz1PG55uW1yDMLj8lE9+8wr
        -----END CERTIFICATE-----
    app-server:
      extraCerts: |+
        -----BEGIN CERTIFICATE-----
        MIIDqDCCApCgAwIBAgIEYYVSOTANBgkqhkiG9w0BAQsFADBbMScwJQYDVQQDDB5SZWdlcnkgU2Vs
        ****************************************************************************
        ********************************cut*****************************************
        ****************************************************************************
        27Su+O458c91NiUcATpaTgHEnYcbh8dhHhZVwg==
        -----END CERTIFICATE-----
        -----BEGIN CERTIFICATE-----
        Z2NwY2RuLmd2dDEuY29tggoqLmd2dDIuY29tgg4qLmdjcC5ndnQyLmNvbYIQKi51
        ****************************************************************
        ********************************cut*****************************
        ****************************************************************
        cQNSKiNbm5XLjx5Rcgz1PG55uW1yDMLj8lE9+8wr
        -----END CERTIFICATE-----  
  3. Update the Data Catalog deployment so it can communicate with the IIoT Core database by running the following command:

    helm upgrade ldc7 -n <ldc namespace> ldc.7.x.x.tgz -f <path>/custom_values.yaml --version="xxx"
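
The extraction command in step 1 outputs only the first (leaf) certificate, because openssl x509 reads a single PEM block. If you need the entire chain as one text value, the following minimal sketch (the host and port values are placeholders for your database endpoint) captures every certificate that the server presents:

    # Capture all PEM certificate blocks presented by the endpoint.
    FQDN=db.example.internal    # hypothetical database host
    PORT=9000                   # hypothetical database port
    openssl s_client -showcerts -connect "${FQDN}:${PORT}" </dev/null 2>/dev/null \
      | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/' > chain.pem
    # Paste the contents of chain.pem under extraCerts in custom_values.yaml.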

Verify Data Catalog user role permissions

Define a user role in Data Catalog with the permissions API Access and Manage Business Glossary.

Procedure

  1. In the Data Catalog Keycloak client, your Data Catalog service user or site administrator must assign a role to a user and record the username and password.

  2. Log in to Data Catalog using the credentials shared by your Data Catalog service user or site administrator.

  3. Navigate to Management > User Roles, select the steward role, and select the following permissions:

    • Manage Business Glossary
    • API Access
    Role permissions in Data Catalog

Enable Data Catalog integration

After you configure the Data Catalog integration and verify Data Catalog user role permissions, you can activate the Data Catalog integration with IIoT Core Services.

Procedure

  1. Log in to IIoT Core Services as root.

  2. On the host where IIoT Core Services is installed, go to the <IIoT Core Services installer location>/ldc directory.

  3. Gather the required Data Catalog information from the README.txt file, then run the following script to enable the integration:

    bash ldc-setup.sh <ldc-cluster-fqdn-or-ip-address>
    The script configures the required Data Catalog settings in IIoT Core Services. Use the Data Catalog username and password mapped to the role with both API Access and Manage Business Glossary permissions, as described in Verify Data Catalog user role permissions.

View IIoT Core data in Data Catalog

When the Data Catalog integration is enabled, you can view IIoT Core asset and data route information in Data Catalog.

Any updates to this data in IIoT Core are reflected in real time in Data Catalog.

Caution: The synchronization is one-way only, from IIoT Core to Data Catalog. Changes made in Data Catalog after the assets and data routes are synchronized are not synchronized back to IIoT Core.
Asset synchronization

The assets in IIoT Core are automatically imported into Data Catalog as business terms with the format <asset name>_<first 8 digits of asset id>; for example, an asset named Pump01 whose ID begins with a1b2c3d4 would appear as the business term Pump01_a1b2c3d4. This includes any asset hierarchies.

Whenever an asset is created, updated, or deleted in IIoT Core, the corresponding Data Catalog business term is created, updated, or deleted.

The following is an example of IIoT Core asset information as it appears in Data Catalog under the Business Glossary:

IIoT Core assets in Data Catalog Business Glossary

Data route synchronization

When a data route is created, updated, or deleted in IIoT Core and the data destination is set to Postgres, MinIO, or MongoDB, the corresponding database is created, updated, or deleted in Data Catalog. In Data Catalog, the database is referred to as a data source. This includes both default databases and custom databases.

Note: Data routes in IIoT Core that are created before the integration with Data Catalog is enabled do not appear in the Data Catalog UI.

The following is an example of IIoT Core data route information as it appears in Data Catalog under Data Sources:

IIoT Core data routes in Data Catalog Data Sources

Data actions in Data Catalog

In Data Catalog, you can perform a variety of analytics operations on IIoT Core data sources.

For example, you can run scanning, profiling, and discovery jobs by submitting job templates or sequences against the synchronized IIoT Core data sources.

For information about Data Catalog features and capabilities, see the Data Catalog user documentation.

Install Kafka

You can install Kafka v2.13-3.1.0 with IIoT Core Services 5.1 as an optional component.

Before you begin

  • For any previous installation of IIoT Core Services that included Kafka, verify that all Kafka and Zookeeper resources are completely uninstalled before executing this procedure by running the following commands. Each command should return no resources; see the sketch after this list.
    kubectl -n kafka get kafkausers
    kubectl -n kafka get kafkatopics
    kubectl -n kafka get kafkaclusters
    kubectl -n kafka get cruisecontroloperations
  • Complete the installation of the IIoT Core platform components and IIoT Core Services.
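
As referenced in the first item above, the following minimal sketch (assumes kubectl access to the cluster) checks that none of the Kafka custom resources remain:

    # Warn if any leftover Kafka custom resources are found in the kafka namespace.
    for kind in kafkausers kafkatopics kafkaclusters cruisecontroloperations; do
      if kubectl -n kafka get "$kind" --no-headers 2>/dev/null | grep -q .; then
        echo "WARNING: leftover $kind found in the kafka namespace"
      fi
    done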

Procedure

  1. Log in as root user on the installation node.

  2. Navigate to the IIoT Core Services installation directory where the Kafka installation script is located:

    cd <iiot-core-installer-dir>
  3. Run the Kafka installer:

    ./iiot_kafka_install.sh
  4. (Optional) Verify that Kafka has been properly installed by running the following commands:

    kubectl -n kafka get pods 
    kubectl -n zookeeper get pods
    Verify that the Kafka pods (kafka cluster, cruise control, kafka-operator) and the Zookeeper pods (zookeeper, zookeeper-operator) are running successfully.
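
    As an optional convenience, the following sketch waits until every pod in both namespaces reports Ready instead of polling manually (assumes the kubectl wait subcommand, available in v1.23):

    # Block until all Kafka and Zookeeper pods are Ready, or time out after 5 minutes.
    kubectl -n kafka wait --for=condition=Ready pod --all --timeout=300s
    kubectl -n zookeeper wait --for=condition=Ready pod --all --timeout=300s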

Results

Kafka is installed and ready to use.

Shut down a Kubernetes node in a three-node cluster

You can perform a graceful shutdown of a cluster node for the purpose of node maintenance, troubleshooting, or other interventions.

Procedure

  1. Shut down and drain the cluster node by running the following command:

    kubectl drain <nodename> --ignore-daemonsets --delete-local-data
    Note: Draining marks the node unschedulable and safely evicts or deletes all pods except mirror pods in preparation for maintenance. The drain command waits for graceful termination; do not operate on the node until the command completes.
  2. Wait for the drain to complete.

    If a pod is stuck in the Init state even after waiting eight minutes, delete the pod by running the following command, and then wait a few minutes for the pod to come back online:

    kubectl delete pod -n <namespace> <podname>

    If a pod is stuck in the Terminating state after the node is drained, force delete the pod by running the following command:

    kubectl delete pod -n <namespace> <podname> --grace-period=0 --force
  3. Confirm that all pods are running on the other two nodes by running the following command:

    kubectl get pods --all-namespaces -o wide

    Wait eight minutes for a pod to be up and running after it has been rescheduled to another node.

  4. Verify volume attachments and ensure that there are no volume attachments left on the drained node by running the following command:

    kubectl get volumeattachments -o custom-columns=VANAME:metadata.name,NODENAME:spec.nodeName

    Sample output:

    VANAME                                                                NODENAME
    csi-0232c4c79c3205b45c15eb2a60e61878df9ef6e546a8d98c7fc2c49619c2af7d  NodeB
    csi-3fe0b6b87271201ad9b4f065a49894ac3ee5c8ed67f17ad2766177d58d5092d7  NodeC
    csi-4bb22d7f2fcf9f59faba8560cbc37384127bcab09f381dda8ea65f31675a34b7  NodeB
    csi-6a02010d32147f167126f16b1baf8f56fff447df29b6446820cb443fb42199af  NodeA
  5. If you still see volume attachments with NODENAME = NodeA (the drained node), delete the volumeattachment with the following command:

    kubectl delete volumeattachments csi-xxxx
  6. Repeat step 5 until no volume attachments are left on the node you drained, as shown in the sketch below.
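
    As referenced in step 6, the following minimal sketch (NodeA is a placeholder for the drained node's name) deletes every remaining volume attachment in one pass:

    # Delete all VolumeAttachments still bound to the drained node.
    NODE=NodeA
    kubectl get volumeattachments --no-headers \
      -o custom-columns=VANAME:metadata.name,NODENAME:spec.nodeName \
      | awk -v node="$NODE" '$2 == node {print $1}' \
      | xargs -r kubectl delete volumeattachments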

  7. Verify that there is nothing in the multipath directory by running the following command on NodeA (the drained node):

    multipath -ll
  8. Put NodeA (the drained node) back in service by marking it schedulable again with the following command:

    kubectl uncordon <nodename>

Upgrade IIoT Core Services from 5.0 to 5.1

Use the following procedures to upgrade an existing installation of IIoT Core Services from v5.0 to v5.1.

The upgrade carries database assets over as they are; no data changes or migrations are performed.

Prepare cluster nodes for 2.4.1 platform components upgrade

You must complete the following steps for each node in the Kubernetes cluster so that you can upgrade IIoT Core Services platform components to v2.4.1. IIoT Core Services platform components are also referred to as Foundry.

Before you begin

Before upgrading, it is a best practice to stop any ingress from the data pipeline into IIoT Core Services.

Procedure

  1. Log in as a root user to a Kubernetes node in the cluster that is being upgraded.

  2. Point the kubeconfig file on the Kubernetes node at the node's IP address and verify access by running the following commands:

    sed -i -e 's/localhost/<kubernetes_node_ip>/' ~/.kube/config
    kubectl get nodes -o wide
  3. Copy the Core-DTR CA into the truststore of the Kubernetes node operating system by running the following commands:

    cp /etc/docker/certs.d/<load-balancer-address>\:32400/ca.crt /etc/pki/ca-trust/source/anchors/<load-balancer-address>-32400-ca.crt
    update-ca-trust
    trust list | grep hitachi

Next steps

Prepare installer VM for 2.4.1 platform components upgrade

Prepare the installer VM so that you can upgrade IIoT Core Services platform components to v2.4.1.

Before you begin

Verify that the installer VM has the required software installed on it, as described in the Installation VM prerequisites section of IIoT Core Services prerequisites.

Procedure

  1. Prepare passwordless SSH login from the installer VM to the Kubernetes nodes by generating a key pair on the installer VM and then copying it to each node with the following commands:

    ssh-keygen
    ssh-copy-id root@<kubernetes_node_ip>
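    If you have several nodes, a loop such as the following sketch (the node IP addresses are hypothetical) copies the key to each of them:

    # Generate the key pair once on the installer VM, then copy it to every node.
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    for ip in 10.0.0.11 10.0.0.12 10.0.0.13; do   # hypothetical node IPs
      ssh-copy-id "root@${ip}"
    done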
  2. Download the kubeconfig file that you modified for the upgrade from a node that contains the file by using the following commands:

    scp root@<kubernetes_node_ip>:~/.kube/config ~/.kube/config
    kubectl get nodes -o wide
    helm list -A
    Note: For information about modifying the kubeconfig file, see Prepare cluster nodes for 2.4.1 platform components upgrade.
  3. Download the Core-DTR CA cert to the installer VM by using the following commands:

    mkdir -p /etc/docker/certs.d/<FQDN>\:32400/
    scp -r root@<kubernetes_node_ip>:/etc/docker/certs.d/<FQDN>\:32400/ /etc/docker/certs.d/
  4. Test the Docker connection to the Core-DTR by logging in to the Core-DTR with the following command:

    docker login <FQDN>:32400 -u <user name> -p <password>

Next steps

Prepare to upgrade platform components to v2.4.1 by performing one of the following procedures:
  • Prepare to upgrade platform components to 2.4.1 in on-premises cluster
  • Prepare to upgrade platform components to 2.4.1 in GKE cluster

Prepare to upgrade platform components to 2.4.1 in on-premises cluster

Prepare to upgrade IIoT Core Services platform components to v2.4.1 by downloading the required files, verifying versions of required applications, and running commands to perform various tasks.

Before you begin

Verify that the following requirements are met:
  • The installer VM has the required software described in the Installation VM prerequisites section of IIoT Core Services prerequisites.
  • IIoT Core Services platform components v2.3 is installed on the cluster.
  • IIoT Core Services v5.0 is installed.

Procedure

  1. Log in to the installer VM as a root user.

  2. On a node in the cluster, verify the version of istioctl by running the following command:

    istioctl version
  3. Verify the versions of the following components and images by running the following commands:

    kubectl get sa -A | grep istio
    kubectl describe deployment admin-app -n hiota | grep Image
    kubectl describe deployment keycloakoperator -n hiota | grep Image
    kubectl describe deployment istiod -n istio-system | grep Image
    kubectl describe deployment cert-manager -n cert-manager | grep Image
  4. Verify that the installer VM is pointing to the cluster that is being upgraded by running the following command:

    kubectl get nodes -o wide
    Note: If you do not see the correct nodes, the installer VM is not correctly configured to point to the cluster. For instructions on configuring the installer VM, see Prepare installer VM for 2.4.1 platform components upgrade.
  5. Go to https://support.pentaho.com, download the Foundry-Control-Plane-2.4.1.tgz file and save the file to the directory where you want to install the upgrade.

    Note: You must perform the installation from a directory that is at least two levels below the root level.
  6. Untar the Foundry-Control-Plane-2.4.1.tgz file by running the following commands:

    mkdir Foundry-Control-Plane-2.4.1
    tar xvfz Foundry-Control-Plane-2.4.1.tgz -C ./Foundry-Control-Plane-2.4.1
  7. To reduce upgrade time, remove istio-injection: enabled from the hiota namespace by running the following command:

    kubectl edit ns hiota
    Note: If you do not remove istio-injection: enabled from the hiota namespace, the upgrade might take a very long time and can result in a timeout error, because all -sso-gatekeeper pods are restarted when you run the upgrade command, ./upgrade-cluster-services.sh.
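    If you prefer a non-interactive alternative to kubectl edit, the following sketch removes the label now and restores it after the upgrade (the restore corresponds to step 9):

    # Remove the istio-injection label from the hiota namespace.
    kubectl label namespace hiota istio-injection-
    # After the upgrade completes, re-enable sidecar injection.
    kubectl label namespace hiota istio-injection=enabled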
  8. Upgrade cluster services by running the following command with a root user name and password:

    ./upgrade-cluster-services.sh -w service-type=NodePort -u <user name> -p <password>
    Note: Upgrading cluster services might take a long time.
  9. After the upgrade completes, add istio-injection: enabled back to the hiota namespace by running the following command:

    kubectl edit ns hiota
  10. Apply the Kubernetes custom resource definitions by running the following command:

    ./apply-crds.sh -r <FQDN>:32400 -u <user name> -p <password> --insecure
  11. Upload the new control plane charts and images by running the following command:

    ./upload-solutions.sh -C /<filepath>/Foundry-Control-Plane-2.4.1/charts/ -I /<filepath>/Foundry-Control-Plane-2.4.1/images/ -n hiota
  12. Patch the istiod service account in the istio-system namespace to add imagePullSecrets by running the following command:

    kubectl patch sa istiod -n istio-system -p '{"imagePullSecrets":[{"name":"istio-regcred"}]}'
  13. Obtain the IIoT Core Services platform components user name and password by running the following commands:

    echo $(kubectl get keycloakusers -n hiota keycloak-user -o jsonpath='{.spec.user.username}')
    echo $(kubectl get keycloakusers -n hiota keycloak-user -o jsonpath='{.spec.user.credentials[0].value}')

Results

You are now prepared to upgrade the platform components to v2.4.1.

Next steps

Prepare to upgrade platform components to 2.4.1 in GKE cluster

Prepare to upgrade IIoT Core Services platform components to v2.4.1 by downloading the required files, verifying versions of required applications, and running commands to perform various tasks.

Before you begin

Verify that the following requirements are met:
  • The installer VM has the required software described in the Installation VM prerequisites section of IIoT Core Services prerequisites.
  • IIoT Core Services platform components v2.3 is installed on the cluster.
  • IIoT Core Services v5.0 is installed.

Procedure

  1. Log in to the installer VM as a root user.

  2. Verify that the installer VM is pointing to the cluster that is being upgraded by running the following command:

    kubectl get nodes -o wide
    Note: If you do not see the correct nodes, the installer VM is not correctly configured to point to the cluster. For instructions on configuring the installer VM, see Prepare installer VM for 2.4.1 platform components upgrade.
  3. Go to https://support.pentaho.com, download the Foundry-Control-Plane-2.4.1.tgz file and save the file to the directory where you want to install the upgrade.

    Note: You must perform the installation from a directory that is at least two levels below the root level.
  4. Untar the Foundry-Control-Plane-2.4.1.tgz file by running the following commands:

    mkdir Foundry-Control-Plane-2.4.1
    tar xvfz Foundry-Control-Plane-2.4.1.tgz -C ./Foundry-Control-Plane-2.4.1
  5. Obtain the access token for the GKE cluster by running the following command:

    gcloud auth print-access-token
  6. To reduce upgrade time, remove istio-injection: enabled from the hiota namespace by running the following command:

    kubectl edit ns hiota
    Note: If you do not remove istio-injection: enabled from the hiota namespace, the upgrade might take a very long time and can result in a timeout error, because all -sso-gatekeeper pods are restarted when you run the upgrade command, ./upgrade-cluster-services.sh.
  7. Upgrade cluster services by running the following command, specifying oauth2accesstoken as the user name and the gcloud access token as the password:

    ./upgrade-cluster-services.sh -w service-type=NodePort -u oauth2accesstoken -p "$(gcloud auth print-access-token)"
    Note: Use double quotes around $(gcloud auth print-access-token) so that the shell expands the token; single quotes would pass the literal string instead. Upgrading cluster services might take a long time.
  8. After the upgrade completes, add istio-injection: enabled back to the hiota namespace by running the following command:

    kubectl edit ns hiota
  9. Apply the Kubernetes custom resource definitions by running the following command:

    ./apply-crds.sh -r <FQDN>:32400 -u <user name> -p <password> --insecure
  10. Upload the new control plane charts and images by running the following command:

    ./upload-solutions.sh -C /<filepath>/Foundry-Control-Plane-2.4.1/charts/ -I /<filepath>/Foundry-Control-Plane-2.4.1/images/ -n hiota
  11. Patch the istiod service account in the istio-system namespace to add imagePullSecrets by running the following command:

    kubectl patch sa istiod -n istio-system -p '{"imagePullSecrets":[{"name":"istio-regcred"}]}'

Results

You are now prepared to upgrade the platform components to v2.4.1.

Next steps

Upgrade platform components to 2.4.1

Upgrade IIoT Core Services platform components to v2.4.1 so that you can upgrade IIoT Core Services from v5.0 to v5.1.

Before you begin

Prepare to upgrade platform components to v2.4.1 by performing one of the following procedures:
  • Prepare to upgrade platform components to 2.4.1 in on-premises cluster
  • Prepare to upgrade platform components to 2.4.1 in GKE cluster

Procedure

  1. Go to the Solution Management site at https://<FQDN>:30443/hiota/hscp-hiota/solution-control-plane/, and log in by using the IIoT Core Services platform components user name and password.

  2. In the Solution Management window, click the Solution Control Plane box.

    The solution control plane window opens.
  3. In the Installed tab of the solution control plane window, click the action menu icon, and then click Upgrade.

    The Upgrade Solution window opens.
  4. In the Upgrade Solution window, click the Upgrade Version list, select 2.4.1, and then click Confirm.

  5. Click Upgrade.

    Note: The upgrade takes approximately 20 minutes to complete. If there are errors during the upgrade, you can roll back to the previous version by clicking the action menu icon and selecting a previous version.
  6. After the upgrade completes, verify that the 2.4.1 upgrade was applied by navigating back to the Solution Management site at https://<FQDN>:30443/hiota/hscp-hiota/solution-control-plane/.

    In the Solution Management window, the checkmark on the Solution Control Plane tile turns green, indicating that the 2.4.1 upgrade was applied. The release version updates to 2.4.1. You are now ready to upgrade IIoT Core Services from v5.0 to v5.1.

Next steps

Upgrade IIoT Core Services from 5.0 to 5.1

Upgrade an existing installation of IIoT Core Services from v5.0 to v5.1.

Before you begin

  • Back up the configuration.yaml file.
  • Verify that backups are available for the databases.
  • Download the latest version of the IIoT Core Services v5.1.0 installer and Docker images from https://support.pentaho.com.
  • Verify that the size of the backup data matches the size of the original data.

Procedure

  1. Back up the current installation configuration by navigating to <iiot_installation_directory>/iiot-installer-release-5.1.0/ and running the following command:

    kubectl get secret -n hiota hiota-installation-configuration-values-secret -o=jsonpath="{.data.configuration\.yaml}" | base64 --decode  > configuration.yaml.bak
  2. Log in to the installation node as a root user.

  3. Untar the IIoT Core Services installer and Docker images by running the following commands:

    tar -xvf iiot-installer-release-5.1.0.tgz
    tar -xvf iiot-docker-images-release-5.1.0.tgz
  4. Navigate to the new directory for the IIoT Core Services installer and Docker images by running the following command:

    cd iiot-installer-release-5.1.0
  5. Run the following commands to install the required software libraries:

    export PATH=$PATH:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
    ln -sf /usr/bin/python3 /usr/bin/python
    pip3 install -r requirements.txt
  6. To avoid certificate-related issues, run the following pre-update script:

    ./iiot_pre_update.sh <FQDN>
  7. Start the IIoT Core Services v5.1.0 update procedure by running the following command:

    ./iiot_update.sh
  8. Manually upgrade Kafka by running the following command:

    ./iiot_kafka_upgrade.sh

Results

IIoT Core Services is successfully upgraded to v5.1.

Next steps

Log in to the IIoT Core Services UI and perform the following verifications:
  • Verify that the IIoT Core Services version is v5.1.
  • Verify that the assets, routes, gateway, and devices appear correctly in the UI.

Uninstall IIoT Core Services

You can uninstall IIoT Core Services from the master node.

Before you begin

If IIoT Core Services is installed on vCenter vSphere, do not delete the VM before uninstalling.

Procedure

  1. Log in as a root user on the installation node.

  2. Navigate to the product installation directory:

    cd iiot-installer-release-5.1.0
  3. Run the uninstall script:

    ./iiot_uninstall.sh
  4. Select Y(es) to uninstall or N(o) to cancel.

    The uninstall script completes.
  5. Delete the folder iiot-installer-release-5.1.0.

Uninstall the platform components

After you have uninstalled IIoT Core Services, you can uninstall the IIoT Core platform components.

Procedure

  1. Log in as a root user on the installation node.

  2. Navigate to the product installation directory:

    cd <installation_dir>/Foundry-Control-Plane-2.4.1/bin
  3. Run the uninstall scripts:

    ./uninstall-control-plane.sh -n hiota
    ./uninstall-control-plane.sh -F
  4. Select Y(es) to uninstall or N(o) to cancel.

    The uninstall script completes.