Install and configure IIoT Core Services
This chapter describes important preparatory steps before installing IIoT Core Services, the installation process itself, and the necessary post-installation tasks.
Preparing for IIoT Core Services installation
Before you begin installing IIoT Core Services, you should review all of the information in this chapter, gather the required information, and complete any necessary installations.
Installation node requirements
As a best practice, install IIoT Core Services from a small VM node outside the cluster but within the same network environment.
The following minimum requirements apply to the VM installation node.
Hardware | Specifications |
CPU | Intel Atom or equivalent processor, 4 cores |
Memory | 16 GB |
Disk space | 200 GB |
IIoT Core Services supports the following operating systems on the VM installation node:
Software | Version |
Red Hat Enterprise Linux (RHEL) | 8.4 |
IIoT Core Services system requirements
IIoT Core Services is designed to be installed in a cluster with a minimum of three nodes.
The following table lists the minimum requirements for each of the cluster nodes (without Machine learning service):
Hardware | Specifications |
Number of nodes | 3 |
CPU | 16 vCore CPU per node. Example: 2 Intel Xeon Scalable E5-2600 v5 or equivalent AMD processors, 64-bit, 8 cores |
Memory | 16 GB per node |
Disk space | 512 GB per node |
For IIoT Core with Machine learning service, the following minimum requirements apply:
Hardware | Specifications |
Number of nodes | 5 |
CPU | 32 vCore CPU per node. Example: 2 Intel Xeon Silver 4110 CPUs, 8 cores @ 2.10 GHz, 16 threads, or a higher-performance CPU |
Memory | 128 GB per node |
Disk space | 2 TB per node. Minimum PVC size for the whole cluster: 4 TB |
IIoT Core Services supports the following operating system for cluster nodes:
Software | Version |
Red Hat Enterprise Linux (RHEL) | 8.4 |
Installation prerequisites
Observe the following prerequisites before installing IIoT Core Services. These are global prerequisites that apply to all IIoT Core Services components, including the platform component.
IIoT Core Services prerequisites
To install IIoT Core Services, complete the following prerequisites.
- Restart the nodes before installing Kubernetes. See the Kubernetes documentation for more information.
- Install and configure Kubernetes in a three-node cluster (five nodes recommended for Machine learning service).
- During installation, set all nodes as master nodes for redundancy in case a master node fails.
- Use the -n hiota command option when installing the IIoT Core Services platform components. The platform components must be installed in the same hiota namespace where IIoT Core Services is installed.
- You must have an FQDN for the cluster.
The following software must be installed on the installation VM that is used when installing IIoT Core Services:
Software | Version |
Python | 3.6.8 |
Kubectl | v1.23 |
OpenSSL | N/A |
Helm | 3.6.3 |
Docker | 20.10.21 |
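Before proceeding, you can confirm that these tools are present on the installation VM by printing their versions (a quick sanity check; output formats vary by version):
python3 --version
kubectl version --client
openssl version
helm version --short
docker --version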
Observe the following specific prerequisites before installing the IIoT Core Services components.
Component | Requirement |
Kubernetes |
A secured Kubernetes system (v1.23) with a kubeconfig file for API server access. In addition to installing Kubernetes, optionally install and configure a Kubernetes dashboard. |
Default storage class | To maximize solution portability, the Kubernetes system must declare a default storage class. To verify that your Kubernetes cluster declares a default storage class, run kubectl get storageclass, as shown in the example after this table. |
Storage plugin | Install a storage plugin with the following specifications. Google Kubernetes Engine (GKE): Storage class: GKE standard. Follow the instructions for creating a Kubernetes cluster using GKE in the Google documentation on the Kubernetes engine. On-premises: Based on what best suits your hardware environment, choose one of the following options. Both options have been tested with IIoT Core Services 5.1.0: |
Load balancer | Set up a load balancer to forward requests to the Kubernetes cluster node for the following ports: |
Registry requirements | See Registry requirements and Example of how to set up a Docker registry. |
nfs-utils | Only applies to on-premises installations of IIoT Core Services when ML Services is selected as an optional installation: the nfs-utils package must be installed on the cluster nodes. |
Databases | The following database versions are supported with the current version of IIoT Core: |
Kafka | You must have a wildcard DNS record configured for accessing kafka-cluster-0-external.kafka.FQDN and kafka-cluster-1-external.kafka.FQDN. |
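For the default storage class requirement above, a minimal check is to list the storage classes and look for the (default) marker next to one class name (the class name itself depends on your storage plugin):
kubectl get storageclass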
Registry requirements
IIoT Core Services requires a registry for container images that is OCI-compliant and has an SSL certificate.
When deploying both cluster services and the control plane, specify the fully qualified domain name (FQDN) of this registry, either using the -r argument or the installer configuration file. The value you specify needs to include both the host and port for your registry. For example:
-r myregistry.example.com:6000
If your registry is available on port 443, you don't need to specify the port number.
If you are using a registry that supports multitenancy, you also need to include the specific location within the registry that you want to use. For example, if you are using Harbor, include the name of the Harbor project you want to use:
-r myharbor.example.com:6000/my_project
If the registry you are using is insecure (that is, it has a self-signed or otherwise untrusted SSL certificate), you must configure the Docker daemon on the installation node to allow the insecure registry. This configuration is typically done by adding the registry to the insecure-registries section of /etc/docker/daemon.json and restarting the Docker service. Also configure the container runtime on the cluster to allow the insecure registry, and specify the -I flag for both install-control-plane.sh and install-cluster-services.sh.
As a best practice, use a trusted, CA-signed certificate.
For information on setting up a non-production registry that meets requirements, see the following example.
Example of how to set up a Docker registry
This section walks you through the process of setting up a non-production, insecure registry that IIoT Core Services can use with Docker Registry.
A non-production (development-only) environment requires an OCI-compliant registry that uses HTTPS. To authenticate with the registry, use a username and password, not an auth plugin or credential helper.
Before you begin
The following must be set up before running the procedure:
- Docker
- OpenSSL command line
Procedure
Generate a self-signed OpenSSL certificate.
mkdir -p certs
openssl req \
  -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
  -x509 -days 365 -out certs/domain.crt -subj "/CN=$(hostname -f)"
Start the Docker registry on port 5000 by passing the self-signed OpenSSL certificate to the Docker registry.
docker run -d -v "$(pwd)"/certs:/certs \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  -p 5000:5000 \
  --name registry \
  --restart unless-stopped \
  registry:2
Note: For additional options, see https://docs.docker.com/registry/deploying/.
Ensure that the registry is included in the list of insecure registries in your container runtime. Make sure to include <registry_hostname>:5000 in insecure-registries in /etc/docker/daemon.json:
- Create a /etc/docker/daemon.json file, if you don't already have one.
- In this file, add <registry_hostname>:5000 to the list of insecure registries:
{
  "insecure-registries" : [ "<registry_hostname>:5000" ]
}
Restart Docker for the configuration changes to take effect:
systemctl restart docker
Run docker info and verify that the list of Insecure Registries is correct. For example:
Client:
 Debug Mode: false
Server:
 Containers: 150
  Running: 67
  Paused: 0
  Stopped: 83
 Images: 217
 Server Version: 19.03.5-ce
 ...
 Insecure Registries:
  <registry_hostname>:5000
  127.0.0.0/8
 Live Restore Enabled: false
Test the registry by pulling, tagging, and pushing an image:
docker pull ubuntu
docker image tag ubuntu $(hostname -f):5000/my-ubuntu
docker push $(hostname -f):5000/my-ubuntu
If you see output similar to the following, your registry is working correctly:
Using default tag: latest
latest: Pulling from library/ubuntu
423ae2b273f4: Pull complete
de83a2304fa1: Pull complete
f9a83bce3af0: Pull complete
b6b53be908de: Pull complete
Digest: sha256:04d48df82c938587820d7b6006f5071dbbffceb7ca01d2814f81857c631d44df
Status: Downloaded newer image for ubuntu:latest
docker.io/library/ubuntu:latest
The push refers to repository [<registry_hostname>:5000/my-ubuntu]
1852b2300972: Pushed
03c9b9f537a4: Pushed
8c98131d2d1d: Pushed
cc4590d6a718: Pushed
latest: digest: sha256:0925d086715714114c1988f7c947db94064fd385e171a63c07730f1fa014e6f9 size: 1152
You can also list the contents of the registry using the following commands:
$ curl https://$(hostname -f):5000/v2/_catalog -k
{"repositories":["my-ubuntu"]}
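You can also list the tags for a specific repository using the Docker Registry HTTP API v2 tags endpoint, shown here against the my-ubuntu image pushed above:
$ curl https://$(hostname -f):5000/v2/my-ubuntu/tags/list -k
{"name":"my-ubuntu","tags":["latest"]}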
Results
Next steps
To remove this registry, run the following command:
docker stop registry; docker rm registry
Port configuration requirements (Core Services)
To use IIoT Core Services, you must provide access to the ports used by the system services and databases.
Message brokers, databases, and external-facing services
IIoT Core Services uses a combination of message brokers and RESTful services. Message brokers establish communication between applications and infrastructure for message queues and topics.
Check the following message brokers, databases, and other services to determine if the corresponding ports need to be open to run your IIoT Core Services. The needed ports must be open both on the load balancer and on each node.
Service | Description | Default port | Optional install | Default login | Links |
AMQP - RabbitMQ | Messaging over AMQP | 30671 | No | admin | Documentation: https://www.rabbitmq.com/documentation.html |
Service | Description | Default port | Optional install | Default login | Links |
CouchDB | Unstructured data database access | 30084 | No | admin | Documentation: http://docs.couchdb.org/en/stable/ |
MongoDB | Unstructured data database access | 30017 | Yes | admin | Documentation: https://www.mongodb.com/docs/ |
InfluxDB | Time-series data (historical data) database access | 30086 | No | admin | Documentation: https://docs.influxdata.com/influxdb |
MinIO | Object storage database access | 31000 | No | admin | Documentation: https://docs.min.io/docs/ |
Service | Description | Default port | Optional install | Default login | Links |
Hiota Ingress REST | HTTPS data ingestion to IIoT Core | 30443 | No | N/A | N/A |
Hiota Passport | CouchDB Passport: API access to Couch data | 30224 | No | N/A | N/A |
| InfluxDB: Time-series data (historical data) access | 30223 | No | N/A | N/A |
| PostgreSQL: Structured data access | 30228 | No | N/A | N/A |
Hiota Product APIs | Access to management plane APIs | 30443 | No | N/A | Documentation: See Management plane REST API |
OAuth-Helper | Simple OAuth handling | 30303 | No | N/A | N/A |
Internal core services
Because the following ports are used by internal IIoT Core Services applications, verify that these ports are open to external access for the assigned IIoT Core Services to work properly.
Service | Description | Default port | Links |
Kafka | Kafka messaging support | 30091, 30092 | Documentation: https://kafka.apache.org/intro |
RabbitMQ (https-UI) | UI for troubleshooting | 31671 | Documentation: https://www.rabbitmq.com/documentation.html |
MQTT - RabbitMQ | Messaging over MQTT for gateway devices | 30884 | Documentation: https://www.rabbitmq.com/documentation.html |
Service | Description | Default port | Default login | Links |
ArangoDB | ArangoDB multi-model database system | 30529 | admin | Documentation: https://www.arangodb.com/documentation/ |
CouchDB (https-UI) | UI for troubleshooting | 30984 | admin | Documentation: http://docs.couchdb.org/en/stable/ |
Service | Description | Default port | Links |
Docker Trusted Registry | Private Docker trusted registry that stores and manages Docker images for gateway services or user applications that run on gateways | 32500 | Documentation: https://docs.docker.com/ee/dtr/ |
Hiota Alert Manager | Enables alert management | 30443 | N/A |
Hiota Asset | Enables asset and gateway management | 30443 | N/A |
Hiota Kube Resource | Management wrapper API for Kubernetes resources for activities such as deploying software and configurations to gateways | 30443 | N/A |
Hiota Manager (gRPC server) | gRPC server for internal connections | 30999 | N/A |
Hiota Manager (REST server) | REST server for hiota-agent | 30998 | N/A |
Hiota OI Manager | Open Image Manager enables upload of software on the user interface and provides statuses. | 30800 | N/A |
OAuth-Helper | Simple OAuth handling | 30303 | N/A |
Hiota Registry | Access to core and gateway route endpoints and statuses as well as core service configurations | 30443 | N/A |
Hiota User Preferences | User preferences for notifications | 30231 | N/A |
Useful commands for installation node VM
When installing IIoT Core Services on GKE using the recommended separate installation node, the following commands are helpful:
When to use | Commands |
Run these commands from the terminal |
|
Commands to access the VM instance |
|
Commands to copy files to and from Google storage | gsutil is a Python application that lets you access Google Cloud Storage from the command line. For example, you can use gsutil to copy files to and from buckets; see the sketch after this table. |
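A typical gsutil round trip for the installer tarball might look like the following sketch; the bucket name is a placeholder you must replace:
gsutil cp iiot-installer-release-5.1.0.tgz gs://<your-bucket>/
gsutil ls gs://<your-bucket>/
gsutil cp gs://<your-bucket>/iiot-installer-release-5.1.0.tgz .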
Machine learning service resource requirements
Machine learning service is a range of services that offer machine learning tools as part of cloud computing services.
You can activate Machine learning service during the IIoT Core Services installation process to get started with machine learning.
For all cluster nodes, if you elect to enable Machine learning service during the IIoT Core Services installation process, note the following resource requirements:
Requirements | Specifications |
Minimum memory and processor requirements |
|
Disk space requirements |
|
Installation worksheets
Use the following tools to assist you in the installation process.
Installation checklist
Use this installation checklist prior to the installation of the platform components and IIoT Core Services.
Category | Component | Status |
Installer machine or VM | OS: RHEL 8.4; Docker: v20.10.12; Helm: v3.6.3; Kubernetes: v1.23.9 | Yes/No |
Kubernetes nodes (on premises); number of nodes: 3 or more | OS: RHEL 8.4; Enable/Disable firewall | Yes/No |
A valid FQDN | N/A | Yes/No |
A load balancer | N/A | Yes/No |
A private or public Docker registry (Core-DTR). Check that the Docker login from the installer VM works with the registry URL and credentials. | N/A | Yes/No |
Access the IIoT Core Services software
To download the IIoT Core Services software, go to https://support.pentaho.com and log in. The software includes the following TAR files:
- IIoT Core Services platform installation package.
- IIoT Core Services main installer script.
- (Optional) Modbus. Install the Modbus protocol after the core installation is complete.
- IIoT Core Services Docker images.
- (Optional) Machine learning service Docker images.
- (Optional) Digital Twin Beta Docker images.
- (Optional) Command Line Interface (CLI) application.
Setting up the Kubernetes cluster
IIoT Core Services can be deployed on different types of Kubernetes clusters:
IIoT Core comes with a Docker Trusted Registry (DTR) that is used to store Docker images of gateway services and user applications. See information about the Docker Trusted Registry service in Internal core services.
For memory and disk space requirements for the installation node, see Installation node requirements.
Configure a GKE cluster with user hosted GCR
You can install a Google Kubernetes Engine (GKE) cluster by creating your own hosted Google Container Registry (GCR).
As a best practice, use an installer VM that is outside the cluster but can access the cluster nodes.
To log in to the GCR repository, use the JSON file reference to get the token:
cat <read-write-json-token> | docker login -u _json_key --password-stdin <gcr-url>
cat <read-write-json-token> | HELM_EXPERIMENTAL_OCI=1 helm registry login <gcr-url> -u _json_key --password-stdin
Procedure
Download the following tarballs to a node that can connect to the hosted DTR and has Docker installed:
- iiot-docker-images-release-5.1.0.tgz
- iiot-installer-release-5.1.0.tgz
- (Optional) mlservice-docker-images-1.2.0.tgz
- (Optional) aaf-docker-images-5.1.0.tgz (Digital Twin Beta)
Check the available space in the Docker partition by running df -h /var/lib/docker. The partition is needed later for loading and pushing Docker images to the registry.
Open the core software package by executing the following command:
tar xvf iiot-installer-release-5.1.0.tgz
A new directory is created: iiot-installer-release-5.1.0Open the Docker images TAR file:
tar xvf iiot-docker-images-release-5.1.0.tgz
(Optional) Open the Machine learning service images TAR file and move the untarred folder to iiot-installer-release-5.1.0/mlaas/images:
tar xvf mlservice-docker-images-1.2.0.tgz
mv mlaas/images iiot-installer-release-5.1.0/mlaas/
(Optional) Open the Digital Twin Beta Docker image file and move the untarred folder to iiot-installer-release-5.1.0/aaf/images:
tar xvf aaf-docker-images-5.1.0.tgz
mv aaf/images iiot-installer-release-5.1.0/aaf/
Push the hiota-solutions Helm chart and corresponding solution package to the IIoT Core Services DTR.
This push only needs to be done once before starting the installation process so that the Solution Control Plane can manage hiota-solutions. Otherwise, hiota-solutions will not appear on the control plane user interface even though IIoT Core Services is running.
export HELM_EXPERIMENTAL_OCI=1
helm chart save <path to IIoT Core Services 5.1.0 image>/iiot-installer-release-5.1.0/Module-4/hiota-solutions-5.1.0.tgz <core-dtr url>/hiota-solutions:5.1.0
helm registry login <core-dtr url> -u <username> -p <password>
helm chart push <core-dtr url>/hiota-solutions:5.1.0
kubectl apply -f <path to IIoT Core Services 5.1.0 image>/iiot-installer-release-5.1.0/Module-4/roles/core-services/install/files/hiota_solution_package.yaml
Obtain the hosted GCR login information (json_key file with read-write permission, for example).
Tag the Docker images and push them to the hosted registry. This push needs to be done only once. The script can also be found in the IIoT Core Services installer script image folder: iiot-installer-release-5.1.0/tag-push-docker-images.sh.
Create the GKE three-node cluster.
Note: To run Machine learning service, five nodes are recommended.
Create the installer VM on GCP with Python 3, Helm 3.6.3, and kubectl installed.
Check that you can connect to the Kubernetes cluster with the kubectl command. The rest of the IIoT Core Services installation process is performed on the installer VM.
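For example, from the installer VM (both commands should succeed, and all cluster nodes should report a Ready status):
kubectl get nodes
kubectl cluster-info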
Results
Configure a Kubernetes cluster on premises
You can run IIoT Core Services on your own Kubernetes cluster using your own Docker Trusted Registry (DTR).
As a best practice, use an installer VM that is outside the cluster but can still access the cluster nodes.
Procedure
Download the following tarballs to a node that can connect to the hosted DTR and has Docker installed.
- iiot-docker-images-release-5.1.0.tgz
- iiot-installer-release-5.1.0.tgz
- (Optional) mlservice-docker-images-1.2.0.tgz
- (Optional) aaf-docker-images-5.1.0.tgz (Digital Twin Beta)
Check the available space in the Docker partition by running df -h /var/lib/docker. The partition is needed later for loading and pushing Docker images to the registry.
Open the core software package by executing the following command:
tar xvf iiot-installer-release-5.1.0.tgz
A new directory is created: iiot-installer-release-5.1.0Open the Docker images TAR file:
tar xvf iiot-docker-images-release-5.1.0.tgz
(Optional) Open the Machine learning service images TAR file and move the untarred folder to iiot-installer-release-5.1.0/mlaas/images:
tar xvf mlservice-docker-images-1.2.0.tgz mv mlaas/images iiot-installer-release-5.1.0/mlaas/
(Optional) Open the Digital Twin Beta Docker image file and move the untarred folder to iiot-installer-release-5.1.0/aaf/images:
tar xvf aaf-docker-images-5.1.0.tgz mv aaf/images iiot-installer-release-5.1.0/aaf/
Obtain the hosted core DTR login information, such as username and password.
Tag the Docker images and push them to the hosted registry. This push needs to be done only once. The script can also be found in the IIoT Core Services installer script image folder: iiot-installer-release-5.1.0/tag-push-docker-images.sh.
Create an on-prem three-node Kubernetes cluster with a storage plugin.
Note: To run Machine learning service, five nodes are recommended.
Verify that Python 3, Helm 3.6.3, and kubectl are installed on the installation node.
Results
Installing IIoT Core Services
After the software prerequisites are met and your hosted environment is set up, you can install IIoT Core Services.
Install the platform components
This section describes how to install the IIoT Core Services platform components, also referred to as Foundry, for the purpose of installing IIoT Core Services.
Before you begin
If you are reinstalling the platform components, first uninstall any current instance of the software on your system to conserve system resources. See Uninstall the platform components.
Procedure
Log in to the installation node.
Download the platform installation package to the installation node:
Foundry-Control-Plane-2.4.1.tgz
Open the platform installation package by executing the following command:
mkdir Foundry-Control-Plane-2.4.1
tar xvf Foundry-Control-Plane-2.4.1.tgz -C ./Foundry-Control-Plane-2.4.1
Note: You must perform the installation from a directory that is at least two levels from the root level, as shown above.
Navigate to the platform software directory:
cd Foundry-Control-Plane-2.4.1
Get an access token:
gcloud auth print-access-token
To install Keycloak, manually create a file called foundry-control-plane-values.yaml, which is used with the control plane installation command, with the following contents:
keycloakoperator:
  publicPath: /auth
  configuration:
    keycloak:
      enableDockerAuthentication: true
      instances: 3
    logging:
      enabled: false
If you have credentials for the IIoT Core Services Docker Trusted Registry (DTR), try logging into the DTR from the cluster nodes to verify access.
Navigate to the bin directory:
cd Foundry-Control-Plane-2.4.1/bin
Note: You must perform the installation from a directory that is at least two levels from the root level, as in /root/Foundry-Control-Plane-2.4.1/bin.
Install the platform cluster service using the following commands:
For GKE:
./install-cluster-services.sh -r <core-dtr-url> -w service-type=NodePort -u oauth2accesstoken -p $(gcloud auth print-access-token)
For on-premises:
./install-cluster-services.sh -r <core-dtr-url> -w service-type=NodePort -u <username> -p <password>
Set the number of replicas of istiod, istio-ingressgateway, and istio-egressgateway that you want to run by scaling as follows. Example using three replicas:
kubectl scale deployment -n istio-system --replicas=3 istiod
kubectl scale deployment -n istio-system --replicas=3 istio-ingressgateway
kubectl scale deployment -n istio-system --replicas=3 istio-egressgateway
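You can confirm the new replica counts once scaling completes (a quick check; READY should report 3/3 for each deployment):
kubectl get deployment -n istio-system istiod istio-ingressgateway istio-egressgateway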
Apply Kubernetes custom resource definitions.
For GKE:
cd Foundry-Control-Plane/bin/
./apply-crds.sh -r <gcr-url> -e -u oauth2accesstoken -p $(gcloud auth print-access-token)
For on-premises:
cd Foundry-Control-Plane/bin/
./apply-crds.sh -r <core-dtr-fqdn>:[<port>] -e -u <username> -p <password> --insecure
Install the control plane.
For GKE:
cd Foundry-Control-Plane/bin/
./install-control-plane.sh -r <gcr-url> -c https://<cluster-fqdn>:30443 -n hiota -u oauth2accesstoken -p $(gcloud auth print-access-token) -v <path-to>/foundry-control-plane-values.yaml --skip_cluster_url_check
For on-premises:
cd Foundry-Control-Plane/bin/
./install-control-plane.sh -I -r <core-dtr-url> -c https://<cluster-fqdn>:30443 -n hiota -u <username> -p <password> -v <path-to>/foundry-control-plane-values.yaml
Results
Verify IIoT Core platform installation
Log in to the Solution Management UI, an administrative console for the installation, and check the platform software version.
Procedure
From the command line on the installation node, get the username and password for the Solution Management UI:
- Username:
echo $(kubectl get keycloakusers -n hiota keycloak-user -o jsonpath='{.spec.user.username}')
- Password:
echo $(kubectl get keycloakusers -n hiota keycloak-user -o jsonpath='{.spec.user.credentials[0].value}')
Log in to the Solution Management UI using the acquired credentials:
https://<cluster-fqdn>:30443/hiota/hscp-hiota/solution-control-plane/
where <cluster-fqdn> is the location where IIoT Core Services is installed.
Navigate to the Registry tab and replace the JSON password as follows:
and replace the JSON password as follows:- Get the JSON token and save it to the file gcr-token.json.
- Insert _json_key in the Username field on the Registry tab.
- Open the gcr-token.json file to obtain the token.
- Copy the token to the Password field on the Registry tab by pasting the password in a single line, and click to save.
- Click the Solutions tab, then the Installed sub-tab, to verify the platform software version.
Check for any port number conflicts for ports 30086, 30529, 30671, 30983, 30984, 30998, 31000, 32400, and 32500 by running the following command:
kubectl get service -n istio-system istio-ingressgateway
Alternatively, check one or more specific ports using the following command:
kubectl get service -A | grep -e <port number A> -e <port number B>
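For example, to check two of the listed ports explicitly:
kubectl get service -A | grep -e 30086 -e 31000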
If a record is found, edit istio-ingressgateway and alter the port number accordingly:
kubectl edit service -n istio-system istio-ingressgateway
Results
Installing IIoT Core Services in a cluster
You can install and configure IIoT Core Services in your cluster using the following instructions.
Start IIoT Core Services installation
Before you begin
- Configure the load balancer to forward traffic to IIoT Core Services.
- If you are reinstalling IIoT Core Services, first uninstall any current instance of the software on your system to conserve system resources.
- Complete all prerequisites as described in IIoT Core Services prerequisites.
- Configure a Kubernetes cluster using one of the options provided in Setting up the Kubernetes cluster. This includes downloading the installer and Docker images and extracting them on the installer node.
Procedure
Log in to the installation node.
Navigate to iiot-installer-release-5.1.0.
Install the necessary libraries:
export PATH=$PATH:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
ln -sf /usr/bin/python3 /usr/bin/python
pip3 install -r requirements.txt
Start the IIoT Core Services preinstallation and installation process by running the following script:
./iiot_install.sh
The software license terms appear.
Read through the license, pressing Enter to scroll down. At the end of the license, enter y(es) at the prompt to agree to install.
The main installer menu appears. Use this menu to Configure IIoT Core Services.
Configure IIoT Core Services
Procedure
From the main installer menu, enter 1 for Node Configuration.
From the Nodes menu, enter 1 for Add Nodes to configure the cluster.
Enter the requested information:
Field | Description |
Hostname | Enter the hostname of the node being added. |
IP address | Enter the IP address for the node. |
Role | On-premises: Type "master" for all nodes. Cloud: Type "master" for at least one node. Label the remaining nodes "worker." |
Select the option to Return to the Main Menu.
Enter 2 for Load Balancer Configuration to configure the load balancer.
Enter 1 to Add / Edit Load Balancer.
Field | Description |
FQDN | Enter the fully qualified domain name (FQDN) of the load balancer. |
Hostname | Enter the hostname of the load balancer. |
IP address | Enter the IP address of the load balancer. |
Select the option to Return to the Main Menu.
Enter 3 for Profile Configuration to configure user, deployment, and storage profiles and other settings.
In the Profile Configuration menu, enter 1 for User Profiles to create users.
Enter 1 to Add User(s) and specify the following information for each new user:
You must add at least one admin user and one user in another role to gain access to the IIoT Core Services user interface after installation.
Field | Description |
Username | Enter a username for the new user. You can use, modify, or delete the following pre-configured users as needed: Admin username: hiota (password: Change11me); Technician username: hiota_read (password: Change11me). |
Password | Enter a password for the new user. |
Email | Enter the email address for the new user. |
Role | Enter the user role: admin, technician, or operator. |
Select the option to Return to the Main Menu.
Enter 2 for Service Passwords.
For each service, either press Enter to accept the password that is generated, or type your own password and then press Enter. The passwords you create must have at least eight characters, one lowercase letter, one uppercase letter, and one number. When you create your passwords, copy and store them for easy access. If you forget the generated passwords, you can find them in the Kubernetes dashboard.
Select the option to Return to the Main Menu.
Enter 3 for Deployment Profiles and 1 for Edit Profile.
In the Edit Profile menu, select a deployment profile:
Profile | Description |
development (default) | Enter 1 to select the development environment and safely test your deployment without affecting the production environment. |
production | Enter 2 to select the production environment. |
Whether you select development or production in this menu, the minimum requirements in Preparing for IIoT Core Services installation apply. Your selection affects the number of replicas and the requests and limits for CPU and memory.
Select the option to Return to the Main Menu.
Enter 4 for Storage Profiles.
Enter 1 to Edit / Review PVC ReadWriteMode or keep the default setting:
PVC ReadWriteMode | Description |
ReadWriteOnce (default) | Mount the volume as read-write by a single node. Use this option for VMware CNS or GCR. |
ReadWriteMany | Mount the volume as read-write by many nodes. Use this option for HSPC. |
For option 2, Edit / Review Service Storage Partitions, enter the storage size for each service needed for your applications.
Select the option to Return to the Main Menu.
Enter 5 for Cloud / On Prem & DTR Configuration.
Field | Description |
Enable / Disable - Cloud / On Prem Install | Enter true to enable on-premises install or false to enable cloud install. |
View / Edit Core Services Image Registry Configuration | Enter the FQDN URL, port number, and current username and password for the core registry. This information is required even if you select Cloud. |
Select the option to Return to the Main Menu.
To enable RBAC, enter 6 for Optional Features, then Edit / Review RBAC.
The setting to install RBAC is disabled by default. Toggle it on or off by entering y(es) or n(o).
Select the Return to Main Menu option to return to the Profile Configurations menu, then again Return to Main Menu to return to the start menu.
Enter 4 for Optional Installations.
The setting to install optional services is turned off by default. Toggle on or off by entering true or false for each service:
Profile | Description |
Knative | Deploy, run, and manage serverless, cloud-native applications with Kubernetes. |
ML Service | Deploy cloud-based machine learning tools. See the table below for Machine learning service configuration options. For more information about Machine learning service and resource requirements, see Machine learning service resource requirements. |
Digital Twin Beta | Activate Digital Twin mode and the ability to add digital twin objects that are installed with IIoT Core. Note: To activate this option, you must also select the ML Service option. Note: Digital Twin Beta is currently an experimental feature in IIoT Core Services. |
The Machine learning service options include the following:
ML Service Option | Description |
NFS_Server | Specify storage size for file sharing using a Network File System (NFS) server. The default size is 9 GB. |
Model_Lifecycle | Enter true or false to enable or disable model lifecycle management. |
Model_Management | Enter true or false to enable or disable model management. |
Model_Server | Enter true or false to enable or disable a model server. |
Notebook | Enter true or false to enable or disable Jupyter Notebook. |
Ray | Enter true or false to enable or disable Ray for Machine learning service scaling. |
Select Return to Optional Installations Menu.
Enter 10 to Validate Configuration Parameters.
Return to the Main Menu and enter 10 to Exit Installer Menu.
The installer checks if all the parameters are correctly set.
Results
The configuration parameters are saved in the hiota-installation-configuration-values-secret in the hiota namespace.
Perform core post installation tasks
You can perform the following post-installation tasks to verify that the IIoT Core Services installation is successful.
Sign in to the IIoT Core Services UI and verify that the IIoT Core services are running.
Push solutions to Solution Management UI (GKE only)
For the solutions that you install with IIoT Core to display in the Solution Management UI, the corresponding Helm charts must be pushed to the GCR registry when the software is running on GKE.
Use the following template instructions to save and push each Helm chart.
Procedure
Log into the VM installation node as root.
Save each Helm chart to the registry:
helm chart save <image> <registry>/<solution-name>:<tag>
Push each Helm chart using the following template:
helm chart push <registry>/<solution-name>:<tag>
helm chart save hiota-solutions-5.1.0 us.gcr.io/<registry>/hiota-solutions:5.1.0
helm chart push us.gcr.io/<registry>/hiota-solutions:5.1.0
helm chart save lumada-ml-model-lifecycle-0.1.0-22.tgz us.gcr.io/<registry>/lumada-ml-model-lifecycle:0.1.0-22
helm chart push us.gcr.io/<registry>/lumada-ml-model-lifecycle:0.1.0-22
helm chart save lumada-ml-model-server-0.1.0-36.tgz us.gcr.io/<registry>/lumada-ml-model-server:0.1.0-36
helm chart push us.gcr.io/<registry>/lumada-ml-model-server:0.1.0-36
helm chart save lumada-ml-model-management-1.0.0-b7.tgz us.gcr.io/<registry>/lumada-ml-model-management:1.0.0-b7
helm chart push us.gcr.io/<registry>/lumada-ml-model-management:1.0.0-b7
helm chart save lumada-ml-notebook-0.2.0-76.tgz us.gcr.io/<registry>/lumada-ml-notebook:0.2.0-76
helm chart push us.gcr.io/<registry>/lumada-ml-notebook:0.2.0-76
helm chart save mlaas-ray-0.2.0-10.tgz us.gcr.io/<registry>/mlaas-ray:0.2.0-10
helm chart push us.gcr.io/<registry>/mlaas-ray:0.2.0-10
Managing certificates in IIoT Core
Obtain a CA certificate for IIoT Core Services
IIoT Core Services uses a self-signed certificate that is not trusted by operating systems, browsers, and other applications. As a result, client services that connect to IIoT Core Services do not automatically trust its certificates and must be configured with the IIoT Core Services certificate.
This section describes how to obtain the self-signed CA certificate used by IIoT Core Services. You can add this certificate to your operating system, browser, or application truststore to make them trust it.
Before you begin
The following is required to run the script:
Element | Description |
bash | Unix shell (or similar) |
curl | HTTP(S) utility |
jq | JSON string processing utility |
sed | Utility to perform basic text transformations |
Procedure
Log in to the installation node.
Navigate to the iiot-installer-release-5.1.0 directory.
Run the script with the following arguments:
appliance-ca.sh <appliance host address> <user name> [<certificate file path>]
Argument | Description |
appliance host address | The platform IP address. |
user name | The name of the admin user authorized to access the certificate. |
certificate file path | (Optional) The file path where the certificate should be saved. |
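For example, a hypothetical invocation (the host address, user name, and output path below are placeholders, not values from your environment):
appliance-ca.sh 203.0.113.10 admin ./iiot-core-ca.crt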
Enhance HTTPS security by using a hybrid certificate solution
When you access IIoT Core Services, the cluster uses self-signed certificates by default, and you may see a "Not Secure" warning message from your web browser. To remove this warning, you can opt for a hybrid certificate solution and convert the self-signed certificates to public-signed certificates.
Limitations of the hybrid certificate solution:
- If you have an IIoT Gateway enrolled, the gateway will use the Hitachi Vantara self-signed CA certificate.
- If you have Kafka installed, Kafka will use the Hitachi Vantara self-signed CA certificate.
- The certificates are not renewed automatically. You need to renew them manually before they expire.
Update the cluster using a hybrid certificate solution
To update the cluster using a hybrid certificate solution and convert the self-signed certificates to public-signed certificates, perform the following steps.
Procedure
Get the following public-signed certificates from your Certificate Authority (CA).
First certificate | Second certificate |
The first certificate has the cluster's FQDN as the CN name. Add lumada-edge.hiota.{FQDN} in the SAN. | The second certificate has lumada-edge.hiota.{FQDN} as the CN name. |
Perform the following steps after you receive the public-signed certificates from your CA.
First certificate Second certificate - Save the root CA certificate to the
ca.crt
file. - Save the private key to the
tls.key
file. - Save the signed certificate to the
tls.crt
file.
- Save the private key to the
hiota.tls.key
file. - Save the signed certificate to the
hiota.tls.crt
file.
- Save the root CA certificate to the
Copy the following files to the cluster:
ca.crt
tls.key
tls.crt
hiota.tls.key
hiota.tls.crt
It is assumed that:
- Both certificates are signed by the same CA.
- Both tls.crt and tls.key files must be in PEM format. If the files are not in PEM format, you need to convert them. If you get a file in the pkcs#12 format (.pfx), you can run the following command to convert it to PEM format and then rename it as specified above:
openssl pkcs12 -in filename.pfx -out cert.pem -nodes
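After converting, you can optionally sanity-check that the resulting files are valid PEM with openssl (the rsa check applies only if the key is an RSA key):
openssl x509 -in tls.crt -noout -subject -dates
openssl rsa -in tls.key -check -noout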
Copy all the scripts from this directory to the same directory where you copied the ca.crt, tls.key, and tls.crt files.
Update the istio-system certificates by running the following script with the FQDN of the cluster as the parameter. For example, if the FQDN of your cluster is my.example.fqdn, you can run the following script:
./update_istio_certificate.sh my.example.fqdn
Back up the certificates on your cluster.
NoteThis step is only needed when you update the certificate for the first time. Skip this step during the certificate renewal process.Run
backup_and_delete_certificate.sh
to back up the certificate files on the cluster and delete the old certificates managed by cert-manager.Copy all the YAML files as a back up. These files will be required during the certificate renewal process.
Update the certificates for all services by running the following script using the cluster FQDN as the parameter. For example, if your cluster FQDN is my.example.fqdn, you can run the following script:
./update_certificate.sh my.example.fqdn
Renew certificates using a hybrid certificate solution
In the hybrid certificate solution, the public-signed certificates are not renewed automatically by cert-manager. You need to renew them manually before the expiration date.
Perform the following steps to renew the public-signed certificates.
Procedure
Copy the YAML files that were backed up when obtaining the hybrid certificates, as stated in step 6 of Update the cluster using a hybrid certificate solution. Run backup_and_delete_certificate.sh to back up the YAML files and save them into the directory you are going to update.
Get the following public-signed certificates from your Certificate Authority (CA).
First certificate | Second certificate |
The first certificate has the cluster's FQDN as the CN name. Add lumada-edge.hiota.{FQDN} in the SAN. | The second certificate has lumada-edge.hiota.{FQDN} as the CN name. |
Perform the following steps after you receive the public-signed certificates from your CA.
First certificate Second certificate - Save the root CA certificate to the
ca.crt
file. - Save the private key to the
tls.key
file. - Save the signed certificate to the
tls.crt
file.
- Save the private key to the
hiota.tls.key
file. - Save the signed certificate to the
hiota.tls.crt
file.
- Save the root CA certificate to the
Copy the following files to the cluster:
ca.crt
tls.key
tls.crt
hiota.tls.key
hiota.tls.crt
It is assumed that:
- Both certificates are signed by the same CA.
- Both tls.crt and tls.key must be in PEM format. If the files are not in PEM format, they must be converted. If you get a file in the pkcs#12 format (.pfx), you can run the following command to convert it to PEM format and then rename it as specified above:
openssl pkcs12 -in filename.pfx -out cert.pem -nodes
Copy all the scripts from this directory to the same directory where you copied the ca.crt, tls.key, and tls.crt files.
Update the istio-system certificates by running the following script using the cluster FQDN as the parameter. For example, if the cluster FQDN is my.example.fqdn, you can run the following script:
./update_istio_certificate.sh my.example.fqdn
Update the certificates for all services by running the following script using the cluster FQDN as the parameter. For example, if the cluster FQDN is my.example.fqdn, you can run the following script:
./update_certificate.sh my.example.fqdn
Obtain SSL certificate
You can obtain an SSL certificate for debugging purposes. To get an SSL certificate, use the following instructions.
Function | Command |
Get the ROOT certificate |
|
Get the GW certificate (MQTT) |
|
Get the Kafka certificate |
|
Get the AMQP JKS certificate |
|
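Where the exact commands are unavailable, a generic way to capture a serving certificate in PEM format is openssl s_client, the same technique used later for the Data Catalog integration. The host and port below are assumptions (30884 is the MQTT port from the port tables); adjust them for the certificate you need:
openssl s_client -showcerts -connect <cluster-fqdn>:30884 </dev/null 2>/dev/null | openssl x509 -outform PEM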
Check SSL certificate expiration
You need to check the SSL certificate expiration status to ensure that the certificate is correctly installed, valid, and trusted, and doesn't give any errors. Use the following process to check the SSL certificate expiration status.
Procedure
Retrieve the certificates list saved as a Kubernetes secret in the specified namespace. Run the following command to get the list:
kubectl get -n ${namespace} secret | grep tls
For example, to get certificate list saved as a Kubernetes secret in the hiota namespace, run the following command:
kubectl get -n hiota secret | grep tls
After you find out the secret name of the certificate from the list, run the following command to check the certificate expiration date:
kubectl -n hiota get secret ${secret_name} -o=jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -dates
For example, if you want to check the influxdb certificate expiration, run the following command:
kubectl -n hiota get secret hiota-influxdb-secrets -o=jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -dates
The certificate expiration date will be displayed in the following format:
notBefore=Dec 9 23:39:21 2022 GMT
notAfter=Dec 4 23:39:21 2023 GMT
You can check whether the certificate is expired by using the notBefore and notAfter dates.
Sign into the IIoT Core Services UI
You can access the IIoT Core Services UI by signing into its web-based user interface.
The first time you sign in you need the credentials provided by your administrator.
Procedure
Navigate to https://lumada-edge.hiota.<cluster_fqdn>:30443
where <cluster_fqdn> is the fully qualified domain name (FQDN) for the cluster (usually that of the load balancer).
Enter Username and Password.
If you get an Access Denied message, you do not have the required permissions. Contact your administrator to set up access.
Results
Configuring Modbus
IIoT Core Services includes a Modbus adapter that can be configured for TCP communication between a IIoT Gateway and IIoT Core Services.
Install Modbus
Procedure
Navigate to the directory where the Modbus installation file is located and extract the contents of the TAR file.
tar xvf hiota-modbus-lib-5.1.0.tgz
A new directory is created: Hiota-Modbus-Lib-Installer-5.1.0.
Navigate to the new directory:
cd Hiota-Modbus-Lib-Installer-5.1.0
Use the following command to enable file executable permissions for the installer script:
chmod +x installer.sh
Run the installer:
./installer.sh <cluster-fqdn>
Results
Configure a Modbus adapter
You can perform the following steps to configure the installed Modbus adapter.
Procedure
Enroll a IIoT Gateway on IIoT Core Services as described in Registering and provisioning IIoT Gateway using CLI.
Add a Modbus datamap to the enrolled device.
The information is stored in the subordinate Modbus server in four different tables. Two tables store on/off discrete values (coils) and two store numerical values (registers). The coils and registers each have a read-only table and a read-write table.
Primary tables | Data type | Address range | Number of records | Type | Notes |
Coil | 1 bit | 00000-09999 | 10000 (0x270F) | Read-Write | This type of data can be changed by an application. |
Discrete input | 1 bit | 10000-19999 | 10000 (0x270F) | Read-Only | This type of data can be provided by an I/O system. |
Input registers | 16 bits | 30000-39999 | 10000 (0x270F) | Read-Only | This type of data can be provided by an I/O system. |
Holding registers | 16 bits | 40000-49999 | 10000 (0x270F) | Read-Write | This type of data can be changed by an application. |
From the Data Route tab in the IIoT Core Services UI, create a Modbus route.
When creating the data route, select HIOTA for the Data Type in the Data Profile section. HIOTA is the IIoT Core Services Common Data Model (CDM). For more information about CDM, see Common Data Model.
Results
Example: Configure multiple Modbus connections
With IIoT Core Services, you can configure multiple Modbus servers for each gateway. The following example shows how to configure two gateways with multiple Modbus servers.
Procedure
Enroll two gateways on IIoT Core Services as described in Install and provision an IIoT Gateway.
In the IIoT Core Services UI, select the Device tab.
Deploy the Modbus adapter on each gateway device.
Follow the instructions in Deploy protocol adapter configurations on a gateway.
- Select one of the gateways to be deployed. Note: You can only select devices that are online and in Ready state.
- Open the Adapter Configuration page.
- In the Adapter Type list, select Modbus.
- Paste your Modbus adapter configuration YAML into the Insert Adapter Configuration File text box.
Example:
- name: "custom-application-1"
  tags:
    - name: "velocity"
      type: "int16"
      slaveAddr: 0x00
      tableName: HoldingRegister
      startAddr: 0x00
      isBigEndian: true
    - name: "temperature"
      type: "int16"
      slaveAddr: 0x00
      tableName: HoldingRegister
      startAddr: 0x01
      isBigEndian: true
    - name: "a_03_int8"
      type: "int8"
      slaveAddr: 0x00
      tableName: HoldingRegister
      startAddr: 0x02
      isBigEndian: true
    - name: "a_04_uint8"
      type: "uint8"
      slaveAddr: 0x00
      tableName: HoldingRegister
      startAddr: 0x03
      isBigEndian: true
    - name: "a_05_int32"
      type: "int32"
      slaveAddr: 0x00
      tableName: HoldingRegister
      startAddr: 0x04
      isBigEndian: true
    - name: "a_06_float"
      type: "float"
      slaveAddr: 0x00
      tableName: HoldingRegister
      startAddr: 0x06
      isBigEndian: true
Field | Description |
name | Tag name. |
type | Data type: bool, int8, uint8, int16, uint16, int32, uint32, int64, uint64, float, double, string. |
slaveAddr | The Modbus device addresses a specific subordinate device by placing the 8-bit subordinate address in the address field of the message (RTU mode). The address field of the message frame contains two characters (in ASCII mode) or 8 binary bits (in RTU mode). Valid addresses are from 1-247. Subordinate address 0 is used for broadcasts. |
tableName | Name of the primary table. The value is one of {'Coils', 'DiscreteInput', 'HoldingRegisters', 'InputRegisters'}. |
startAddr | Start address for tableName. The range is 0-9999. |
isBigEndian | Boolean. Modbus is a "big-endian" protocol; that is, the more significant byte of a 16-bit value is sent before the less significant byte. In some cases, however, 32-bit and 64-bit values are treated as being composed of 16-bit words, and the words are transferred in "little-endian" order. For example, the 32-bit value 0x12345678 would be transferred as 0x56 0x78 0x12 0x34. You can choose the little-endian mode by setting isBigEndian to false. |
- Click Deploy.
Select the Data Route tab to create a data route for each Modbus connection.
NoteGeneral instructions for how to create a data route are found in Creating a data route.- Click Create Data Route.
The Create Data Route page opens.
- Enter a Name for the data route.
- Select the Asset that you are collecting data from.
- For Device Type, select Gateway.
- For Device, select the name of the gateway.
- Enter a Trace ID or keep the default Trace ID that is based on the asset name.
- For Data Type, select HIOTA.
- In the Data Source section, select Modbus from the Protocol field and the corresponding Hostname / IP Address and Port.
- In the Data Destinations section, add the data destination details for one or more data destinations.
- Click Save and Deploy.
Results
Configuring Data Catalog integration
Data Catalog is an optional software component that can be used with IIoT Core to perform data profiling and analyze the content and quality of the data.
To use Data Catalog, contact your Hitachi Vantara sales representative and purchase a separate license.
Prerequisites for configuring Data Catalog integration
Before the Data Catalog integration can be configured, you must complete the following prerequisites:
- Determine which databases to use as data destinations in the data route. Supported databases are Postgres, MinIO, and MongoDB for both default and external databases. For default databases, choose a Lumada Default Setting database as a data destination. Note: For MongoDB, the Lumada Default Setting is currently not supported. To use an external MongoDB, select the non-default MongoDB as a data destination.
For information on how to create data routes in IIoT Core Services, see Manage data routes.
- Install and deploy IIoT Core Services v5.1. For instructions, see Install and configure IIoT Core Services.
- Install and deploy Data Catalog v7.3. Refer to the Data Catalog user documentation.
- On the host where IIoT Core Services is installed, verify that you have access to the <IIoT Core Services installation location>/ldc directory, which contains integration-related configuration and a setup script.
- In addition to alphanumeric characters, only spaces, hyphens, and underscores are supported in column names. Data Catalog jobs will fail if a column name contains any other special characters.
Set database certificates in Data Catalog
If the databases that store IIoT Core data use publicly signed certificates or if no certificates are required for access, there is no need to update the Data Catalog deployment, as long as the CA certificates are in the Java trusted store where Data Catalog is running.
If you are using self-signed certificates or private CA-signed certificates for access, you need to export any certificates to Data Catalog as described in the following instructions.
Procedure
Export the self-signed certificate of the corresponding database and save it in PEM format. Multiple certificates can be chained as a single text value. Use the following command to extract the certificate from a MinIO database in IIoT Core:
openssl s_client -showcerts -connect <FQDN>:<port> </dev/null 2>/dev/null | openssl x509 -outform PEM
Variable | Setting |
FQDN | For an internal database: the fully qualified domain name (FQDN) for the cluster. For an external database: the FQDN of the database server host. |
port | For an internal database: the default port number in the databases table at Message brokers, databases, and external-facing services. For an external database: the database server port. |
Navigate to the path where the custom_values.yaml file is located in the Data Catalog deployment and add the remote server certificate obtained in the previous step.
Example:
agent:
  extraCerts: |+
    -----BEGIN CERTIFICATE-----
    MIIDqDCCApCgAwIBAgIEYYVSOTANBgkqhkiG9w0BAQsFADBbMScwJQYDVQQDDB5SZWdlcnkgU2Vs
    ****************************************************************************
    ********************************cut*****************************************
    ****************************************************************************
    27Su+O458c91NiUcATpaTgHEnYcbh8dhHhZVwg==
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    Z2NwY2RuLmd2dDEuY29tggoqLmd2dDIuY29tgg4qLmdjcC5ndnQyLmNvbYIQKi51
    ****************************************************************
    ********************************cut*****************************
    ****************************************************************
    cQNSKiNbm5XLjx5Rcgz1PG55uW1yDMLj8lE9+8wr
    -----END CERTIFICATE-----
app-server:
  extraCerts: |+
    -----BEGIN CERTIFICATE-----
    MIIDqDCCApCgAwIBAgIEYYVSOTANBgkqhkiG9w0BAQsFADBbMScwJQYDVQQDDB5SZWdlcnkgU2Vs
    ****************************************************************************
    ********************************cut*****************************************
    ****************************************************************************
    27Su+O458c91NiUcATpaTgHEnYcbh8dhHhZVwg==
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    Z2NwY2RuLmd2dDEuY29tggoqLmd2dDIuY29tgg4qLmdjcC5ndnQyLmNvbYIQKi51
    ****************************************************************
    ********************************cut*****************************
    ****************************************************************
    cQNSKiNbm5XLjx5Rcgz1PG55uW1yDMLj8lE9+8wr
    -----END CERTIFICATE-----
Update the Data Catalog deployment so it can communicate with the IIoT Core database by running the following command:
helm upgrade ldc7 -n <ldc namespace> ldc.7.x.x.tgz -f <path>/custom_values.yaml --version="xxx"
Verify Data Catalog user role permissions
Define a user role in Data Catalog with the permissions API Access and Manage Business Glossary.
Procedure
In the Data Catalog Keycloak client, your Data Catalog service user or site administrator must assign a role to a user and record the username and password.
Log in to Data Catalog using the credentials shared by your Data Catalog service user or site administrator.
Navigate to
and select the following permissions:- Manage Business Glossary
- API Access
Enable Data Catalog integration
After you configure the Data Catalog integration and verify Data Catalog user role permissions, you can activate the Data Catalog integration with IIoT Core Services.
Procedure
Log in to IIoT Core Services as root.
On the host where IIoT Core Services is installed, go to the <IIoT Core Services installer location>/ldc directory.
Gather the required Data Catalog information from the README.txt file, then run the following script to enable the integration:
bash ldc-setup.sh <ldc-cluster-fqdn-or-ip-address>
The script configures the proper Data Catalog settings in IIoT Core Services. Use the Data Catalog username and password that are mapped to the role with both API Access and Manage Business Glossary permissions set, as described in Verify Data Catalog user role permissions.
View IIoT Core data in Data Catalog
When the Data Catalog integration is enabled, you can view IIoT Core asset and data route information in Data Catalog.
Any updates to this data in IIoT Core are updated in real time in Data Catalog.
The assets in IIoT Core are automatically imported into Data Catalog as business terms with the format <asset name>_<first 8 digits of asset id>. This includes any asset hierarchies.
Whenever an asset is created, updated, or deleted in IIoT Core, the corresponding Data Catalog business term is created, updated, or deleted.
The following is an example of IIoT Core asset information as it appears in Data Catalog under the Business Glossary:
When a data route is created, updated, or deleted in IIoT Core and the data destination is set to Postgres, MinIO, or MongoDB, the corresponding database is created, updated, or deleted in Data Catalog. In Data Catalog, the database is referred to as a data source. This includes both default databases and custom databases.
The following is an example of IIoT Core data route information as it appears in Data Catalog under Data Sources:
In Data Catalog, you can perform a variety of analytics operations on IIoT Core data sources.
For example, you can run scanning, profiling, and discovery jobs by submitting job templates or sequences against the synchronized IIoT Core data sources.
For information about Data Catalog features and capabilities, see the Data Catalog user documentation.
Install Kafka
You can install Kafka v2.13-3.1.0 with IIoT Core Services 5.1 as an optional component.
Before you begin
- For any previous installation of IIoT Core Services that included Kafka, verify that Kafka and Zookeeper resources are completely uninstalled before executing this procedure by running the following commands (a cleanup sketch for leftover resources follows this list):
kubectl -n kafka get kafkausers
kubectl -n kafka get kafkatopics
kubectl -n kafka get kafkaclusters
kubectl -n kafka get cruisecontroloperations
- Complete the installation of the IIoT Core platform components and IIoT Core Services.
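If the commands in the first prerequisite return leftover custom resources, one way to remove them (a sketch; confirm that deleting these resources is safe in your environment before running it) is:
kubectl -n kafka delete kafkausers --all
kubectl -n kafka delete kafkatopics --all
kubectl -n kafka delete kafkaclusters --all
kubectl -n kafka delete cruisecontroloperations --all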
Procedure
Log in as root user on the installation node.
Navigate to the IIoT Core Services installation directory where the Kafka installation script is located:
cd <iiot-core-installer-dir>
Run the Kafka installer:
./iiot_kafka_install.sh
(Optional) Verify that Kafka has been properly installed by running the following commands:
kubectl -n kafka get pods
kubectl -n zookeeper get pods
Verify that the Kafka pods (kafka cluster, cruise control, kafka-operator) and the Zookeeper pods (zookeeper, zookeeper-operator) are running successfully.
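As a convenience, you can also wait for all pods in both namespaces to report Ready (a sketch; the 600-second timeout is an assumption):
kubectl -n kafka wait --for=condition=Ready pods --all --timeout=600s
kubectl -n zookeeper wait --for=condition=Ready pods --all --timeout=600s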
Results
Kafka and Zookeeper are installed, and their pods are running in the kafka and zookeeper namespaces.
Shut down a Kubernetes node in a three-node cluster
Procedure
Drain the cluster node in preparation for shutdown by running the following command:
kubectl drain <nodename> --ignore-daemonsets --delete-local-data
Note: Draining safely evicts or deletes all pods except mirror pods, and marks the node unschedulable in preparation for maintenance. The drain function waits for graceful termination; do not operate on the node until the command completes.
Wait for the drain to complete.
If a pod is stuck in the Init state even after waiting eight minutes, delete it by running the following command, and then wait a few minutes for the pod to come back online:
kubectl delete pod -n <namespace> <podname>
If a pod is stuck in the Terminating state after a node is drained, force delete the pod by running the following command:
kubectl delete pod -n <namespace> <podname> --grace-period=0 --force
Confirm that all pods are running on the other two nodes by running the following command:
kubectl get pods --all-namespaces -o wide
Wait eight minutes for a pod to be up and running after it has been rescheduled to another node.
Verify volume attachments and ensure that there are no volume attachments left on the drained node by running the following command:
kubectl get volumeattachments -o custom-columns=VANAME:metadata.name,NODENAME:spec.nodeName
Sample output:
VANAME                                                                 NODENAME
csi-0232c4c79c3205b45c15eb2a60e61878df9ef6e546a8d98c7fc2c49619c2af7d   NodeB
csi-3fe0b6b87271201ad9b4f065a49894ac3ee5c8ed67f17ad2766177d58d5092d7   NodeC
csi-4bb22d7f2fcf9f59faba8560cbc37384127bcab09f381dda8ea65f31675a34b7   NodeB
csi-6a02010d32147f167126f16b1baf8f56fff447df29b6446820cb443fb42199af   NodeA
If you still see volume attachments with NODENAME = NodeA (the drained node), delete the volumeattachment with the following command:
kubectl delete volumeattachments csi-xxxx
Repeat the verification and deletion until no volume attachments remain on the node you drained.
Verify that no multipath devices remain by running the following command on NodeA (the drained node):
multipath -ll
Return NodeA (the drained node) to service by marking it schedulable again with the following command:
kubectl uncordon <nodename>
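To confirm that the node is schedulable again, you can check that its status no longer shows SchedulingDisabled:
kubectl get nodes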
Upgrade IIoT Core Services from 5.0 to 5.1
Use the following procedures to upgrade an existing installation of IIoT Core Services from v5.0 to v5.1.
The upgrade carries database assets over unchanged; no data migration is performed.
Prepare cluster nodes for 2.4.1 platform components upgrade
You must complete the following steps for each node in the Kubernetes cluster so that you can upgrade IIoT Core Services platform components to v2.4.1. IIoT Core Services platform components are also referred to as Foundry.
Procedure
Log in as a root user to a Kubernetes node in the cluster that is being upgraded.
Create a kubeconfig file for the Kubernetes node by running the following commands:
sed -i -e 's/localhost/<kubernetes_node_ip>/' ~/.kube/config
kubectl get nodes -o wide
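For example, with a hypothetical node IP address of 10.10.1.21:
sed -i -e 's/localhost/10.10.1.21/' ~/.kube/config
kubectl get nodes -o wide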
Copy the Core-DTR CA into the truststore of the Kubernetes node operating system by running the following commands:
cp /etc/docker/certs.d/<load-balancer-address>\:32400/ca.crt /etc/pki/ca-trust/source/anchors/<load-balancer-address>-32400-ca.crt
update-ca-trust
trust list | grep hitachi
Next steps
Prepare the installer VM for the 2.4.1 platform components upgrade.
Prepare installer VM for 2.4.1 platform components upgrade
Prepare the installer VM so that you can upgrade IIoT Core Services platform components to v2.4.1.
Before you begin
- Complete Prepare cluster nodes for 2.4.1 platform components upgrade.
Procedure
Prepare passwordless SSH login from the installer VM to all Kubernetes nodes by running the following commands on the installer VM, repeating ssh-copy-id for each node (see the loop sketch after these commands):
ssh-keygen
ssh-copy-id root@<kubernetes_node_ip>
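If the cluster has several nodes, a small loop saves repetition (a sketch; the IP addresses are hypothetical):
for ip in 10.10.1.21 10.10.1.22 10.10.1.23; do ssh-copy-id root@${ip}; done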
Download the kubeconfig file that you modified for the upgrade from a node that contains the file by using the following commands:
scp root@<kubernetes_node_ip>:~/.kube/config ~/.kube/config
kubectl get nodes -o wide
helm list -A
Note: For information about modifying the kubeconfig file, see Prepare cluster nodes for 2.4.1 platform components upgrade.
Download the Core-DTR CA cert to the installer VM by using the following commands:
mkdir -p /etc/docker/certs.d/<FQDN>\:32400/
scp -r root@<kubernetes_node_ip>:/etc/docker/certs.d/<FQDN>\:32400/ /etc/docker/certs.d/
Test the Docker connection to the Core-DTR by logging in to the Core-DTR with the following command:
docker login <FQDN>:32400 -u <user name> -p <password>
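Passing -p on the command line exposes the password in shell history; Docker also accepts the password on standard input (a sketch assuming the password is stored in a hypothetical file ~/dtr-password.txt):
docker login <FQDN>:32400 -u <user name> --password-stdin < ~/dtr-password.txt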
Next steps
Prepare to upgrade the platform components to 2.4.1 in an on-premises or GKE cluster.
Prepare to upgrade platform components to 2.4.1 in on-premises cluster
Prepare to upgrade IIoT Core Services platform components to v2.4.1 by downloading the required files, verifying versions of required applications, and running commands to perform various tasks.
Before you begin
- The installer VM has the required software described in the Installation VM prerequisites section of IIoT Core Services prerequisites.
- IIoT Core Services platform components v2.3 is installed on the cluster.
- IIoT Core Services v5.0 is installed.
Procedure
Log in to the installer VM as a root user.
On a node in the cluster, verify the version of istioctl by running the following command:
istioctl version
Verify the version of the following images by running the following commands:
kubectl get sa -A | grep istio
kubectl describe deployment admin-app -n hiota | grep Image
kubectl describe deployment keycloakoperator -n hiota | grep Image
kubectl describe deployment istiod -n istio-system | grep Image
kubectl describe deployment cert-manager -n cert-manager | grep Image
Verify that the installer VM is pointing to the cluster that is being upgraded by running the following command:
kubectl get nodes -o wide
Note: If you do not see the correct nodes, the installer VM is not correctly configured to point to the cluster. For instructions on configuring the installer VM, see Prepare installer VM for 2.4.1 platform components upgrade.
Go to https://support.pentaho.com, download the Foundry-Control-Plane-2.4.1.tgz file, and save the file to the directory where you want to install the upgrade.
Note: You must perform the installation from a directory that is at least two levels from the root level.
Untar the Foundry-Control-Plane-2.4.1.tgz file by running the following commands:
mkdir Foundry-Control-Plane-2.4.1
tar xvfz Foundry-Control-Plane-2.4.1.tgz -C ./Foundry-Control-Plane-2.4.1
To reduce upgrade time, remove istio-injection: enabled from the hiota namespace by running the following command:
kubectl edit ns hiota
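If you prefer a non-interactive command, and assuming istio-injection: enabled is set as a label on the namespace, you can remove it and later restore it with kubectl label:
kubectl label namespace hiota istio-injection-    # trailing hyphen deletes the label
kubectl label namespace hiota istio-injection=enabled    # restores it after the upgrade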
Note: If you do not remove istio-injection: enabled from the hiota namespace, the upgrade might take a very long time and can result in a timeout error, because all -sso-gatekeeper pods are restarted when you run the upgrade command ./upgrade-cluster-services.sh.
Upgrade cluster services by running the following command with a root user name and password:
./upgrade-cluster-services.sh -w service-type=NodePort -u <user name> -p <password>
Note: Upgrading cluster services might take a long time.
After the upgrade completes, add istio-injection: enabled back to the hiota namespace by running the following command:
kubectl edit ns hiota
Apply the Kubernetes custom resource definitions by running the following command:
./apply-crds.sh -r <FQDN>:32400 -u <user name> -p <password> --insecure
Upload the new control plane charts and images by running the following command:
./upload-solutions.sh -C /<filepath>/Foundry-Control-Plane-2.4.1/charts/ -I /<filepath>/Foundry-Control-Plane-2.4.1/images/ -n hiota
Patch the istiod service account in the istio-system namespace with imagePullSecrets by running the following command:
kubectl patch sa istiod -n istio-system -p '{"imagePullSecrets":[{"name":"istio-regcred"}]}'
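To verify that the patch took effect, you can query the service account (a sketch using a JSONPath expression; the output should include istio-regcred):
kubectl get sa istiod -n istio-system -o jsonpath='{.imagePullSecrets[*].name}'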
Obtain the IIoT Core Services platform components user name and password by running the following commands:
echo $(kubectl get keycloakusers -n hiota keycloak-user -o jsonpath='{.spec.user.username}')
echo $(kubectl get keycloakusers -n hiota keycloak-user -o jsonpath='{.spec.user.credentials[0].value}')
Next steps
Upgrade platform components to 2.4.1.
Prepare to upgrade platform components to 2.4.1 in GKE cluster
Prepare to upgrade IIoT Core Services platform components to v2.4.1 by downloading the required files, verifying versions of required applications, and running commands to perform various tasks.
Before you begin
- The installer VM has the required software described in the Installation VM prerequisites section of IIoT Core Services prerequisites.
- IIoT Core Services platform components v2.3 is installed on the cluster.
- IIoT Core Services v5.0 is installed.
Procedure
Log in to the installer VM as a root user.
Verify that the installer VM is pointing to the cluster that is being upgraded by running the following command:
kubectl get nodes -o wide
Note: If you do not see the correct nodes, the installer VM is not correctly configured to point to the cluster. For instructions on configuring the installer VM, see Prepare installer VM for 2.4.1 platform components upgrade.
Go to https://support.pentaho.com, download the Foundry-Control-Plane-2.4.1.tgz file, and save the file to the directory where you want to install the upgrade.
Note: You must perform the installation from a directory that is at least two levels from the root level.
Untar the Foundry-Control-Plane-2.4.1.tgz file by running the following commands:
mkdir Foundry-Control-Plane-2.4.1
tar xvfz Foundry-Control-Plane-2.4.1.tgz -C ./Foundry-Control-Plane-2.4.1
Obtain the access token for the GKE cluster by running the following command:
gcloud auth print-access-token
To reduce upgrade time, remove istio-injection: enabled from the hiota namespace by running the following command:
kubectl edit ns hiota
Note: If you do not remove istio-injection: enabled from the hiota namespace, the upgrade might take a very long time and can result in a timeout error, because all -sso-gatekeeper pods are restarted when you run the upgrade command ./upgrade-cluster-services.sh.
Upgrade cluster services by running the following command, specifying oauth2accesstoken as the user name and the gcloud auth print-access-token output as the password (double quotes are required so that the shell substitutes the token):
./upgrade-cluster-services.sh -w service-type=NodePort -u oauth2accesstoken -p "$(gcloud auth print-access-token)"
Note: Upgrading cluster services might take a long time.
After the upgrade completes, add istio-injection: enabled back to the hiota namespace by running the following command:
kubectl edit ns hiota
Apply the Kubernetes custom resource definitions by running the following command:
./apply-crds.sh -r <FQDN>:32400 -u <user name> -p <password> --insecure
Upload the new control plane charts and images by running the following command:
./upload-solutions.sh -C /<filepath>/Foundry-Control-Plane-2.4.1/charts/ -I /<filepath>/Foundry-Control-Plane-2.4.1/images/ -n hiota
Patch the istiod service account in the istio-system namespace with imagePullSecrets by running the following command:
kubectl patch sa istiod -n istio-system -p '{"imagePullSecrets":[{"name":"istio-regcred"}]}'
Next steps
Upgrade platform components to 2.4.1.
Upgrade platform components to 2.4.1
Upgrade IIoT Core Services platform components to v2.4.1 so that you can upgrade IIoT Core Services from v5.0 to v5.1.
Before you begin
Procedure
Go to the Solution Management site at https://<FQDN>:30443/hiota/hscp-hiota/solution-control-plane/, and log in by using the IIoT Core Services platform components user name and password.
In the Solution Management window, click the Solution Control Plane box.
The solution control plane window opens.
In the Installed tab of the solution control plane window, click the action menu icon, and then click Upgrade.
The Upgrade Solution window opens.
In the Upgrade Solution window, click the Upgrade Version list, select 2.4.1, and then click Confirm.
Click Upgrade.
Note: The upgrade takes approximately 20 minutes to complete. If there are errors during the upgrade, you can roll back to the previous version by clicking the action menu icon and selecting a previous version.
After the upgrade completes, verify that the 2.4.1 upgrade was applied by navigating back to the Solution Management site at https://<FQDN>:30443/hiota/hscp-hiota/solution-control-plane/.
In the Solution Management window, the checkmark on the Solution Control Plane tile turns green, indicating that the 2.4.1 upgrade was applied. The release version updates to 2.4.1. You are now ready to upgrade IIoT Core Services from v5.0 to v5.1.
Next steps
Upgrade IIoT Core Services from 5.0 to 5.1.
Upgrade IIoT Core Services from 5.0 to 5.1
Before you begin
- Back up the configuration.yaml file.
- Verify that backups are available for the databases.
- Download the latest version of the IIoT Core Services v5.1.0 installer and Docker images from https://support.pentaho.com.
- Verify that the size of the backup data matches the size of the original data.
Procedure
Back up the current IIoT Core Services configuration by navigating to <iiot_installation_directory>/iiot-installer-release-5.1.0/ and running the following command, which decodes the stored configuration secret to configuration.yaml.bak:
kubectl get secret -n hiota hiota-installation-configuration-values-secret -o=jsonpath="{.data.configuration\.yaml}" | base64 --decode > configuration.yaml.bak
Log in to the installation node as a root user.
Untar the IIoT Core Services installer and Docker images by running the following commands:
tar -xvf iiot-installer-release-5.1.0.tgz
tar -xvf iiot-docker-images-release-5.1.0.tgz
Navigate to the new directory for the IIoT Core Services installer and Docker images by running the following command:
cd iiot-installer-release-5.1.0
Run the following commands to install the required software libraries:
export PATH=$PATH:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
ln -sf /usr/bin/python3 /usr/bin/python
pip3 install -r requirements.txt
To avoid certificate-related issues, run the following pre-update script:
./iiot_pre_update.sh <FQDN>
Start the IIoT Core Services v5.1.0 update procedure by running the following command:
./iiot_update.sh
Manually upgrade Kafka by running the following command:
./iiot_kafka_upgrade.sh
Results
IIoT Core Services is upgraded to v5.1.0.
Next steps
- Verify that the IIoT Core Services version is v5.1.
- Verify that the assets, routes, gateway, and devices appear correctly in the UI.
Uninstall IIoT Core Services
Procedure
Log in as a root user on the installation node.
Navigate to the product installation directory:
cd iiot-installer-release-5.1.0
Run the uninstall script:
./iiot_uninstall.sh
Select Y(es) to uninstall or N(o) to cancel.
The uninstall script completes.
Delete the iiot-installer-release-5.1.0 folder.
Uninstall the platform components
After you have uninstalled IIoT Core Services, you can uninstall the IIoT Core platform components.
Procedure
Log in as a root user on the installation node.
Navigate to the product installation directory:
cd <installation_dir>/Foundry-Control-Plane-2.4.1/bin
Run the uninstall scripts:
./uninstall-control-plane.sh -n hiota
./uninstall-control-plane.sh -F
Select Y(es) to uninstall or N(o) to cancel.
The uninstall script completes.