Install and configure IIoT Core Services
This chapter describes important preparatory steps before installing IIoT Core Services, the installation process itself, and the necessary post-installation tasks.
Preparing for IIoT Core Services installation
Before you begin installing IIoT Core Services, you should review all of the information in this chapter, gather the required information, and complete any necessary installations.
Installation node requirements
As a best practice, install IIoT Core Services from a small VM node outside the cluster but within the same network environment.
The following minimum requirements apply to the VM installation node.
Hardware | Specifications |
CPU | Intel Atom or equivalent processor, 4 cores |
Memory | 16 GB |
Disk space | 500 GB |
IIoT Core Services supports the following operating systems on the installation node:
Software | Version |
Red Hat Enterprise Linux (RHEL) | 8.4 |
IIoT Core Services system requirements
IIoT Core Services is designed to be installed in a cluster with a minimum of three nodes.
The following table lists the minimum requirements for each of the cluster nodes (without Machine learning service):
Hardware | Specifications |
Number of nodes | 3 |
CPU | 16 vCore CPU per node. Example: 2 Intel Xeon Scalable E5-2600 v5 or equivalent AMD processors, 64-bit, 8 cores |
Memory | 16 GB per node |
Disk space | 512 GB per node |
For IIoT Core with Machine learning service, the following minimum requirements apply:
Hardware | Specifications |
Number of nodes | 5 |
CPU | 32 vCore CPU per node. Example: 2 Intel Xeon Silver 4110 CPUs, 8 cores @ 2.10 GHz, 16 threads, or a higher-performance CPU |
Memory | 128 GB per node |
Disk space | 2 TB per node. Minimum PVC size for the whole cluster: 4 TB |
IIoT Core Services supports the following operating system for cluster nodes:
Software | Version |
Red Hat Enterprise Linux (RHEL) | 8.4 |
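A quick way to check an existing node against these minimums is to inspect its CPU, memory, disk, and OS release from a shell. This is a generic sketch using standard Linux tools, not a command sequence from the installer:
# Report CPU core count, memory, root filesystem space, and OS release
nproc
free -h
df -h /
cat /etc/redhat-release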
Installation prerequisites
Observe the following prerequisites before installing IIoT Core Services. These are global prerequisites that apply to all IIoT Core Services components, including the platform component.
IIoT Core Services prerequisites
To install IIoT Core Services, complete the following prerequisites.
- Install and configure Kubernetes in a three-node cluster (five nodes recommended for Machine learning service). See the Kubernetes Documentation for more information.
- Use the -n hiota command option when installing the IIoT Core Services platform components. The platform components must be installed in the same hiota namespace where IIoT Core Services is installed.
- You must have an FQDN for the cluster. (A quick verification sketch follows this list.)
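As a quick sanity check of these prerequisites, you can confirm API server access through your kubeconfig and that the cluster FQDN resolves from the installation node. The commands below are a generic sketch; the kubeconfig path and <cluster-fqdn> are placeholders:
# Confirm the cluster responds through the kubeconfig file
kubectl --kubeconfig ~/.kube/config get nodes
# Confirm the cluster FQDN resolves from the installation node
getent hosts <cluster-fqdn>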
Observe the following specific prerequisites before installing the IIoT Core Services components.
Component | Requirement |
Kubernetes | A secured Kubernetes system (v1.21.6) with a kubeconfig file for API server access. In addition to installing Kubernetes, optionally install and configure a Kubernetes dashboard. |
Default storage class | To maximize solution portability, the Kubernetes system must declare a default storage class. To verify that your Kubernetes cluster declares a default storage class, see the check after this table. |
Storage plugin | Install a storage plugin with the following specifications. Google Kubernetes Engine (GKE): storage class GKE standard; follow the instructions for creating a Kubernetes cluster using GKE in the Google documentation for Kubernetes Engine. On-premises: based on what best suits your hardware environment, choose one of the following options. Both options have been tested with IIoT Core Services 5.0.0: |
Load balancer | Set up a load balancer to forward requests to the Kubernetes cluster node for the following ports: |
Registry requirements | See Registry requirements and Example of how to set up a Docker registry. |
nfs-utils | Only applies to on-premises installations of IIoT Core Services when ML Services is selected as an optional installation: Either install a |
Databases | The following database versions are supported with the current version of IIoT Core: |
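For the default storage class requirement above, the following generic kubectl commands (not specific to IIoT Core Services) show how to list storage classes and mark one as the cluster default; <storage-class-name> is a placeholder:
# List storage classes; the default is flagged with "(default)"
kubectl get storageclass
# Mark an existing class as the cluster default
kubectl patch storageclass <storage-class-name> \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'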
Registry requirements
IIoT Core Services requires a registry for container images that is OCI-compliant and has an SSL certificate.
When deploying both cluster services and the control plane, specify the fully qualified domain name (FQDN) of this registry, either using the -r argument or the installer configuration file. The value you specify needs to include both the host and port for your registry. For example:
-r myregistry.example.com:6000
If your registry is available on port 443, you don't need to specify the port number.
If you are using a registry that supports multitenancy, you also need to include the specific location within the registry that you want to use. For example, if you are using Harbor, include the name of the Harbor project you want to use:
-r myharbor.example.com:6000/my_project
If the registry you are using is insecure (that is, it has a self-signed or otherwise not trusted SSL certificate), you must configure your Docker daemon on the installation node to allow the insecure registry.
This configuration is often done by adding the registry to the insecure-registries section of /etc/docker/daemon.json and restarting the Docker service. Also configure the container runtime on the cluster nodes to allow the insecure registry, and specify the -I flag for both install-control-plane.sh and install-cluster-services.sh.
As a best practice, use a trusted, CA-signed certificate.
For information on setting up a non-production registry that meets requirements, see the following example.
Example of how to set up a Docker registry
This section walks you through setting up a non-production, insecure registry for IIoT Core Services using Docker Registry.
A non-production (development-only) environment requires an OCI-compliant registry that uses HTTPS. To authenticate with the registry, use a username and password, not an auth plugin or credential helper.
Before you begin
The following must be set up before running the procedure:
- Docker
- OpenSSL command line
Procedure
Generate a self-signed OpenSSL certificate.
mkdir -p certs
openssl req \
  -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
  -x509 -days 365 -out certs/domain.crt -subj "/CN=$(hostname -f)"
Start the Docker registry on port 5000 by passing the self-signed OpenSSL certificate to the Docker registry.
docker run -d -v "$(pwd)"/certs:/certs \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  -p 5000:5000 \
  --name registry \
  --restart unless-stopped \
  registry:2
Note: For additional options, see https://docs.docker.com/registry/deploying/.
Ensure that the registry is included in the list of insecure registries in your container runtime. Make sure to include <registry_hostname>:5000 in insecure-registries in /etc/docker/daemon.json.
Create a /etc/docker/daemon.json file, if you don't already have one. In this file, add <registry_hostname>:5000 to the list of insecure registries:
{
  "insecure-registries" : [ "<registry_hostname>:5000" ]
}
Restart Docker for the configuration changes to take effect:
systemctl restart docker
Run docker info and verify that the list of Insecure Registries is correct. For example:
Client:
 Debug Mode: false

Server:
 Containers: 150
  Running: 67
  Paused: 0
  Stopped: 83
 Images: 217
 Server Version: 19.03.5-ce
 ...
 Insecure Registries:
  <registry_hostname>:5000
  127.0.0.0/8
 Live Restore Enabled: false
Test the registry by pulling, tagging, and pushing an image:
docker pull ubuntu
docker image tag ubuntu $(hostname -f):5000/my-ubuntu
docker push $(hostname -f):5000/my-ubuntu
If you see output similar to the following, your registry is working correctly:
Using default tag: latest
latest: Pulling from library/ubuntu
423ae2b273f4: Pull complete
de83a2304fa1: Pull complete
f9a83bce3af0: Pull complete
b6b53be908de: Pull complete
Digest: sha256:04d48df82c938587820d7b6006f5071dbbffceb7ca01d2814f81857c631d44df
Status: Downloaded newer image for ubuntu:latest
docker.io/library/ubuntu:latest
The push refers to repository [<registry_hostname>:5000/my-ubuntu]
1852b2300972: Pushed
03c9b9f537a4: Pushed
8c98131d2d1d: Pushed
cc4590d6a718: Pushed
latest: digest: sha256:0925d086715714114c1988f7c947db94064fd385e171a63c07730f1fa014e6f9 size: 1152
You can also list the contents of the registry using the following commands:
$ curl https://$(hostname -f):5000/v2/_catalog -k
{"repositories":["my-ubuntu"]}
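If you also want to list the tags stored for a repository, the standard Docker Registry v2 API exposes a tags endpoint. For example, for the my-ubuntu repository pushed above (-k is needed because the certificate is self-signed):
$ curl https://$(hostname -f):5000/v2/my-ubuntu/tags/list -k
{"name":"my-ubuntu","tags":["latest"]}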
Results
Next steps
To remove this registry, run the following command:
docker stop registry; docker rm registry
Port configuration requirements (Core Services)
To use IIoT Core Services, you must provide access to the ports used by the system services and databases.
Message brokers, databases, and external-facing services
IIoT Core Services uses a combination of message brokers and RESTful services. Message brokers establish communication between applications and infrastructure for message queues and topics.
Check the following message brokers, databases, and other services to determine if the corresponding ports need to be open to run your IIoT Core Services. The needed ports must be open both on the load balancer and on each node.
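How you open these ports depends on your load balancer and host firewall. As a generic sketch on a RHEL node using firewalld, with a basic reachability test from a client machine (port 30443 is only an example; repeat for each required port):
# Open a NodePort on a RHEL cluster node
firewall-cmd --permanent --add-port=30443/tcp
firewall-cmd --reload
# Test reachability of the same port through the load balancer
nc -vz <load-balancer-fqdn> 30443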
Service | Description | Default port | Optional install | Default login | Links |
AMQP - RabbitMQ | Messaging over AMQP | 30671 | No | admin | Documentation: https://www.rabbitmq.com/documentation.html |
Service | Description | Default port | Optional install | Default login | Links |
CouchDB | Unstructured data database access | 30084 | No | admin | Documentation: http://docs.couchdb.org/en/stable/ |
InfluxDB | Time-series data (historical data) database access | 30086 | No | admin | Documentation: https://docs.influxdata.com/influxdb |
MinIO | Object storage database access | 31000 | No | admin | Documentation: https://docs.min.io/docs/ |
Service | Description | Default port | Optional install | Default login | Links |
Hiota Ingress REST | HTTPS data ingestion to IIoT Core | 30443 | No | N/A | N/A |
Hiota Passport (CouchDB) | Passport API access to Couch data | 30224 | No | N/A | N/A |
Hiota Passport (InfluxDB) | Time-series data (historical data) access | 30223 | No | N/A | N/A |
Hiota Passport (PostgreSQL) | Structured data access | 30228 | No | N/A | N/A |
Hiota Product APIs | Access to management plane APIs | 30443 | No | N/A | Documentation: See Management plane REST API |
OAuth-Helper | Simple OAuth handling | 30303 | No | N/A | N/A |
Service | Description | Default port | Optional install | Default login | Links |
Spark | Kubernetes Operator for Apache Spark | N/A | Yes | N/A | Documentation: https://spark.apache.org/docs/latest/ |
Internal core services
Because the following ports are used by internal IIoT Core Services applications, verify that these ports are open to external access for the assigned IIoT Core Services to work properly.
Service | Description | Default port | Links |
Kafka | Kafka messaging support | 30090, 30091, 30092 | Documentation: https://kafka.apache.org/intro |
RabbitMQ (https-UI) | UI for troubleshooting | 31671 | Documentation: https://www.rabbitmq.com/documentation.html |
MQTT - RabbitMQ | Messaging over MQTT for gateway devices | 30884 | Documentation: https://www.rabbitmq.com/documentation.html |
Service | Description | Default port | Default login | Links |
ArangoDB | ArangoDB multi-model database system | 30529 | admin | Documentation: https://www.arangodb.com/documentation/ |
CouchDB (https-UI) | UI for troubleshooting | 30984 | admin | Documentation: http://docs.couchdb.org/en/stable/ |
Service | Description | Default port | Links |
Docker Trusted Registry | Private Docker trusted registry that stores and manages Docker images for gateway services or user applications that run on gateways | 32500 | Documentation: https://docs.docker.com/ee/dtr/ |
Hiota Alert Manager | Enables alert management | 30443 | N/A |
Hiota Asset | Enables asset and gateway management | 30443 | N/A |
Hiota Kube Resource | Management wrapper API for Kubernetes resources for activities such as deploying software and configurations to gateways | 30443 | N/A |
Hiota Manager (gRPC server) | gRPC server for internal connections | 30999 | N/A |
Hiota Manager (REST server) | REST server for hiota-agent | 30998 | N/A |
Hiota OI Manager | Open Image Manager enables upload of software on the user interface and provides statuses. | 30800 | N/A |
OAuth-Helper | Simple OAuth handling | 30303 | N/A |
Hiota Registry | Access to core and gateway route endpoints and statuses as well as core service configurations | 30443 | N/A |
Hiota User Preferences | User preferences for notifications | 30231 | N/A |
Machine learning service resource requirements
Machine learning service is a collection of services that provides machine-learning tools as part of cloud computing services.
You can activate Machine learning service during the IIoT Core Services installation process to get started with machine learning.
For all cluster nodes, if you elect to enable Machine learning service during the IIoT Core Services installation process, note the following resource requirements:
Requirements | Specifications |
Minimum memory and processor requirements | |
Disk space requirements | |
Access the IIoT Core Services software
To download the IIoT Core Services software, go to https://support.pentaho.com and log in. The software includes the following TAR files:
- IIoT Core Services platform installation package.
- IIoT Core Services main installer script.
- (Optional) Modbus. Install the Modbus protocol after the core installation is complete.
- IIoT Core Services Docker images.
- (Optional) Machine learning service Docker images.
- (Optional) Digital Twin Beta Docker images.
- (Optional) Command Line Interface (CLI) application.
Setting up the Kubernetes cluster
IIoT Core Services can be deployed on different types of Kubernetes clusters, as described in the following sections.
IIoT Core comes with a Docker Trusted Registry (DTR) that is used to store Docker images of gateway services and user applications. See information about the Docker Trusted Registry service in Internal core services.
For memory and disk space requirements for the installation node, see Installation node requirements.
Configure a GKE cluster with user hosted GCR
You can install a Google Kubernetes Engine (GKE) cluster by creating your own hosted Google Container Registry (GCR).
As a best practice, use an installer VM that is outside the cluster but can access the cluster nodes.
To log in to the GCR repository, use the JSON file reference to get the token:
cat <read-write-json-token> | docker login -u _json_key --password-stdin <gcr-url>
cat <read-write-json-token> | HELM_EXPERIMENTAL_OCI=1 helm registry login <gcr-url> -u _json_key --password-stdin
Procedure
Download the following tarballs to a node that can connect to the hosted DTR and has Docker installed:
- iiot-docker-images-release-5.0.0.tgz
- iiot-installer-release-5.0.0.tgz
- (Optional) mlservice-docker-images-1.0.0.tgz
- (Optional) aaf-docker-images-5.0.0.tgz (Digital Twin Beta)
Check the free space on the Docker partition:
df -h /var/lib/docker
The partition is needed later for loading and pushing Docker images to the registry.
Open the core software package by executing the following command:
tar xvf iiot-installer-release-5.0.0.tgz
A new directory is created: iiot-installer-release-5.0.0.
Open the Docker images TAR file:
tar xvf iiot-docker-images-5.0.0.tgz
(Optional) Open the Machine learning service images TAR file and move the untarred folder to iiot-installer-release-5.0.0/mlaas/images:
tar xvf mlservice-docker-images-5.0.0.tgz
mv mlaas/images iiot-installer-release-5.0.0/mlaas/
(Optional) Open the Digital Twin Beta Docker image file and move the untarred folder to iiot-installer-release-5.0.0/aaf/images:
tar xvf aaf-docker-images-5.0.0.tgz
mv aaf/images iiot-installer-release-5.0.0/aaf/
Push the hiota-solutions Helm chart and corresponding solution package to the IIoT Core Services DTR.
This push only needs to be done once before starting the installation process so that the Solution Control Plane can manage hiota-solutions. Otherwise, hiota-solutions will not appear on the control plane user interface even though IIoT Core Services is running.
export HELM_EXPERIMENTAL_OCI=1
helm chart save <path to IIoT Core Services 5.0.0 image>/iiot-installer-release-5.0.0/Module-4/hiota-solutions-5.0.0.tgz <core-dtr url>/hiota-solutions:5.0.0
helm registry login <core-dtr url> -u <username> -p <password>
helm chart push <core-dtr url>/hiota-solutions:5.0.0
kubectl apply -f <path to IIoT Core Services 5.0.0 image>/iiot-installer-release-5.0.0/Module-4/roles/core-services/install/files/hiota_solution_package.yaml
Obtain the hosted GCR login information (json_key file with read-write permission, for example).
Tag the Docker images and push them to the hosted registry. This push needs to be done only once. The script can also be found in the IIoT Core Services installer script image folder: iiot-installer-release-5.0.0/tag-push-docker-images.sh. (A manual tag-and-push sketch follows this procedure.)
Create the GKE three-node cluster.
Note: To run Machine learning service, five nodes are recommended.
Create the installer VM on GCP with Python 3, Helm 3.4.x, and kubectl installed.
Check that you can connect to the Kubernetes cluster with the kubectl command. The rest of the IIoT Core Services installation process is performed on the installer VM.
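The tag-push-docker-images.sh script automates tagging and pushing every image in the bundle. If you need to tag and push a single image manually, the generic Docker commands look like the following sketch; the archive name, image name, tag, and registry URL are placeholders:
# Load an image archive extracted from the images tarball
docker load -i <image-archive>.tar
# Re-tag the image for the hosted registry and push it
docker tag <image>:<tag> <registry-url>/<image>:<tag>
docker push <registry-url>/<image>:<tag>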
Results
Configure a Kubernetes cluster on premises
You can run IIoT Core Services on your own Kubernetes cluster using your own Docker Trusted Registry (DTR).
As a best practice, use an installer VM that is outside the cluster but can still access the cluster nodes.
Procedure
Download the following tarballs to a node that can connect to the hosted DTR and has Docker installed.
- iiot-docker-images-release-5.0.0.tgz
- iiot-installer-release-5.0.0.tgz
- (Optional) mlservice-docker-images-1.0.0.tgz
- (Optional) aaf-docker-images-5.0.0.tgz (Digital Twin Beta)
Check the free space on the Docker partition:
df -h /var/lib/docker
The partition is needed later for loading and pushing Docker images to the registry.
Open the core software package by executing the following command:
tar xvf iiot-installer-release-5.0.0.tgz
A new directory is created: iiot-installer-release-5.0.0.
Open the Docker images TAR file:
tar xvf iiot-docker-images-5.0.0.tgz
(Optional) Open the Machine learning service images TAR file and move the untarred folder to iiot-installer-release-5.0.0/mlaas/images:
tar xvf mlservice-docker-images-5.0.0.tgz
mv mlaas/images iiot-installer-release-5.0.0/mlaas/
(Optional) Open the Digital Twin Beta Docker image file and move the untarred folder to iiot-installer-release-5.0.0/aaf/images:
tar xvf aaf-docker-images-5.0.0.tgz
mv aaf/images iiot-installer-release-5.0.0/aaf/
Obtain the hosted core DTR login information, such as user name and password.
Tag the Docker images and push them to the hosted registry. This push needs to be done only once.
The script can also be found in the IIoT Core Services installer script image folder: iiot-installer-release-5.0.0/tag-push-docker-images.sh.
Create an on-prem three-node Kubernetes cluster with a storage plugin.
Note: To run Machine learning service, five nodes are recommended.
Verify that Python 3, Helm 3.4.x, and kubectl are installed on the installation node.
Results
Installing IIoT Core Services
After the software prerequisites are met and your hosted environment is set up, you can install IIoT Core Services.
Install the platform components
This section describes how to install the IIoT Core Services platform components, also referred to as Foundry, which must be in place before you install IIoT Core Services.
Before you begin
If you are reinstalling the platform components, first uninstall any current instance of the software on your system to conserve system resources. See Uninstall the platform components.
Procedure
Log in to the installation node.
Download the platform installation package to the installation node:
Foundry-Control-Plane-2.3.0.tgz
Open the platform installation package by executing the following command:
mkdir Foundry-Control-Plane-2.3.0
tar xvf Foundry-Control-Plane-2.3.0.tgz -C ./Foundry-Control-Plane-2.3.0
Note: You must perform the installation from a directory that is at least two levels from the root level, as shown above.
Navigate to the platform software directory:
cd Foundry-Control-Plane-2.3.0
Get an access token:
gcloud auth print-access-token
To install Keycloak, manually create a file called foundry-control-plane-values.yaml, which is used with the control plane installation command, with the following contents:
keycloakoperator:
  publicPath: /auth
  configuration:
    keycloak:
      enableDockerAuthentication: true
      instances: 3
      logging:
        enabled: false
If you have credentials for the IIoT Core Services Docker Trusted Registry (DTR), try logging into the DTR from the cluster nodes to verify access.
Navigate to the bin directory:
cd foundry-control-plane-2.3.0/bin
Note: You must perform the installation from a directory that is at least two levels from the root level, as in /root/foundry-control-plane-2.3.0/bin.
Install the platform cluster service using the following commands:
For GKE:
./install-cluster-services.sh -r <core-dtr-url> -w service-type=NodePort -u oauth2accesstoken -p $(gcloud auth print-access-token)
For on-premises:
./install-cluster-services.sh -r <core-dtr-url> -w service-type=NodePort -u <username> -p <password>
Set the number of replicas of istiod, istio-ingressgateway, and istio-egressgateway that you want to run by scaling the deployments. Example using three replicas:
kubectl scale deployment -n istio-system --replicas=3 istiod
kubectl scale deployment -n istio-system --replicas=3 istio-ingressgateway
kubectl scale deployment -n istio-system --replicas=3 istio-egressgateway
Apply Kubernetes custom resource definitions.
For GKE:
cd Foundry-Control-Plane/bin/
./apply-crds.sh -r <gcr-url> -e -u oauth2accesstoken -p $(gcloud auth print-access-token)
For on-premises:
cd Foundry-Control-Plane/bin/
./apply-crds.sh -r <core-dtr-fqdn>:[<port>] -e -u <username> -p <password> --insecure
Install the control plane.
For GKE:
cd Foundry-Control-Plane/bin/
./install-control-plane.sh -r <gcr-url> -c https://<cluster-fqdn>:30443 -n hiota -u oauth2accesstoken -p $(gcloud auth print-access-token) -v <path-to>/foundry-control-plane-values.yaml --skip_cluster_url_check
For on-premises:
cd Foundry-Control-Plane/bin/
./install-control-plane.sh -I -r <core-dtr-url> -c https://<cluster-fqdn>:30443 -n hiota -u <username> -p <password> -v <path-to>/foundry-control-plane-values.yaml
Results
Verify IIoT Core platform installation
Log in to the Solution Management UI, an administrative console for the installation, and check the platform software version.
Procedure
From the command line on the installation node, get the username and password for the Solution Management UI:
- Username:
echo $(kubectl get keycloakusers -n hiota keycloak-user -o jsonpath='{.spec.user.username}')
- Password:
echo $(kubectl get keycloakusers -n hiota keycloak-user -o jsonpath='{.spec.user.credentials[0].value}')
Log into the Solution Management UI using the acquired credentials:
https://<cluster-fqdn>:30443/hiota/hscp-hiota/solution-control-plane/
where <cluster-fqdn> is the location where IIoT Core Services is installed.
Navigate to the Registry tab and replace the JSON password as follows:
- Get the JSON token and save it to the file gcr-token.json.
- Insert _json_key in the Username field on the Registry tab.
- Open the gcr-token.json file to obtain the token.
- Copy the token to the Password field on the Registry tab by pasting the password in a single line, and click to save.
- Click the Solutions tab, then the Installed sub-tab, to verify the platform software version.
Results
Installing IIoT Core Services in a cluster
You can install and configure IIoT Core Services in your cluster using the following instructions.
Start IIoT Core Services installation
Before you begin
- Configure the load balancer to forward traffic to IIoT Core Services.
- If you are reinstalling IIoT Core Services, first uninstall any current instance of the software on your system to conserve system resources.
- Complete all prerequisites as described in IIoT Core Services prerequisites.
- Configure a Kubernetes cluster using one of the options provided in Setting up the Kubernetes cluster. This includes downloading the installer and Docker images and extracting them on the installer node.
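Before starting the installer, you can confirm that the expected tools are available on the installation node. This is a simple sketch using each tool's own version command:
# Confirm Python 3, Helm, and kubectl are on the PATH of the installation node
python3 --version
helm version --short
kubectl version --client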
Procedure
Log in to the installation node.
Navigate to iiot-installer-release-5.0.0.
Install the necessary libraries:
export PATH=$PATH:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
ln -sf /usr/bin/python3 /usr/bin/python
pip3 install -r requirements.txt
Start the IIoT Core Services preinstallation and installation process by running the following script:
./lei_install.sh
The software license terms appear.
Read through the license, pressing Enter to scroll down. At the end of the license, enter y(es) at the prompt to agree to install.
The main installer menu appears. Use this menu to Configure IIoT Core Services.
Configure IIoT Core Services
Procedure
From the main installer menu, enter 1 for Node Configuration.
From the Nodes menu, enter 1 for Add Nodes to configure the cluster.
Enter the requested information:
Field | Description |
Hostname | Enter the hostname of the node being added. |
IP address | Enter the IP address for the node. |
Role | On-premises: Type "master" for all nodes. Cloud: Type "master" for at least one node and label the remaining nodes "worker". |
Select the option to Return to the Main Menu.
Enter 2 for Load Balancer Configuration to configure the load balancer.
Enter 1 to Add / Edit Load Balancer.
Field | Description |
FQDN | Enter the fully qualified domain name (FQDN) of the load balancer. |
Hostname | Enter the hostname of the load balancer. |
IP address | Enter the IP address of the load balancer. |
Select the option to Return to the Main Menu.
Enter 3 for Profile Configuration to configure user, deployment, and storage profiles and other settings.
In the Profile Configuration menu, enter 1 for User Profiles to create users.
Enter 1 to Add User(s) and specify the following information for each new user:
You must add at least one admin user and one user in another role to gain access to the IIoT Core Services user interface after installation.
Field | Description |
Username | Enter a username for the new user. You can use, modify, or delete the following pre-configured users as needed: Admin username: hiota, password: Change11me. Technician username: hiota_read, password: Change11me. |
Password | Enter a password for the new user. |
Email | Enter the email address for the new user. |
Role | Enter the user role: admin, technician, or operator. |
Select the option to Return to the Main Menu.
Enter 2 for Service Passwords.
For each service, either press Enter to accept the generated password, or type your own password and then press Enter. The passwords you create must have at least eight characters, one lowercase letter, one uppercase letter, and one number. When you create your passwords, copy and store them for easy access. If you forget the generated passwords, you can find them in the Kubernetes dashboard.
Select the option to Return to the Main Menu.
Enter 3 for Deployment Profiles and 1 for Edit Profile.
In the Edit Profile menu, select a deployment profile.
Profile | Description |
development (default) | Enter 1 to select the development environment and safely test your deployment without affecting the production environment. |
production | Enter 2 to select the production environment. |
Whether you select development or production in this menu, the minimum requirements in Preparing for IIoT Core Services installation apply. Your selection affects the number of replicas and the requests and limits for CPU and memory.
Select the option to Return to the Main Menu.
Enter 4 for Storage Profiles.
Enter 1 to Edit / Review PVC ReadWriteMode or keep the default setting:
PVC ReadWriteMode | Description |
ReadWriteOnce (default) | Mount the volume as read-write by a single node. Use this option for VMware CNS or GCR. |
ReadWriteMany | Mount the volume as read-write by many nodes. Use this option for HSPC. |
For option 2, Edit / Review Service Storage Partitions, enter the storage size for each service needed for your applications.
Select the option to Return to the Main Menu.
Enter 5 for Cloud / On Prem & DTR Configuration.
Field | Description |
Enable / Disable - Cloud / On Prem Install | Enter true to enable on-premises install or false to enable cloud install. |
View / Edit Core Services Image Registry Configuration | Enter the FQDN URL, port number, and current username and password for the core registry. This information is required even if you select Cloud. |
Select the Return to Main Menu option.
To enable RBAC, enter 6 for Optional Features, then Edit / Review RBAC.
The setting to install RBAC is disabled by default. Toggle on or off by entering y(es) or n(o).
Select the Return to Main Menu option to return to the Profile Configurations menu, then again Return to Main Menu to return to the start menu.
Enter 4 for Optional Installations.
The setting to install optional services is turned off by default. Toggle on or off by entering true or false for each service:
Profile | Description |
Spark | Add the Spark analytics engine capability for large-scale data processing. |
Knative | Deploy, run, and manage serverless, cloud-native applications with Kubernetes. |
ML Service | Deploy cloud-based machine learning tools. See the table below for Machine learning service configuration options. For more information about Machine learning service and resource requirements, see Machine learning service resource requirements. |
Digital Twin Beta | Activate Digital Twin mode and the ability to add digital twin objects that are installed with IIoT Core. Note: To activate this option, you must also select the ML Service option. Note: Digital Twin Beta is currently an experimental feature in IIoT Core Services. |
The Machine learning service options include the following:
ML Service Option | Description |
NFS_Server | Specify storage size for file sharing using a Network File System (NFS) server. The default size is 9 GB. |
Model_Lifecycle | Enter true or false to enable or disable model lifecycle management. |
Model_Management | Enter true or false to enable or disable model management. |
Model_Server | Enter true or false to enable or disable a model server. |
Notebook | Enter true or false to enable or disable Jupyter Notebook. |
Ray | Enter true or false to enable or disable Ray for Machine learning service scaling. |
Select Return to Optional Installations Menu.
Enter 10 to Validate Configuration Parameters.
Return to the Main Menu and enter 10 to Exit Installer Menu.
The installer checks if all the parameters are correctly set.
Results
The configuration parameters you entered are stored in the hiota-installation-configuration-values-secret in the hiota namespace.
Perform core post installation tasks
You can perform the following post installation tasks to verify that the IIoT Core Services installation is successful.
Sign into the IIoT Core Services UI and verify that IIoT Core Services services are running.
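Besides signing in to the UI, a quick way to confirm that the IIoT Core Services pods are up is to query the hiota namespace with kubectl. This is a generic check, not an official verification procedure:
# All IIoT Core Services pods should report Running or Completed
kubectl get pods -n hiota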
Obtain a CA certificate for IIoT Core Services
IIoT Core Services (also referred to as the appliance) uses a self-owned certificate that is not automatically trusted by the operating system, browsers, and other applications. As a result, client services that connect to IIoT Core Services do not automatically trust its certificates and must be configured with the IIoT Core Services certificate.
Before you begin
The following is required to run the script:
Element | Description |
bash | Unix shell (or similar) |
curl | HTTP(S) utility |
jq | JSON string processing utility |
sed | Utility to perform basic text transformations |
Procedure
Log in to the installation node.
Navigate to the iiot-installer-release-5.0.0 directory.
Run the script with the following arguments:
appliance-ca.sh <appliance host address> <user name> [<certificate file path>]
Argument | Description |
appliance host address | The platform IP address. |
user name | The name of the admin user authorized to access the certificate. |
certificate file path | (Optional) The file path where the certificate should be saved. |
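For example, a hypothetical invocation with placeholder values (the host address, user name, and output path below are not values from your installation):
# Save the appliance CA certificate for the platform at 203.0.113.10 as admin
./appliance-ca.sh 203.0.113.10 admin ./iiot-core-ca.crt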
Sign into the IIoT Core Services UI
You can access the IIoT Core Services UI (also called Edge Manager) by signing into its web-based user interface.
The first time you sign in you need the credentials provided by your administrator.
Procedure
Navigate to https://lumada-edge.hiota.<cluster_fqdn>:30443
where <cluster_fqdn> is the fully qualified domain name (FQDN) for the cluster (usually that of the load balancer).
Enter Username and Password.
If you get an Access Denied message, you do not have the required permissions. Contact your administrator to set up access.
Results
Configuring Modbus
IIoT Core Services includes a Modbus adapter that can be configured for TCP communication between an IIoT Gateway and IIoT Core Services.
Install Modbus
Procedure
Navigate to the directory where the Modbus installation file is located and extract the contents of the TAR file.
tar xvf hiota-modbus-lib-5.0.0.tgz
A new directory is created: Hiota-Modbus-Lib-Installer-5.0.0.
Navigate to the new directory:
cd Hiota-Modbus-Lib-Installer-5.0.0
Use the following command to enable file executable permissions for the installer script:
chmod +x Installer.sh
Run the installer:
./installer.sh <cluster-fqdn>
Restart the following containers on the IIoT Gateway:
systemctl restart hiota
docker restart mqtt hiota-phoenix hiota-gateway-store-forward modbus-lib modbus-adapter
Results
Configure a Modbus adapter
You can perform the following steps to configure the installed Modbus adapter.
Procedure
Enroll an IIoT Gateway on IIoT Core Services as described in Registering and provisioning an IIoT Gateway using CLI.
Add a Modbus datamap to the enrolled device.
The information is stored in the subordinate Modbus server in four different tables. Two tables store on/off discrete values (coils) and two store numerical values (registers). The coils and registers each have a read-only table and a read-write table.
Primary tables | Data type | Address range | Number of records | Type | Notes |
Coil | 1 bit | 00000-09999 | 10000 (0x270F) | Read-Write | This type of data can be changed by an application. |
Discrete input | 1 bit | 10000-19999 | 10000 (0x270F) | Read-Only | This type of data can be provided by an I/O system. |
Input registers | 16 bits | 30000-39999 | 10000 (0x270F) | Read-Only | This type of data can be provided by an I/O system. |
Holding registers | 16 bits | 40000-49999 | 10000 (0x270F) | Read-Write | This type of data can be changed by an application. |
From the Data Route tab in the IIoT Core Services UI, create a Modbus route.
When creating the data route, select HIOTA for the Data Type in the Data Profile section. HIOTA is the IIoT Core Services Common Data Model (CDM). For more information about CDM, see GUID-76D1641E-E676-47B4-A750-EDFFD13BF745.
Results
Example: Configure multiple Modbus connections
With IIoT Core Services, you can configure multiple Modbus servers for each gateway. The following example shows how to configure two gateways with multiple Modbus servers.
Procedure
Enroll two gateways on IIoT Core Services as described in Install and provision an IIoT Gateway.
In the IIoT Core Services UI, select the Device tab.
Deploy the Modbus adapter on each gateway device.
Follow the instructions in Deploy protocol adapter configurations on a gateway.
- Select one of the gateways to be deployed. Note: You can only select devices that are online and in Ready state.
- Select the option to view the Adapter Configuration page.
- In the Adapter Type list, select Modbus.
- Paste your Modbus adapter configuration YAML into the Insert Adapter Configuration File text box.
Example:
- name: "custom-application-1"
  tags:
    - name: "velocity"
      type: "int16"
      slaveAddr: 0x00
      tableName: HoldingRegister
      startAddr: 0x00
      isBigEndian: true
    - name: "temperature"
      type: "int16"
      slaveAddr: 0x00
      tableName: HoldingRegister
      startAddr: 0x01
      isBigEndian: true
    - name: "a_03_int8"
      type: "int8"
      slaveAddr: 0x00
      tableName: HoldingRegister
      startAddr: 0x02
      isBigEndian: true
    - name: "a_04_uint8"
      type: "uint8"
      slaveAddr: 0x00
      tableName: HoldingRegister
      startAddr: 0x03
      isBigEndian: true
    - name: "a_05_int32"
      type: "int32"
      slaveAddr: 0x00
      tableName: HoldingRegister
      startAddr: 0x04
      isBigEndian: true
    - name: "a_06_float"
      type: "float"
      slaveAddr: 0x00
      tableName: HoldingRegister
      startAddr: 0x06
      isBigEndian: true
Field | Description |
name | Tag name. |
type | Data type: bool, int8, uint8, int16, uint16, int32, uint32, int64, uint64, float, double, string. |
slaveAddr | The Modbus device addresses a specific subordinate device by placing the 8-bit subordinate address in the address field of the message (RTU mode). The address field of the message frame contains two characters (in ASCII mode) or 8 binary bits (in RTU mode). Valid addresses are from 1-247. Subordinate address 0 is used for broadcasts. |
tableName | Name of the primary table. The value is one of {'Coils', 'DiscreteInput', 'HoldingRegisters', 'InputRegisters'}. |
startAddr | Start address for tableName. The range is 0-9999. |
isBigEndian | Boolean. Modbus is a "big-endian" protocol, that is, the more significant byte of a 16-bit value is sent before the less significant byte. In some cases, however, 32-bit and 64-bit values are treated as being composed of 16-bit words, and the words are transferred in "little-endian" order. For example, the 32-bit value 0x12345678 would be transferred as 0x56 0x78 0x12 0x34. You can choose little-endian mode by setting isBigEndian to false. |
- Click Deploy.
Select the Data Route tab to create a data route for each Modbus connection.
Note: General instructions for how to create a data route are found in Creating a data route.
- Click Create Data Route.
The Create Data Route page opens.
- Enter a Name for the data route.
- Select the Asset that you are collecting data from.
- For Device Type, select Gateway.
- For Device, select the name of the gateway.
- Enter a Trace ID or keep the default Trace ID that is based on the asset name.
- For Data Type, select HIOTA.
- In the Data Source section, select Modbus from the Protocol field and the corresponding Hostname / IP Address and Port.
- In the Data Destinations section, add the data destination details for one or more data destinations.
- Click Save and Deploy.
Results
Shut down a Kubernetes node in a three-node cluster
Procedure
Shut down and drain the cluster node by running the following command:
kubectl drain <nodename> --ignore-daemonsets --delete-local-data
Note: Draining safely evicts or deletes all pods except mirror pods and marks the node unschedulable in preparation for maintenance. The drain function waits for graceful termination. Do not operate on the node until the command completes.
Wait for the drain to complete.
If a pod is stuck in Init state, even after waiting eight minutes, delete the pod and wait a few minutes while the pod comes online. Run the following command to delete the pod:
kubectl delete pod -n <namespace> <podname>
If a pod is stuck in the Terminating state after a node is drained, run the following command to force delete the pod:
kubectl delete pod -n <namespace> <podname> --grace-period=0 --force
Confirm that all pods are running on the other two nodes by running the following command:
kubectl get pods --all-namespaces -o wide
Wait eight minutes for a pod to be up and running after it has been rescheduled to another node.
Verify volume attachments and ensure that there are no volume attachments left on the drained node by running the following command:
kubectl get volumeattachments -o custom-columns=VANAME:metadata.name,NODENAME:spec.nodeName
Sample output:
VANAME                                                                 NODENAME
csi-0232c4c79c3205b45c15eb2a60e61878df9ef6e546a8d98c7fc2c49619c2af7d   NodeB
csi-3fe0b6b87271201ad9b4f065a49894ac3ee5c8ed67f17ad2766177d58d5092d7   NodeC
csi-4bb22d7f2fcf9f59faba8560cbc37384127bcab09f381dda8ea65f31675a34b7   NodeB
csi-6a02010d32147f167126f16b1baf8f56fff447df29b6446820cb443fb42199af   NodeA
If you still see volume attachments with NODENAME = NodeA (the drained node), delete the volumeattachment with the following command:
kubectl delete volumeattachments csi-xxxx
Repeat step 5 to delete all volume attachments left in the node you drained.
Verify that there is nothing in the multipath directory by running the following command on NodeA (the drained node):
multipath -ll
Restart NodeA (the drained node), then put it back in service using the following command:
kubectl uncordon <nodename>
Uninstall IIoT Core Services
Before you begin
Procedure
Log in as a root user on the installation node.
Navigate to the product installation directory:
cd iiot-installer-release-5.0.0
Run the uninstall script:
./lei_uninstall.sh
Select Y(es) to uninstall or N(o) to cancel.
The uninstall script completes.
Delete the iiot-installer-release-5.0.0 folder.
Uninstall the platform components
After you have uninstalled IIoT Core Services, you can uninstall the IIoT Core platform components.
Procedure
Log in as a root user on the installation node.
Navigate to the product installation directory:
cd <installation_dir>/Foundry-Control-Plane-2.3.0/bin
Run the uninstall scripts:
./uninstall-control-plane.sh -n hiota
./uninstall-control-plane.sh -F
Select Y(es) to uninstall or N(o) to cancel.
The uninstall script completes.