
Use machine learning service with IIoT Core

The Machine learning service is a component of Solution Management that enables you to train ML models on historical data and route datasets through those models to generate responses.

The Machine learning service can optionally call third-party simulation engines, sending payloads and receiving the results synchronously. This makes it possible to augment existing data with inferred values that can’t be easily measured in the physical world, such as angular acceleration and torque.
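
The following is a purely illustrative sketch of that synchronous request/response pattern; the simulation engine URL, path, and payload fields are hypothetical and depend on the engine in use:

    # Hypothetical example: post measured values to a simulation engine and
    # receive inferred values (such as torque) in the synchronous response.
    curl -X POST https://simulation-engine.example.com/simulate \
      -H "Content-Type: application/json" \
      -d '{"rpm": 1450, "load": 0.72}'
    # Example response: {"torque": 311.4, "angular_acceleration": 2.8}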

Solution Management is where you train, track, manage, deploy, and monitor ML models.

Learn more

For help creating projects and building, retraining, deploying, and deleting ML models, contact your Hitachi Vantara representative.

Log into the Solution Management UI

To access the ML Model Manager application, log into the Solution Management UI.

Procedure

  1. From the command line on the installation node, get the username and password for the Solution Management UI:

    • Username:
      echo $(kubectl get keycloakusers -n hiota keycloak-user -o jsonpath='{.spec.user.username}')
    • Password:
      echo $(kubectl get keycloakusers -n hiota keycloak-user -o jsonpath='{.spec.user.credentials[0].value}')
  2. Log into the Solution Management UI using the acquired credentials:

    https://<cluster-fqdn>:30443/hiota/hscp-hiota/solution-control-plane/
    where <cluster-fqdn> is the fully qualified domain name of the cluster where IIoT Core Services is installed.

Results

The Solution Management console opens.
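
For scripting, the two credential commands from the procedure above can be captured into shell variables, as in this minimal sketch (the URL placeholder is the same one used in step 2):

    # Retrieve the Solution Management UI credentials into shell variables.
    SM_USER=$(kubectl get keycloakusers -n hiota keycloak-user -o jsonpath='{.spec.user.username}')
    SM_PASS=$(kubectl get keycloakusers -n hiota keycloak-user -o jsonpath='{.spec.user.credentials[0].value}')
    # SM_PASS now holds the password for the login form.
    echo "Log in as ${SM_USER} at https://<cluster-fqdn>:30443/hiota/hscp-hiota/solution-control-plane/"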

Use Model Management

Use the following actions to manage your Machine learning service models:

View Machine learning service projects

Perform the following steps to view your projects:

Procedure

  1. Open the Lumada ML Model Manager application.

  2. Select the Projects menu option at the top of the page.

Results

The Projects page displays a list of available projects with information about each one.
  • Name: Name of the ML service project
  • Status: Status of the ML service project
    • Draft: The ML service project has no associated models with the status of Published.
    • Published: The ML service project has at least one associated model with the status of Published.
  • Tags: Keywords that describe the ML service project
  • Description: Purpose of the ML service project
  • Created: Date the ML service project was created
  • Created By: User who created the ML service project
  • Modified: Date the ML service project was modified
  • Modified By: User who modified the ML service project

View project details

You can view project details, such as properties and deployment information, on the Projects page using the following steps:

Procedure

  1. Open the Lumada ML Model Manager application.

  2. Select Projects from the menu at the top of the page.

  3. Select the project for which you want to view the details.

  4. Select View Details in the Actions menu.

    The PROJECT PROPERTIES section contains information on the following fields:
    • Description: Purpose of the project
    • Status: Status of the project
      • Draft: The project has no associated models with the status of Published.
      • Published: The project has at least one associated model with the status of Published.
    • Tags: Keywords that describe the project
    • Created: Date the project was created
    • Created By: User who created the project
    • Modified: Date the project was modified
    • Modified By: User who modified the project

    The DEPLOYMENT section contains information on the following fields of the ML model deployed to the project:

    Note: You may have to scroll down the Project details page to view the DEPLOYMENT section.

    • Name: Name of the deployment
    • Endpoint: REST endpoint that inferencing applications call to integrate with the deployed model
    • Status: Status of the deployment. The values are:
      • Pending: Waiting for resources
      • Deploying: Deployment is in progress
      • Deployed: In production
      • Failed: The deployment request failed
      • Not Found: Deployment was not found
      • Timeout: The configured timeout was reached
    • ASC: Category name of the analytic. For example, failure prediction.
    • Model: Name of the machine learning model
    • Version: Version of the model
    • Inferences/last hour: Number of inferences performed during the last hour
    • Total Inferences: Total number of inferences performed
    • Start Time: Time when the version’s deployment was requested
    • Average Elapsed Time: Average elapsed time per inference
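
Because the Endpoint field exposes a REST endpoint, an inferencing application can call it over HTTP. The following is a minimal sketch only; the endpoint path, payload schema, and any authentication are hypothetical and depend on how the model was built and deployed:

    # Hypothetical example: post input features to the deployed model's REST
    # endpoint (copied from the Endpoint field) and read back the prediction.
    ENDPOINT="https://<cluster-fqdn>:30443/<model-endpoint-path>"
    curl -X POST "${ENDPOINT}" \
      -H "Content-Type: application/json" \
      -d '{"inputs": [[0.42, 1.7, 3.14]]}'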

View models

Perform the following steps to view the models in your project:

Procedure

  1. Open the Lumada ML Model Manager application.

  2. Select Model Repository from the menu at the top of the page.

  3. Select the model for which you want the details.

    Note: You can also view the list of models by selecting View Models from the Actions menu next to the project whose models you want to view.

    The MODELS LIST displays the following fields:

    • Name: Name of the machine learning model
    • Status: Status of the ML model
      • Draft: A model that has been created but does not have a version. It is an empty model.
      • Ready: A model that has at least one version with Trained status.
      • Published: A model that has at least one version with Deployed status.
    • Project: The project to which the model belongs
    • Tags: Keywords that describe the ML model
    • ASC: Category name of the analytic. For example, Failure Prediction.
    • Created: Date the ML model was created
    • Created By: User who created the ML model
    • Modified: Date the ML model was modified
    • Modified By: User who modified the ML model

View model repository

Perform the following steps to view the model repository:

Procedure

  1. Open the Lumada ML Model Manager application.

  2. Select Model Repository from the menu at the top of the page.

    The list of models displays the following fields:

    • Name: Name of the ML model
    • Status: Status of the ML model
      • Draft: A model has been created but has no versions. It is an empty model.
      • Ready: A model has at least one version with Trained status.
      • Published: A model has at least one version with Deployed status.
    • Project: Project to which the ML model belongs
    • Tags: Keywords that describe the ML model
    • ASC: Category name of the analytic. For example, Failure Prediction.
    • Created: Date the ML model was created
    • Created By: User who created the ML model
    • Modified: Date the ML model was modified
    • Modified By: User who modified the ML model

View model details

Perform the following steps to view the properties of a model:

Procedure

  1. Open the Lumada ML Model Manager application.

  2. Select Model Repository from the menu at the top of the page.

  3. Select View Details from the Actions menu next to the model whose details you want to view.

    The following properties of the model you selected are displayed:

    • Description: Description of the ML model. A counter displays the number of characters in the description and the maximum permitted.
    • Status: Status of the ML model
      • Draft: A model has been created but has no versions. It is an empty model.
      • Ready: A model has at least one version with Trained status.
      • Published: A model has at least one version with Deployed status.
    • Tags: Keywords that describe the ML model
    • Created: Date the ML model was created
    • Created By: User who created the ML model
    • Modified: Date the ML model was modified
    • Modified By: User who modified the ML model
    • Project: Project to which the ML model belongs
    • ASC: Category name of the analytic. For example, Failure Prediction.

View model versions

Perform the following steps to view the versions of a machine learning model:

Procedure

  1. Open the Lumada ML Model Manager application.

  2. Select Model Repository from the menu at the top of the page.

  3. Select View Versions from the Actions menu next to the model whose versions you want to view.

    The following properties of the model versions are displayed:

    • Name: Version name
    • Status: Status of each version. The status values are:
      • Trained: The version has been created or trained but has not been deployed.
      • Deployment pending: The version is in the process of being deployed, pending resource availability.
      • Deploying: Deployment is in progress.
      • Deployed: The version is deployed.
    • Datasets: The datasets used for model training, such as the dataset name and the location from which the dataset was retrieved. This field varies depending on how the ML model is built.
    • Metrics: Performance metrics of the version. This field varies depending on how the ML model is built.
    • Parameters: The parameters of the model version. This field varies depending on how the ML model is built.
    • Training: The training duration of each version.

Compare model versions

Perform the following steps to compare two versions of a model:

Procedure

  1. Open the Lumada ML Model Manager application.

  2. Select Model Repository from the menu at the top of the page.

  3. Select your project from the Filter by project menu.

  4. Select View Versions in the Actions menu.

  5. Select the check box for the two versions that you want to compare and click Compare.

Results

A summary of the datasets, parameters, and metrics is displayed, along with a graphical comparison of the metrics.

View model performance

Machine learning projects can have multiple models, and models can have multiple versions. Lumada ML Model Manager provides a way to view the performance of each of your models. To view a model’s performance, click the Projects tab, and select the project containing the model you want to investigate. On the Details tab of the Projects page, click the More actions menu of the model you want to investigate, and choose View inference. On the Select Class menu, select the class you want to view. The statistics for your model are displayed on the Inference tab.

Note: The classes, the displayed KPIs, and the threshold range values are defined by the model owner. When the data distributions begin to deviate significantly and the threshold range for a performance metric reaches the warning or critical rating, it may be time to retrain the model. Contact your Hitachi Vantara representative.

This tab contains a collection of widgets, KPIs (key performance indicators), and charts that display the metrics collected from the inference results and the subject matter expert review of the data. The data on the tab is divided into these sections:

  • KPI
  • Metrics
  • Classifications
  • Trends
  • Summary

You can view the sections by scrolling the page or by clicking the label for that section.

KPI

The KPI section provides the logged metrics in numeric form. The standard KPIs displayed are (with TP, FP, and FN denoting true positive, false positive, and false negative counts):

  • Precision is the fraction of predicted positive cases that are actually positive: Precision = TP / (TP + FP).
  • Recall is the fraction of actual positive cases that the model successfully identified: Recall = TP / (TP + FN).
  • F1 score is the harmonic mean of Precision and Recall: F1 = 2 × (Precision × Recall) / (Precision + Recall).
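
For example, with hypothetical counts of 90 true positives, 10 false positives, and 20 false negatives:

    Precision = 90 / (90 + 10) = 0.90
    Recall    = 90 / (90 + 20) ≈ 0.82
    F1        = 2 × (0.90 × 0.82) / (0.90 + 0.82) ≈ 0.86
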
Metrics

The Metrics section provides a visualization of these metrics and of the true positive, true negative, and false positive results. When you hover over the bar graphs, the quality threshold levels are displayed. The color-code legend and example KPI threshold values are shown in the following table:

Threshold range              Rating    Color
Greater than 89.20%          Good      Green
Between 79.20% and 89.20%    Warning   Orange
Less than 79.20%             Critical  Red
Classifications

The Classifications section contains the Confusion Matrix, which maps the distribution of the predicted performance versus the ground truth for the model. The total number of observations and the industry-standard matrix breakdown are displayed, as shown in the following table:

                     Predicted positive    Predicted negative
Actual positive      True positive         False negative
Actual negative      False positive        True negative
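
The four cells of this matrix are the counts behind the Precision, Recall, and F1 score formulas described in the KPI section.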
Trends

In the Trends section of the page, historical performance data is graphed on a timeline for each metric, with inference data shown in blue and training data in purple. You can hover over each graph to see the metric’s inference percentage along the selected timeline for all metrics, and the model training thresholds for the F1 score, Precision, and Recall metrics.

Summary

The Summary section contains summary information on the version and the metrics.