Administering the DataOps Suite
As the LDOS administrator, you can manage the solutions provided by the Lumada DataOps Suite in the Solution Management window. This article details the administrator workflow from login to advanced tasks.
Logging in as an admin
Open the Lumada DataOps Suite and log in with your assigned admin credentials. If you do not have these credentials, see the IT administrator who installed and configured your system or your Hitachi Vantara Customer Success representative.
After you log in, the Home page opens. Click the App Switcher icon in the upper-right of the window, and select Solution management from the Applications menu that appears.
The Solution management window is only available to users assigned admin permissions.
Navigating Solution management
Under Solutions in the navigation pane, select Installed. The canvas displays the cards for your installed solutions.
In addition to cards for your main DataOps components, such as Lumada Data Catalog and Lumada Dataflow Studio, you can also view cards for the components that control your DataOps displays and backend services. You can view the configuration and monitor the health of all your solutions from this single dashboard.
Under Solutions in the navigation pane, select Available. In the canvas, browse the list of solutions available for installation or upgrade. To find a specific solution, click a column header to sort the list, and then click the solution name to install or upgrade it.
Lumada DataOps Suite is shipped with the following installed solutions:
Control Plane
Controls the user interface for LDOS, including branding (logos, labels, etc.), navigation, the App Switcher, notifications, and user preferences.
Dataflow Engine
Manages the ETL engine for dataflows in a cluster environment.
Dataflow Engine Broker
Connects the engine to Dataflow Studio.
Data Transformation Editor
Used to make light edits to PDI client transformation (.ktr) and job (.kjb) files.
Dataflow Importer
Imports transformation (.ktr) and job (.kjb) files from Data Transformation Editor and the PDI client as dataflows into Dataflow Studio.
Dataflow Studio
Manages dataflows, including executing, monitoring, and scheduling activities.
Lumada Data Catalog
Application that automates discovery, classification, and management of your organization's data.
Messaging Service
Configures the messaging service for communication.
Object Storage Service
Storage server shipped with LDOS to support Data Catalog objects.
Important: This server is not supported in production use. As a best practice, your organization should point to where you are securely storing data. Ask your IT administrator or Hitachi Vantara Customer Success representative for more information.
Solution Control Plane
Controls the user interface and services of the Solution management application. In addition, the Solution Control Plane provides access to the Identity and Access Management console and to logging information.
Viewing solution cards
Each card on the canvas provides information about the solution or service it performs in the Lumada DataOps Suite. Click a card to view its details.
Each card features a unique solution name that was created at the time of installation. Additionally, each card contains the version number of the solution, its current health status, and a brief description. Drill down into the following tabs for more detailed information:
Status
View the overall health of the solution or service, which is a combination of both installation and resource health. Resource health is derived from the health of individual resources. See Monitoring solution health for more information.
Resources
View a list of your running, pending, and successful resource executions.
Applications
Provides links to the applications comprising the solution. These links may connect to services or to informational resources, such as API reference guides.
Configuration
Validate or edit the values.yaml file for the configuration of the selected solution or service.
Note: Do not change the value of the parameters in the values.yaml file for the Solution Control Plane. For guidance, consult your IT administrator or your Hitachi Vantara Customer Success representative.
History
Lists the installed versions and any changes or upgrades that have been made to the initial configuration of a solution. For example, you can view the currently deployed version and all previously installed versions that are now superseded by it. You can roll back an upgraded or revised solution or service to a previous version, provided that all software dependencies have been satisfied. See Rollback a revision.
Install solutions
Procedure
In the Solution management window, select Available in the navigation pane.
Select the solution you want to install from the list of available solutions and review the configuration options for that solution. Compatible solutions are indicated with a green checkmark.
(Optional) In the Compatibility section, click Show to view the compatibility details of the selected solution.
In the Configuration section, edit the following fields:
In the Solution name field, accept the default name or enter a unique name for this instance of the solution.
Solution names can be up to 53 characters long and may contain only lowercase letters, numbers, periods, and hyphens. (A quick validation sketch follows this procedure.)
In the YAML file, edit the configuration parameters per your requirements.
Click Install.
The solution is installed and a success message appears.
Click Close.
Results
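If you prepare solution names ahead of time or script your installs, a quick check of the naming rule from the procedure above can catch invalid names early. The following Python sketch validates a candidate name against the documented constraints (up to 53 characters; lowercase letters, numbers, periods, and hyphens); it is a convenience check only and is not part of the product.

```python
import re

# Documented naming rule: at most 53 characters, containing only
# lowercase letters, numbers, periods, and hyphens.
_NAME_PATTERN = re.compile(r"^[a-z0-9.\-]{1,53}$")

def is_valid_solution_name(name: str) -> bool:
    """Return True if the candidate name satisfies the documented rule."""
    return bool(_NAME_PATTERN.match(name))

if __name__ == "__main__":
    for candidate in ("dataflow-studio-2", "My Solution", "a" * 54):
        print(f"{candidate!r}: {is_valid_solution_name(candidate)}")
```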
Upgrade solutions
Perform the following steps to upgrade an existing solution. This task assumes you are assigned admin permissions and are logged in to the Solution management window.
Procedure
In the Solution management window, select Installed in the navigation pane, and then select the solution you want to update.
Click the More actions icon and then select Upgrade from the menu.
The Upgrade Solution dialog box appears.
In the Upgrade Solution dialog box, select the version update for your solution and click Upgrade.
The Configuration tab is displayed.
Make any required configuration changes, and then click Upgrade.
A progress message box appears and the solution is upgraded.
Click Close.
Results
Configuring solutions
As the DataOps Suite administrator, you can configure installed solutions to optimize health and performance for your organization. Configuration settings are located in a YAML file for each solution and should be configured according to your organization's settings for that application or service. See the Solutions configuration reference to configure your individual solutions.
To access the configuration settings for an individual solution, open Solution management and navigate to the installed solution. Click the Configuration tab for the selected solution.
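Because configuration changes are made directly in YAML, a syntax error can prevent a solution from being reconfigured or upgraded cleanly. If you draft changes offline before pasting them into the Configuration tab, a quick parse check can catch malformed YAML early. The following Python sketch validates a local working copy of a solution's configuration; the file name values.yaml below refers to a copy you save yourself, not to a path exposed by the product.

```python
import sys
import yaml  # requires the PyYAML package: pip install pyyaml

def check_yaml(path: str) -> None:
    """Parse a local YAML file and report whether it is well formed."""
    try:
        with open(path, "r", encoding="utf-8") as handle:
            data = yaml.safe_load(handle)
    except OSError as err:
        sys.exit(f"{path}: cannot read file: {err}")
    except yaml.YAMLError as err:
        sys.exit(f"{path}: invalid YAML: {err}")
    keys = list(data) if isinstance(data, dict) else []
    print(f"{path}: parsed OK; top-level keys: {keys}")

if __name__ == "__main__":
    # A local working copy of the configuration you plan to paste into
    # the Configuration tab for a solution (hypothetical file name).
    check_yaml("values.yaml")
```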
Managing users and permissions
For identity and access management, you can specify users and permissions for the applications and secure services provided in Lumada DataOps Suite. By default, as the LDOS administrator, you can access the Keycloak Default Realm Console to manage user and permission settings. Keycloak provides LDOS authentication and integration with LDAP.
Roles are part of a global namespace that is shared across all LDOS solutions and manages a common set of users, credentials, roles, and groups. Administrator, Analyst, and Data Scientist are some examples of the default realm roles available to your organization. The role mappings you define set the permission types between a role and a user. You can associate a user with one or more roles, or none at all. For ease of management, you may want to assign access and permissions to specific roles rather than to individual users, because managing individual users becomes difficult as their number changes over time.
To help you get started, Lumada DataOps Suite is shipped with the following default roles and sample users. Each default role and sample user combination is assigned a standard set of permissions, which correspond to roles and permissions in LDOS solutions and tools.
Default role | Sample username* | Description |
Administrator | cmoore | Full access to LDOS, including Solution management and Keycloak. |
Data Engineer | bwayne | Access to dataflow operations, including to the Data Transformation Editor, and access to Data Catalog as an Analyst. |
Data Steward | mpayton | Limited access to dataflow operations, and access to Data Catalog as a Steward. |
Analyst | cparker | Limited access to dataflow operations, and access to Data Catalog as an Analyst. |
Guest | jdoe | View-only access. |
*For each role provided in the table above, the sample username is also the password. |
Lumada DataOps Suite also provides each client with its own dedicated namespace where roles can be defined. Clients are entities that can request user authentication. Clients can be applications and services that want to use secure sign-on. They can also be entities that want to request identity information or to securely call other LDOS services. You can use client roles to further define role actions for specific users, and map those roles to actions authorized in the corresponding solutions. See the Keycloak documentation for more information.
You can also create composite roles that have one or more additional role associations. For example, Data Engineer and Analyst are composite roles that are made of client-specific roles originating from multiple solutions, including Data Catalog and the Control Plane.
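If you want to review role definitions outside the console, the same information is available through Keycloak's Admin REST API. The following Python sketch lists the realm roles for the default realm; the hostname, realm name, and credentials are placeholders for your environment, and the endpoints shown are standard Keycloak Admin REST API paths (older Keycloak releases prefix them with /auth), not anything specific to LDOS.

```python
import requests

# Placeholders: replace with your Keycloak host, realm, and admin credentials.
KEYCLOAK_URL = "https://keycloak.example.com"
REALM = "default"
ADMIN_USER = "admin"
ADMIN_PASSWORD = "..."  # never hard-code real credentials

# Obtain an admin access token from the master realm (standard token endpoint).
token_resp = requests.post(
    f"{KEYCLOAK_URL}/realms/master/protocol/openid-connect/token",
    data={
        "grant_type": "password",
        "client_id": "admin-cli",
        "username": ADMIN_USER,
        "password": ADMIN_PASSWORD,
    },
    timeout=30,
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# List the realm roles (standard Admin REST API endpoint).
roles_resp = requests.get(
    f"{KEYCLOAK_URL}/admin/realms/{REALM}/roles",
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=30,
)
roles_resp.raise_for_status()
for role in roles_resp.json():
    print(role["name"], "-", role.get("description") or "")
```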
View sample users, default roles, and permissions
Follow the steps below to view users, roles, and permissions:
Procedure
Click the App switcher icon and select Solution management from the drop-down menu.
The Solution management window opens.
Click the Solution Control Plane card and then select the Applications tab.
The solutions and tools for LDOS are displayed.
Click Keycloak Default Realm Console.
The Keycloak Admin Console default landing page appears with the settings menu for the default realm displayed.
Click Roles in the navigation pane of the Keycloak Admin Console.
The default roles are displayed.
Next steps
The following table maps LDOS operations to the default roles that can perform them and to the corresponding client roles and clients.
LDOS solution or tool | Operation | Administrator | Data Engineer | Data Steward | Analyst | Guest | Client role | Client
Control Plane | View | X | X | X | X | X | read_context | control-plane-lcp-sso-client |
X | X | X | X | X | read_userInfo | control-plane-lcp-sso-client | ||
X | X | X | X | X | read_controlPlaneConfig | control-plane-lcp-sso-client | ||
X | X | X | X | X | read_staticConfig | control-plane-lcp-sso-client | ||
Configure | X | write_controlPlaneConfig | control-plane-lcp-sso-client | |||||
X | write_staticConfig | control-plane-lcp-sso-client | ||||||
X | delete_staticConfig | control-plane-lcp-sso-client | ||||||
App Switcher (in Control Plane) | View | X | X | X | X | X | read_appSwitcherConfig | control-plane-lap-sso-client |
Configure | X | write_appSwitcherConfig | control-plane-lap-sso-client | |||||
Data Catalog | Administrator | X | Administrator | lumada-data-catalog-app-server-sso-client | ||||
Steward | X | Steward | lumada-data-catalog-app-server-sso-client | |||||
Analyst | X | X | Analyst | lumada-data-catalog-app-server-sso-client | ||||
Guest | X | Guest | lumada-data-catalog-app-server-sso-client | |||||
Data Processing Service | Access to the History Server | X | view_history_server | data-processing-service-sso-client | ||||
Dataflow Studio | Manage dataflows | X | create_data_flow | dataflow-studio-sso-client | ||||
X | update_data_flow | dataflow-studio-sso-client | ||||||
X | remove_data_flow | dataflow-studio-sso-client | ||||||
Manage dataflow groups | X | get_data_flow_groups | dataflow-studio-sso-client | |||||
X | update_data_flow_groups | dataflow-studio-sso-client | ||||||
Execute dataflows | X | X | execute_data_flow | dataflow-studio-sso-client | ||||
X | X | cancel_execution | dataflow-studio-sso-client | |||||
Schedule dataflows | X | schedule_data_flow | dataflow-studio-sso-client | |||||
X | edit_schedule | dataflow-studio-sso-client | ||||||
X | remove_schedule | dataflow-studio-sso-client | ||||||
X | pause_schedule | dataflow-studio-sso-client | ||||||
X | resume_schedule | dataflow-studio-sso-client | ||||||
Dataflow Engine | Execute | X | X | create_execution | pdi-execution-manager-sso-client | |||
X | X | delete_execution | pdi-execution-manager-sso-client | |||||
X | X | get_execution_status | pdi-execution-manager-sso-client | |||||
X | X | ping | pdi-execution-manager-sso-client | |||||
Data Transformation Editor | Access to DTE | X | X | api | data-transformation-editor-sso-client | |||
X | X | executespoon | data-transformation-editor-sso-client | |||||
Solution management | Access to Solution management | X | view | solution-control-plane-sso-client | | | |
Access to Swagger UI | X | view | solution-control-plane-swagger-sso-client | |||||
Access to Kibana | X | view | hscp-hitachi-solutions-kibana-client | |||||
Keycloak | Account management | X | X | X | X | X | view-profile | account |
X | X | X | X | manage-account | account | |||
X | manage-account-links | account | ||||||
X | delete-account | account | ||||||
X | view-applications | account | ||||||
X | view-consent | account | ||||||
X | manage-consent | account | ||||||
Realm management | X | view-realm | realm-management | |||||
X | query-realms | realm-management | ||||||
X | manage-realm | realm-management | ||||||
Clients management | X | create-client | realm-management | |||||
X | view-clients | realm-management | ||||||
X | query-clients | realm-management | ||||||
X | manage-clients | realm-management | ||||||
Users management | X | view-users | realm-management | |||||
X | query-users | realm-management | ||||||
X | query-groups | realm-management | ||||||
X | manage-users | realm-management | ||||||
X | impersonation | realm-management | ||||||
Identity Providers management | X | view-identity-providers | realm-management | |||||
X | manage-identity-providers | realm-management | ||||||
Authorization management | X | view-authorization | realm-management | |||||
X | manage-authorization | realm-management | ||||||
Events management | X | view-events | realm-management | |||||
X | manage-events | realm-management | ||||||
Realm Admin | realm-admin | realm-management |
Monitoring solution health
As an LDOS administrator, you can monitor the health of all solutions in the Installed view. Each card in the canvas displays color-coding and an icon to indicate the status of the solution.
A solution's health status is determined by a combination of its installation health and its resource health. The following table defines the possible statuses with their assigned icons:
Status | Icon | Color | Description |
Healthy | (icon) | Green | The solution is installed and running with healthy resources. |
Warning | (icon) | Yellow | The solution has issues that may impact performance, such as a health issue with component resources. |
Failed | (icon) | Red | The solution is unable to be deployed because of major failures requiring administrator attention. |
Unknown | None | Gray | Health cannot be determined. |
Click a card to inspect the specific health information for a solution. You can view the installation health and resource health in the Status tab of a solution's card to locate an issue to resolve.
Installation Health
The Installation Health card represents the current release status for the solution, which can be one of the following:
- Deployed (healthy)
- Warning (includes states such as Uninstalling, Pending-install, Pending-rollback, and Pending-upgrade)
- Failed
- Unknown
Resource Health
The Resource Health card represents the collective health of the running processes of the Kubernetes pods for the solution. To inspect the health of individual pods, click the Resource Health card to open the Resources tab.
In the Resources tab, you can inspect the list of health statuses for each pod. For example, you can see that a pod may be running, but with errors such as a high restart count. Statuses include:
Status | Icon | Description |
Healthy | (icon) | All containers are ready, or the job is complete. |
Warning | (icon) | Some but not all containers are ready, or a job is in a non-healthy but non-failed state. |
Failed | (icon) | No containers are ready, or the job has failed. |
Unknown | None | No containers have been defined. |
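The status definitions above amount to a simple mapping from container readiness (and job state) to a health value. The following Python sketch expresses that mapping as a standalone function; it mirrors the table for illustration rather than reproducing the product's internal logic, and in practice the readiness counts would come from your cluster's pod status data.

```python
def resource_status(ready: int, total: int,
                    job_failed: bool = False,
                    job_complete: bool = False) -> str:
    """Map container readiness and job state to the statuses in the table."""
    if total == 0:
        return "Unknown"    # no containers have been defined
    if job_failed or ready == 0:
        return "Failed"     # no containers ready, or the job has failed
    if ready == total or job_complete:
        return "Healthy"    # all containers ready, or the job is complete
    return "Warning"        # some but not all containers are ready

if __name__ == "__main__":
    print(resource_status(ready=2, total=2))   # Healthy
    print(resource_status(ready=1, total=3))   # Warning
    print(resource_status(ready=0, total=2))   # Failed
    print(resource_status(ready=0, total=0))   # Unknown
```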
Rollback a revision
Perform the following steps to roll back a revision. This task assumes you are assigned admin permissions and are logged in to the Solution management window.
Procedure
In the Solution management window, select Installed in the navigation pane, and then select the solution you want to roll back.
Select the History tab.
Locate the revision that you want to roll back to and click its Rollback icon.
The Confirm Rollback dialog box opens, indicating the rollback version you selected, its status, and its compatibility.
Click Rollback to proceed.
When finished, a success message appears.
Click Close.
Results
Uninstall a solution
Procedure
In the Solution management window, select Installed in the navigation pane, and then select the solution you want to uninstall.
Click the More actions icon and then select Uninstall from the menu.
The uninstall process starts.
Results
Advanced administration topics
As an LDOS administrator, you may need to perform advanced configuration tasks that impact your entire DataOps system, such as checking cluster volume health, certificates, and networking routes.
Viewing your cluster volumes
As the LDOS administrator, you may need to monitor the health of your cluster volumes to ensure high reliability and efficient throughput of the clusters. You can view the state of persistent volumes provisioned to support databases for the LDOS solutions on the Configurations > Storage page.
Lumada DataOps Suite uses the declared default storage class for the Kubernetes cluster. The backing storage for this class must be HA (high availability) or fault tolerant to prevent data loss.
To view your settings, from the Solution management window, click Storage in the navigation pane. The details of the persistent volume claims for storage and the installed LDOS solutions are listed in the canvas, including the claim name, size, and class of the storage. Click a row to view the storage details for a volume.
Healthy volumes have a state of Bound. When a claim requests storage for a solution, the physical storage is dynamically provisioned to fulfill the request.
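If you also have command-line access to the cluster, the same claim states can be checked outside the UI. The following Python sketch uses the official Kubernetes Python client to list persistent volume claims and flag any that are not Bound; it assumes a working kubeconfig on the machine where you run it and is a convenience check, not part of Solution management.

```python
from kubernetes import client, config  # pip install kubernetes

# Load credentials from your local kubeconfig (assumes cluster access is configured).
config.load_kube_config()
core = client.CoreV1Api()

# List all persistent volume claims and report any that are not Bound.
for pvc in core.list_persistent_volume_claim_for_all_namespaces().items:
    phase = pvc.status.phase
    marker = "" if phase == "Bound" else "  <-- needs attention"
    print(f"{pvc.metadata.namespace}/{pvc.metadata.name}: "
          f"{phase} ({pvc.spec.storage_class_name}){marker}")
```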
Copying the cluster certificate
Procedure
In the Solution management window, click Certificates in the navigation pane.
The Certificates view opens in the canvas.
Click Copy certificate.
Paste the certificate into your certs folder for deployment. You can now use the copied certificate on devices that need to trust the LDOS cluster.
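Once the certificate is saved locally, a client can reference it as the trust anchor for TLS connections to the cluster. The following Python sketch calls an LDOS endpoint over HTTPS using the saved certificate; the URL and file path are placeholders for your environment.

```python
import requests

# Placeholders: replace with your LDOS hostname and the path where you
# saved the copied cluster certificate.
LDOS_URL = "https://ldos.example.com/"
CA_BUNDLE = "certs/ldos-cluster.pem"

# Passing the certificate file via `verify` makes requests trust the
# cluster's certificate authority for this connection.
response = requests.get(LDOS_URL, verify=CA_BUNDLE, timeout=30)
print(response.status_code)
```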
Viewing networking routes
As the LDOS administrator, you can view the assigned networking routes to your LDOS solutions for each application in the cluster. You may want to use this information when considering the optimum route between two or more nodes in relation to total time, cost, and distance.
To view your current network routes in the Solution management window, click Networking in the navigation pane.
The Networking page features the Routes and Internal Ports tabs:
Routes
Use the Routes tab to view all current network connections to your solutions in Lumada DataOps Suite. Each network route has a protocol, hostname, port, path, and a destination service that processes requests.
Click a row on the Routes tab to see the route details for a network connection.
Internal Ports
Use the Internal Ports tab to view a list of all the internal ports used by LDOS solutions in the cluster. This information is useful for balancing the distribution of network or application traffic across multiple servers.