Using Spark with IIoT Core Services
Apache Spark is a unified, open-source analytics engine for large-scale data processing in clustered environments.
The Kubernetes Operator for Apache Spark allows you to specify and run Spark applications just like any other workload on Kubernetes. It uses Kubernetes custom resources to specify and run Spark applications and to convey their status. For a complete reference on the custom resource definitions, see the API Definition for the Kubernetes Operator. For details on its design, see the Kubernetes Operator for Apache Spark Design. The operator requires Spark 2.3 or later, the first version to support Kubernetes as a native scheduler backend.
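As an illustration of the custom resource model, a minimal SparkApplication manifest might look like the following sketch. The apiVersion and field names follow the operator's v1beta2 API; the application name, namespace, container image, and jar path are placeholder values, not settings specific to IIoT Core Services.

apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-pi                # name used to track the application
  namespace: default
spec:
  type: Scala                   # language of the application
  mode: cluster                 # run the driver in a pod on the cluster
  image: "gcr.io/spark-operator/spark:v3.1.1"   # placeholder Spark image
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.12-3.1.1.jar"
  sparkVersion: "3.1.1"
  restartPolicy:
    type: Never
  driver:
    cores: 1
    memory: "512m"
    serviceAccount: spark       # service account with pod-creation rights
  executor:
    cores: 1
    instances: 2                # number of executor pods
    memory: "512m"

Once the manifest is applied with kubectl apply -f, the operator launches the driver and executor pods; the application's status can then be read back from the custom resource, for example with kubectl describe sparkapplication spark-pi.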
For information on how to use Spark with IIoT Core Services, contact your Hitachi Vantara representative.