Using Spark with PDI
You can run a Spark job with the Spark Submit job entry or execute a PDI transformation in Spark through a run configuration.
These instructions explain how to use the Spark Submit job entry.
Install the Spark Client
Before you start, you must install and configure the Spark client according to the instructions in the Spark Submit job entry documentation (see Spark Submit).
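If you want to confirm the client is installed where you expect before building the job, you can call the spark-submit utility directly. The following is a minimal Java sketch, not part of the sample; the /opt/spark install path is a hypothetical placeholder for wherever you installed the Spark client.

```java
import java.io.IOException;

public class SparkClientCheck {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Hypothetical install location; substitute the path where you
        // installed the Spark client.
        String sparkSubmit = "/opt/spark/bin/spark-submit";

        // spark-submit --version prints the client's Spark version and exits.
        Process p = new ProcessBuilder(sparkSubmit, "--version")
                .inheritIO()   // echo the utility's output to this console
                .start();
        int exitCode = p.waitFor();
        System.out.println("spark-submit exited with code " + exitCode);
    }
}
```

If this prints the expected Spark version, the path you enter later in the Spark Submit Utility field is correct.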
Modify the Spark Sample
The following example demonstrates how to use PDI to submit a Spark job.
Open and Rename the Job
To copy files in these instructions, use either the Hadoop Copy Files job entry or Hadoop command line tools. For an example of how to do this using PDI, check out our tutorial at http://wiki.pentaho.com/display/BAD/Loading+Data+into+HDFS.
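If you prefer to script the copy rather than use a job entry, the Hadoop FileSystem API does the same thing. This is a minimal sketch, assuming hadoop-common and the HDFS client libraries are on the classpath and that fs.defaultFS in your core-site.xml points at your cluster; the local and HDFS paths shown are hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyToHdfs {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml / hdfs-site.xml from the classpath,
        // so fs.defaultFS should already point at your cluster.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical paths; use your own text file and target directory.
        Path local = new Path("/tmp/words.txt");
        Path remote = new Path("/user/pdi/wordcount/words.txt");

        fs.copyFromLocalFile(local, remote);  // uploads the file to HDFS
        fs.close();
    }
}
```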
Procedure
Copy a text file that contains words that you would like to count to the HDFS on your cluster.
Start the PDI client (also known as Spoon).
Open the Spark Submit.kjb job, which is in design-tools/data-integration/samples/jobs.
Select File > Save As, then save the file as Spark Submit Sample.kjb.
Results
The sample job is saved as Spark Submit Sample.kjb and is ready to configure.
Submit the Spark Job
Procedure
Open the Spark PI job entry.
Spark PI is the name given to the Spark Submit entry in the sample.
Indicate the path to the spark-submit utility in the Spark Submit Utility field. It is located where you installed the Spark client.
Indicate the path to your Spark examples JAR (either the local version or the one on the cluster in HDFS) in the Application Jar field. The Word Count example is in this JAR.
In the Class Name field, add the following: org.apache.spark.examples.JavaWordCount.
We recommend that you set the Master URL to yarn-client. To read more about other execution modes, see https://spark.apache.org/docs/2.2.0/submitting-applications.html.
In the Arguments field, indicate the path to the file you want to run Word Count on. (A programmatic equivalent of these settings is sketched after this procedure.)
Click the OK button.
Save the job.
Run the job.
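The fields you just filled in map closely onto Spark's own launcher API. For reference, here is a minimal sketch of the same submission done programmatically with org.apache.spark.launcher.SparkLauncher. The Spark home, examples JAR, and input paths are hypothetical placeholders, and yarn-client matches the Master URL recommended above; this is an illustration of what the entry submits, not how PDI implements it.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.spark.launcher.SparkLauncher;

public class SubmitWordCount {
    public static void main(String[] args) throws Exception {
        Process spark = new SparkLauncher()
                // Hypothetical client install location (Spark Submit Utility field).
                .setSparkHome("/opt/spark")
                // Hypothetical examples JAR path (Application Jar field).
                .setAppResource("/opt/spark/examples/jars/spark-examples_2.11-2.2.0.jar")
                // Class Name field.
                .setMainClass("org.apache.spark.examples.JavaWordCount")
                // Master URL field.
                .setMaster("yarn-client")
                // Arguments field: the file to run Word Count on.
                .addAppArgs("hdfs:///user/pdi/wordcount/words.txt")
                .launch();

        // Stream the driver's stdout so the word counts are visible here.
        try (BufferedReader out = new BufferedReader(
                new InputStreamReader(spark.getInputStream()))) {
            String line;
            while ((line = out.readLine()) != null) {
                System.out.println(line);
            }
        }
        spark.waitFor();
    }
}
```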
Results
When the job completes successfully, the word counts for your file appear in the PDI log output.