Using Spark with PDI
You can run a Spark job with the Spark Submit job entry or execute a PDI transformation in Spark through a run configuration.
These instructions explain how to use the Spark Submit job entry.
Install the Spark Client
Before you start, you must install and configure the Spark client according to the instructions in the Spark Submit job entry documentation (see Spark Submit).
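Once the client is installed, you can confirm that the machine running PDI can reach the spark-submit utility by invoking it directly. The install path below is only an example; substitute the location of your own Spark client.

    # Confirm the Spark client is reachable; prints the Spark version and exits.
    # /opt/spark is an example install location; use your own path.
    /opt/spark/bin/spark-submit --version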
Modify the Spark Sample
The following example demonstrates how to use PDI to submit a Spark job.
Open and Rename the Job
When these instructions ask you to copy files, use either the Hadoop Copy Files job step or the Hadoop command-line tools. For an example of how to do this with PDI, see our tutorial at http://wiki.pentaho.com/display/BAD/Loading+Data+into+HDFS.
- Copy a text file containing the words you want to count to HDFS on your cluster (a command-line sketch follows this list).
- Start Spoon.
- Open the Spark Submit.kjb job, which is in <pentaho-home>/design-tools/data-integration/samples/jobs.
- Select File > Save As, then save the file as Spark Submit Sample.kjb.
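If you use the Hadoop command-line tools rather than the Hadoop Copy Files step, a copy along these lines stages the input file. The local file name and HDFS directory are placeholders for your environment.

    # Create a target directory in HDFS and copy the local input file into it.
    # /tmp/wordcount-input.txt and /user/pentaho/wordcount are example paths.
    hadoop fs -mkdir -p /user/pentaho/wordcount
    hadoop fs -put /tmp/wordcount-input.txt /user/pentaho/wordcount/input.txt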
Submit the Spark Job
To submit the Spark job, complete the following steps.
- Open the Spark PI job entry. Spark PI is the name given to the Spark Submit entry in the sample.
- In the Spark Submit Utility field, enter the path to the spark-submit utility. It is located in the directory where you installed the Spark client.
- In the Application Jar field, enter the path to the Spark examples JAR (either the local copy or the one on HDFS on the cluster). The Word Count example is in this JAR.
- In the Class Name field, add the following: org.apache.spark.examples.JavaWordCount.
- We recommend that you set the Master URL to yarn-client. To read more about other execution modes, see https://spark.apache.org/docs/1.2.1/submitting-applications.html.
- In the Arguments field, enter the path to the file you want to run Word Count on (the sketch after these steps shows roughly how these fields map onto a spark-submit command).
- Click the OK button.
- Save the job.
- Run the job. As it runs, the word count results appear in the Execution pane.
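For reference, the entry's fields correspond roughly to a spark-submit invocation like the one below. The utility path, JAR file name, and input path are examples; use the values from your own installation and cluster.

    # Approximate command-line equivalent of the Spark PI job entry settings.
    # All paths and the JAR file name are examples; adjust them to your setup.
    /opt/spark/bin/spark-submit \
      --class org.apache.spark.examples.JavaWordCount \
      --master yarn-client \
      /opt/spark/lib/spark-examples-1.2.1-hadoop2.4.0.jar \
      hdfs://namenode:8020/user/pentaho/wordcount/input.txt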