Monitoring job activity
Pentaho Data Catalog maintains a history of job activity. To view this history, click Management in the left navigation menu and click Job Activity on the Job Management card.
On the Job Activity page, you can view all the individual job sequences submitted by users and the status of those jobs. The jobs and their details are listed in the order in which they were submitted.
The Job Activity page contains the following details for each job:
| Column | Description |
| --- | --- |
| Start | Start time for the job. |
| Sequence Name | Name of the sequence triggering the job. If a job is triggered using a sequence, the Template Name field is blank. |
| Template Name | Name of the template triggering the job. |
| Asset Name | Name of the target resource of the job. |
| Agent | Name of the agent performing the job. |
| Time Elapsed | Elapsed time of the job. |
| Submitted By | The user who triggered the job. |
| Status | The job completion status. |
On the Job Activity page, you can:
- Select columns to view.
- Apply filters.
- Click the More actions icon to view the job details.
- Click the down arrow to view the steps of the activity.
Using the Job Activity page
You can view the details of a job by clicking the More actions icon at the end of the row for the job and selecting View Details. The Job Info dialog displays, detailing the sequence or sequences executed for that job.
The icon in the Status column indicates the status of the job. Even when the Status column indicates Success, processing may have failed for one of the resources. To verify the status of the resources, click the down arrow at the end of the job row to view the Instance Steps. For more detail, click the right arrow for each step to check the counts of Success, Skipped, Incomplete, and other statuses.
| Status | Meaning |
| --- | --- |
| Submitted | The initial status of a job while it is being set up. This status transitions to In Progress. |
| In Progress | Indicates that the job processes are running. |
| Success | Indicates that the job has completed successfully. For Profile Combo sequences, the status switches between In Progress and Success/Failed for each of the sub-jobs involved. |
| Success with Warnings | Indicates that the job has completed successfully, but that parts of the job were unsuccessful. For example, a FileNotFound error may appear in the Data Catalog job logs. For more details, refer to the Job Info dialog or to the job logs. The default log location is the /var/log/ldc directory. |
| Failed | Indicates that the job has failed. See the individual job logs for details. |
| Cancelling & Cancelled | The Cancelling status indicates a cancel action is in progress. The Cancelled status appears when the job dependencies have been successfully freed and the cancel action has succeeded. |
| Incomplete | The count of resources within the data asset that could not finish discovery due to issues. The numbers indicate (skipped + incomplete) / total files. |
| Skipped | The job was skipped. For example, the job may not have matched its resource permissions. |
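As a quick illustration of the Incomplete count format described above, with made-up numbers (3 skipped and 2 incomplete resources out of 120 files):

```shell
# Hypothetical per-asset counts; the reported format is
# "skips + incompletes / total files"
SKIPPED=3
INCOMPLETE=2
TOTAL=120
echo "$((SKIPPED + INCOMPLETE))/${TOTAL}"
# → 5/120
```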
Viewing details about a job
When you select View Details on a job, the Job Info dialog displays information about that job. To view more information, click the down arrow on a job row to view the Instance Steps, and click the right arrow for each step to view information about the steps.
You can view the following job details in the Job Info dialog:
| Field | Description |
| --- | --- |
| Instance Id | The internal reference ID used by Data Catalog. |
| Asset Type | Identifies the type of asset on which the job was executed. |
| Asset Name | The name of the asset. |
| Asset Path List | Specifies the path of the resource or asset. |
| Start | Time the job began. |
| Elapsed | Elapsed time of the job. |
Instance Steps
You can view the following information in the Instance Steps dialog:
| Field | Description |
| --- | --- |
| Status | Status of the step. |
| Command | The internal command Data Catalog executes for the job. You can also execute these commands from the command line. |
| Execution id | The Spark application ID for the current job instance. |
| Spark event log | The path to the Spark event log. |
| Total Size | Total size of the asset processed. |
| Success | Count of resources that were successfully processed. |
| Skipped | Count of resources that were skipped. Directories are identified as file structures and are skipped while their contents are processed. Resources identified as corrupt or in an unsupported format are also skipped. Click the count links to view details. |
| Incomplete | Count of resources that were incompletely processed. Click the count link to view details. |
| Lineage Inserted | Count of lineages inserted for the current lineage job. |
| Term Associations Inserted | Count of term associations inserted for the current term job. |
| Term Associations Removed | Count of term associations removed for the current term job. |
| Start | Time the step began. |
| End | Time the step ended. |
| View Log File (button) | Displays the log file contents for the job. |
| Download Log (button) | Downloads the log for the step to your local machine. |
Terminate a job instance
Procedure
1. Click Management in the left navigation menu and click Job Activity on the Job Management card.
2. Click the More actions icon on the row of the job you want to terminate and click Terminate Instance.
The instance is terminated. The status for terminated instances changes to Cancelled.

Note: Because the termination of an instance is handled by the Spark engine, a terminated sequence may display a Success status despite the terminate action from the user interface. This situation is more likely to occur with single-sequence commands like format/schema/profile/tag when triggered on small resources.
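Because the Spark engine owns the final application state, you can cross-check a terminated instance against the cluster itself. The application ID below is a made-up example of the value shown in a step's Execution id field; on a YARN-managed cluster, running the printed command reports the application's real final state:

```shell
# Hypothetical application ID copied from the Instance Steps "Execution id" field
APP_ID="application_1594100000000_0042"
echo "yarn application -status ${APP_ID}"
# Run the printed command on the cluster to see the real Final-State
# (for example KILLED for a terminated job, or SUCCEEDED).
```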
Monitoring Spark jobs
When you run Data Catalog jobs, Data Catalog submits these jobs as Spark context to the Apache Spark™ engine. Relevant Spark logging information is captured in the Data Catalog logs.
For example:
```
INFO | 2020-07-07 07:31:54,369 | profile | ResourceExplorerRunner [main] - spark.yarn.historyServer.address = hdp265.ldc.com:18081
INFO | 2020-07-07 07:31:54,369 | profile | ResourceExplorerRunner [main] - spark.driver.extraJavaOptions = -Dldc.log.dir=/var/log/ldc -Dldc.home=/opt/ldc/agent -Dlog4j.configuration=log4j-driver-client.xml -Djavax.xml.parsers.SAXParserFactory=com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl -Djavax.xml.transform.TransformerFactory=com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl -Djavax.xml.parsers.DocumentBuilderFactory=com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl -Dldc.kerberos.keytab.file=ldcuser.keytab -Dldc.kerberos.principal=ldcuser@hitachivantara.com
INFO | 2020-07-07 07:31:54,369 | profile | ResourceExplorerRunner [main] - spark.history.ui.port = 18081
INFO | 2020-07-07 07:31:54,369 | profile | ResourceExplorerRunner [main] - spark.driver.extraClassPath = /opt/ldc/agent/lib/dependencies/commons-lang3-3.9.jar:/opt/ldc/agent/conf:/opt/ldc/agent/keytab:/opt/ldc/agent/ext/hive-serde-1.0.1.jar:/opt/ldc/agent/ext/mssql-jdbc-8.2.0.jre8.jar:/opt/ldc/agent/ext/postgresql-42.2.5.jar:/opt/ldc/agent/ext/ldc-hive-formats-2019.3.jar
INFO | 2020-07-07 07:31:54,369 | profile | ResourceExplorerRunner [main] - spark.driver.extraLibraryPath = /usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
INFO | 2020-07-07 07:31:54,369 | profile | ResourceExplorerRunner [main] - spark.history.kerberos.principal = spark-hdp265@hitachivantara.com
```
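To pull just the Spark settings out of a log like this, a simple grep works. The sketch below writes a sample line to a temporary file so it is self-contained; the file path and sample line are illustrative, not a real Data Catalog log:

```shell
# Create a sample log line like the excerpt above (illustrative only)
LOG=$(mktemp)
echo 'INFO | 2020-07-07 07:31:54,369 | profile | ResourceExplorerRunner [main] - spark.history.ui.port = 18081' > "$LOG"

# Extract the "key = value" Spark settings from the log
grep -o 'spark\.[A-Za-z.]* = .*' "$LOG"
# → spark.history.ui.port = 18081

rm -f "$LOG"
```

Against a real installation, point the same grep at the files under /var/log/ldc instead of the temporary file.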
Alternatively, you can monitor Apache Spark jobs at one of the following sites:
- Cloudera's Apache Spark History Server:
http://<cluster IP address>:8888/jobbrowser
- Ambari's Apache Spark History Server:
http://<cluster IP address>:19888/jobhistory
You may also need to specify the Data Catalog service user or, if the service user has a corresponding account, sign in using that account.
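You can also query the Spark History Server REST API directly. The host and port below are assumptions taken from the log excerpt earlier in this section (spark.history.ui.port = 18081); substitute your own cluster values:

```shell
# Hypothetical history server host; the port matches spark.history.ui.port
# from the log excerpt above
HISTORY_HOST="hdp265.ldc.com"
HISTORY_PORT=18081
URL="http://${HISTORY_HOST}:${HISTORY_PORT}/api/v1/applications"
echo "$URL"
# → http://hdp265.ldc.com:18081/api/v1/applications

# curl -s "$URL"   # returns a JSON list of completed Spark applications
```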
Collecting debugging information
Data Catalog features multiple sources of monitoring information. If you encounter a problem, collect the following information for Data Catalog support.
Job messages
Data Catalog generates console output for jobs run at the command prompt. If a job encounters problems, review the job output for clues to the problem. These messages appear on the console and are collected at the debug logging level in /var/log/ldc/ldc-jobs.log.
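When scanning ldc-jobs.log for problems, filtering by log level is a quick first pass. This sketch uses a throwaway sample file so it runs anywhere; the messages are invented for illustration and stand in for /var/log/ldc/ldc-jobs.log:

```shell
# Sample log with invented messages (stands in for /var/log/ldc/ldc-jobs.log)
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
INFO | 2020-07-07 07:31:54,369 | profile | job started
WARN | 2020-07-07 07:32:10,102 | profile | FileNotFound: part-0003.parquet
ERROR | 2020-07-07 07:33:01,517 | profile | step failed
EOF

# Keep only WARN/ERROR lines, the usual starting point for debugging
grep -E '^(WARN|ERROR)' "$LOG"
rm -f "$LOG"
```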
Spark job messages
Many Data Catalog jobs trigger Spark operations. These jobs produce output that is accessible on the cluster job history server and also through Hue’s job browser or the Ambari Spark job history.
You can open the Spark job log that corresponds to the profiling operation for a given file or table by opening that resource in the browser and following the link in the profile status.