Hitachi Vantara Lumada and Pentaho Documentation

Monitoring job activity


Lumada Data Catalog maintains a history of job activity. To view this history, navigate to Management and click Job Activity on the Job Management tile. Alternatively, click View Jobs on the Job Management tile.

On the Job Activity page, you can view all the individual job sequences submitted by users and the status of those jobs. The jobs and their details are listed in the order in which they were submitted.

The Job Activity page contains the following details for each job:

  • Start: Start time for the job.
  • Sequence Name: Name of the sequence triggering the job. If a job is triggered using a sequence, the Template Name field is blank.
  • Template Name: Name of the template triggering the job.
  • Asset Name: Name of the target resource of the job.
  • Agent: Name of the agent performing the job.
  • Time Elapsed: Elapsed time of the job.
  • Submitted By: The user who triggered the job.
  • Status: The job completion status.
You can also do the following tasks in the view:
  • Select columns to view
  • Apply filters
  • Click the More actions icon to view the job details
  • Click the down arrow to view the steps of the activity

Using the Job Activity page

You can view the details of a job by clicking the More actions icon at the end of the row for the job and selecting View Details. The Job Info dialog displays, detailing the sequence or sequences executed for that job.

The icon in the Status column indicates the status of the job. Even when the Status indicates Success, processing for one of the resources may have failed. To verify the status of the resources, click the down arrow at the end of the job row to view the Instance Steps. For more details, click the right arrow for each step to check the counts of Success, Skipped, Incomplete, and other statuses.

The Status column can show the following values:

  • Submitted: The initial status of a job while it is being set up. This status transitions to In Progress.
  • In Progress: Indicates that the job processes are running.
  • Success: Indicates that the job has completed successfully. For Profile Combo sequences, the status switches between In Progress and Success/Failed for each of the sub-jobs involved.
  • Success with Warnings: Indicates that the job has completed successfully, but that parts of the job were unsuccessful. Common reasons include:
      • Problems with permission to the resources on which the job is being performed.
      • Conditions outside of Data Catalog that are incorrect, such as ZooKeeper being inaccessible.
    When the job status is Success with Warnings, a FileNotFound error may appear in the Data Catalog job logs. For more details, refer to the Job Info dialog or to the job logs. The default log location is the /var/log/ldc directory.
  • Failed: Indicates that the job has failed. See the individual job logs for details.
  • Cancelling and Cancelled: The Cancelling status indicates that a cancel action is in progress. The Cancelled status appears when the job dependencies have been successfully freed and the cancel action has succeeded.
  • Incomplete: Indicates that some resources within the data asset could not finish discovery due to issues. The numbers indicate (total file skips + incompletes) / total files.
  • Skipped: The job was skipped. For example, the job may not have matched its resource permissions.
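The skip and incomplete counter described above can be read as the sum of skipped and incomplete resources shown over the total file count. A minimal sketch of that arithmetic (the grouping is my reading of the description, not confirmed by the product):

```python
def incomplete_counter(skipped: int, incomplete: int, total: int) -> str:
    """Format the counter as '(skips + incompletes)/total files'.

    Assumes skipped and incomplete resources are summed and shown over
    the total file count, which is one reading of the description above.
    """
    return f"{skipped + incomplete}/{total}"

# e.g. 3 skipped and 2 incomplete resources out of 100 files:
print(incomplete_counter(3, 2, 100))  # 5/100
```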

Viewing details about a job

When you click the More actions icon and select View Details on a job, the Job Info dialog displays information about that job. To view more information, click the down arrow on a job row to view the Instance Steps, and click the right arrow for each step to view information about the steps.

Job Info

You can view the following job details in the Job Info dialog:

  • Instance Id: The internal reference ID used by Data Catalog.
  • Asset Type: Identifies the type of asset on which the job was executed.
  • Asset Name: The name of the asset.
  • Asset Path List: Specifies the path of the resource or asset.
  • Start: Time the job began.
  • Elapsed: Elapsed run time of the job.

Instance Steps

You can view the following information in the Instance Steps dialog:

  • Status: Status of the step.
  • Command: The internal command Data Catalog executes for the job. You can also execute these commands from the command line.
  • Execution id: The Spark application ID for the current job instance.
  • Spark event log: The path to the Spark event log.
  • Total Size: Total size of the asset processed.
  • Success: Count of resources that were successfully processed.
  • Skipped: Count of resources that were skipped. Directories are identified as file structures and are skipped while their contents are processed successfully. Resources identified as corrupt or in an unsupported format are also skipped. Click the count links to view details.
  • Incomplete: Count of resources that were incompletely processed. Click the count link to view details.
  • Lineage Inserted: Count of lineages inserted for the current lineage job.
  • Term Associations Inserted: Count of term associations inserted for the current term job.
  • Term Associations Removed: Count of term associations removed for the current term job.
  • Start: Time the step began.
  • End: Time the step ended.
  • View Log File (button): View the log file contents for the job.
  • Download Log (button): Downloads the log for the step to your local machine.

Terminate a job instance

Perform the following steps to terminate a job instance after it has been submitted:


  1. Navigate to Management and click Job Activity on the Job Management tile. Alternatively, click View Jobs on the Job Management tile.

  2. Click the More actions icon on the row of the job you want to terminate and click Terminate Instance.

    The instance is terminated. The status for terminated instances changes to Cancelled.
    Note: Because the termination of an instance is handled by the Spark engine, a terminated sequence may display a Success status despite the terminate command issued from the user interface. This situation is more likely to occur with single sequence commands like format/schema/profile/tag when triggered on small-sized resources.

Monitoring Spark jobs

When you run Data Catalog jobs, Data Catalog submits them to the Apache Spark engine as a Spark context. Relevant Spark logging information is captured in the Data Catalog logs.

Note: To capture Spark logging information, configure the Spark history using the External logging solution URL template configuration setting.

For example:

INFO  | 2020-07-07 07:31:54,369 | profile | ResourceExplorerRunner [main]  -    spark.yarn.historyServer.address =
INFO  | 2020-07-07 07:31:54,369 | profile | ResourceExplorerRunner [main]  -    spark.driver.extraJavaOptions = -Dldc.log.dir=/var/log/ldc -Dldc.home=/opt/ldc/agent -Dlog4j.configuration=log4j-driver-client.xml -Dldc.kerberos.keytab.file=ldcuser.keytab
INFO  | 2020-07-07 07:31:54,369 | profile | ResourceExplorerRunner [main]  -    spark.history.ui.port = 18081
INFO  | 2020-07-07 07:31:54,369 | profile | ResourceExplorerRunner [main]  -    spark.driver.extraClassPath = /opt/ldc/agent/lib/dependencies/commons-lang3-3.9.jar:/opt/ldc/agent/conf:/opt/ldc/agent/keytab:/opt/ldc/agent/ext/hive-serde-1.0.1.jar:/opt/ldc/agent/ext/mssql-jdbc-8.2.0.jre8.jar:/opt/ldc/agent/ext/postgresql-42.2.5.jar:/opt/ldc/agent/ext/ldc-hive-formats-2019.3.jar
INFO  | 2020-07-07 07:31:54,369 | profile | ResourceExplorerRunner [main]  -    spark.driver.extraLibraryPath = /usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
INFO  | 2020-07-07 07:31:54,369 | profile | ResourceExplorerRunner [main]  -    spark.history.kerberos.principal =
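The excerpt above follows the Data Catalog log line layout (level | timestamp | job | class [thread] - message). As an illustration, a small sketch that pulls the Spark settings out of such lines (the helper name and sample lines are mine, not part of the product):

```python
import re

# Two Data Catalog log lines copied from the excerpt above.
LINES = [
    "INFO  | 2020-07-07 07:31:54,369 | profile | ResourceExplorerRunner [main]  -    spark.history.ui.port = 18081",
    "INFO  | 2020-07-07 07:31:54,369 | profile | ResourceExplorerRunner [main]  -    spark.yarn.historyServer.address =",
]

# Match "spark.<setting> = <value>" in the message part of each line.
SETTING = re.compile(r"(spark\.[\w.]+)\s*=\s*(.*)$")

def spark_settings(lines):
    """Return a {setting: value} dict for every Spark setting found."""
    found = {}
    for line in lines:
        m = SETTING.search(line)
        if m:
            found[m.group(1)] = m.group(2).strip()
    return found

print(spark_settings(LINES))
# {'spark.history.ui.port': '18081', 'spark.yarn.historyServer.address': ''}
```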

Alternatively, you can monitor Apache Spark jobs at one of the following sites:

  • Cloudera's Apache Spark History Server: http://<cluster IP address>:8888/jobbrowser
  • Ambari's Apache Spark History Server: http://<cluster IP address>:19888/jobhistory

You may also need to specify the Data Catalog service user or, if the service user has a corresponding account, sign in using that account.

Collecting debugging information

Data Catalog provides multiple sources of monitoring information. If you encounter a problem, collect the following information for Data Catalog support.

  • Job messages

    Data Catalog generates console output for jobs run at the command prompt. If a job encounters problems, review the job output for clues. These messages appear on the console and are collected at the debug logging level in the log file /var/log/ldc/ldc-jobs.log.

  • Spark job messages

    Many Data Catalog jobs trigger Spark operations. These jobs produce output that is accessible on the cluster job history server and also through Hue’s job browser or the Ambari Spark job history.

    You can open the Spark job log that corresponds to the profiling operation for a given file or table by opening that resource in the browser and following the link in the profile status.
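When reviewing the job log for clues, it can help to pull out only the WARN and ERROR entries. A minimal sketch, assuming the log line layout shown earlier (the sample lines here are illustrative, not real product output; on a live system you would read /var/log/ldc/ldc-jobs.log instead of the inline list):

```python
# Sample lines in the Data Catalog layout: LEVEL | timestamp | job | class [thread] - message.
SAMPLE = [
    "INFO  | 2020-07-07 07:31:54,369 | profile | ResourceExplorerRunner [main]  -    spark.history.ui.port = 18081",
    "ERROR | 2020-07-07 07:32:10,101 | profile | ResourceExplorerRunner [main]  -    java.io.FileNotFoundException: /data/missing.csv",
]

def problem_lines(lines, levels=("WARN", "ERROR")):
    """Yield log lines whose leading level field matches one of the given levels."""
    for line in lines:
        level = line.split("|", 1)[0].strip()
        if level in levels:
            yield line

for line in problem_lines(SAMPLE):
    print(line)
```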