
General issues


Follow the suggestions in these topics to help resolve common software issues associated with the Pentaho suite.

See Pentaho Troubleshooting articles for additional topics.

JDBC driver issues

Before you begin

Before you begin troubleshooting suspected JDBC driver issues, make sure that the correct JDBC driver JARs are installed in the correct locations and that no conflicting driver versions are installed. You can install your drivers with the JDBC Distribution Tool to ensure they are placed in the correct locations. If you suspect a JDBC driver compatibility issue, confirm with your database or driver vendor.

The Pentaho Server needs the appropriate driver to connect to the database that stores your data. You can download drivers from your database vendor's website. Check the JDBC drivers reference for a list of supported drivers and links to vendor websites.

Perform the following steps to install the appropriate driver for your Pentaho Server:

Procedure

  1. Stop the Pentaho Server.

  2. Copy your driver into this location: <pentaho-install-directory>/server/pentaho-server/tomcat/lib.

  3. Start the Pentaho Server.
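
If you want to confirm that the driver JAR is actually visible to the JVM, a quick standalone check is to load the driver class from the classpath. The following is a minimal sketch, assuming org.postgresql.Driver as a placeholder class name; substitute your vendor's driver class:

  // DriverCheck.java -- compile and run with the driver JAR on the classpath, e.g.
  // java -cp .:<pentaho-install-directory>/server/pentaho-server/tomcat/lib/* DriverCheck
  public class DriverCheck {
      public static void main(String[] args) throws Exception {
          // Placeholder class name; replace with your vendor's driver class.
          String driverClass = args.length > 0 ? args[0] : "org.postgresql.Driver";
          // Throws ClassNotFoundException if the JAR is missing from the lib folder.
          Class.forName(driverClass);
          System.out.println(driverClass + " loaded successfully");
      }
  }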

Data conversion issues with MySQL driver 5.0.8

The MySQL JDBC driver version 5.0.8 may cause data conversion errors in the Pentaho Server. For example, a numeric field converted to a string in a SQL statement is returned as a string by version 5.0.7, but as a byte array by version 5.0.8.

To solve this problem, replace the mysql-connector-java-5.0.8.jar with the mysql-connector-java-5.0.7.jar in your client tool or application server's lib folder.
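
To verify which behavior your installed driver exhibits, you can inspect the runtime type of a converted value. A minimal sketch, assuming a reachable MySQL instance; the URL, credentials, and query are placeholders:

  import java.sql.*;

  public class MySqlConversionCheck {
      public static void main(String[] args) throws Exception {
          try (Connection con = DriverManager.getConnection(
                   "jdbc:mysql://localhost:3306/test", "user", "password");
               Statement st = con.createStatement();
               // CAST converts a numeric value to a string in SQL.
               ResultSet rs = st.executeQuery("SELECT CAST(1 AS CHAR) AS converted")) {
              rs.next();
              Object value = rs.getObject("converted");
              // Expect java.lang.String with driver 5.0.7; with 5.0.8 this may
              // print [B (a byte array), the symptom described above.
              System.out.println(value.getClass().getName());
          }
      }
  }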

Fixing jTDS varchar(MAX) limitations in MSSQL 2005

Creating a report that uses a column of type varchar(MAX) may result in an error such as net.sourceforge.jtds.jdbc.ClobImpl@83cf00 when using the jTDS SQL Server driver. This is caused by inherent limitations in older versions of the jTDS driver. The SQL Server JDBC driver version 1.2.2828 also has issues accessing a varchar(MAX) column.

The solution is to upgrade the MSSQL 2005 JDBC driver to version 1.0.809.102 or later. Download and install the driver from http://msdn.microsoft.com/en-us/sqlserver/aa937724, then restart your MSSQL server.
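
If you cannot upgrade the driver immediately, note that output like ClobImpl@83cf00 is the default toString() of a Clob object whose content was never read. A hedged workaround sketch that reads the Clob content explicitly; the connection details, table, and column names are placeholders:

  import java.sql.*;

  public class VarcharMaxRead {
      public static void main(String[] args) throws Exception {
          try (Connection con = DriverManager.getConnection(
                   "jdbc:jtds:sqlserver://localhost:1433/testdb", "user", "password");
               Statement st = con.createStatement();
               ResultSet rs = st.executeQuery("SELECT big_text FROM sample_table")) {
              while (rs.next()) {
                  Object raw = rs.getObject(1);
                  // If the driver maps varchar(MAX) to a Clob, read its content
                  // instead of relying on toString(), which yields ClobImpl@...
                  String text = (raw instanceof Clob)
                      ? ((Clob) raw).getSubString(1, (int) ((Clob) raw).length())
                      : rs.getString(1);
                  System.out.println(text);
              }
          }
      }
  }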

Snowflake timeout errors

When you are pooling the Pentaho database connection, you may see errors. Snowflake JDBC connections use a default timeout of four hours, which contributes to these errors because the same connection can be reused for more than four hours. See the Snowflake documentation for further details.

To resolve this issue, do one of the following actions:

Using a validation query

In Pentaho, you can execute a validation query every time a connection is borrowed from the pool. If the query fails, such as when the connection has timed out, the connection is evicted from the pool and a new connection is created.

Perform the following steps to use a validation query:

Procedure

  1. Open the Database Connection dialog box, then access the Pooling tab.

  2. Scroll down in the Parameters table and select the check box to the left of the validationQuery parameter.

  3. Set the parameter value to select 1.
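
Outside of the Pentaho dialog box, the same mechanism can be sketched with Apache Commons DBCP2 (assumed here as an illustration; Pentaho's pool exposes the same validationQuery and testOnBorrow parameter names). A minimal sketch, assuming the Snowflake JDBC driver is on the classpath; the account URL and credentials are placeholders:

  import java.sql.*;
  import org.apache.commons.dbcp2.BasicDataSource;

  public class ValidationQueryDemo {
      public static void main(String[] args) throws Exception {
          BasicDataSource ds = new BasicDataSource();
          ds.setUrl("jdbc:snowflake://myaccount.snowflakecomputing.com/");
          ds.setUsername("user");
          ds.setPassword("password");
          ds.setValidationQuery("select 1"); // runs each time a connection is borrowed
          ds.setTestOnBorrow(true);          // failed connections are evicted and replaced
          try (Connection con = ds.getConnection();
               Statement st = con.createStatement();
               ResultSet rs = st.executeQuery("select current_timestamp")) {
              rs.next();
              System.out.println(rs.getString(1));
          }
      }
  }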

Next steps

See Define connection pooling for more information on setting database connection pooling parameters.

Disabling the Snowflake timeout

This method avoids running the validation queries, but leaves the connection open indefinitely.

Perform the following steps to disable the timeout:

Procedure

  1. Open the Database Connection dialog box, then access the Options tab.

  2. Enter the CLIENT_SESSION_KEEP_ALIVE parameter and set its value to true.
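
For a connection managed outside of Pentaho, the same parameter can be passed as a JDBC connection property. A minimal sketch, assuming the Snowflake JDBC driver is on the classpath; the account URL and credentials are placeholders:

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.util.Properties;

  public class SnowflakeKeepAlive {
      public static void main(String[] args) throws Exception {
          Properties props = new Properties();
          props.put("user", "user");
          props.put("password", "password");
          // Keeps the session alive instead of timing out after four hours.
          props.put("CLIENT_SESSION_KEEP_ALIVE", "true");
          try (Connection con = DriverManager.getConnection(
                  "jdbc:snowflake://myaccount.snowflakecomputing.com/", props)) {
              System.out.println("Connected: " + !con.isClosed());
          }
      }
  }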

Next steps

See the Snowflake documentation for more details on the CLIENT_SESSION_KEEP_ALIVE parameter.

Log table data is not deleted

When you run a job or transformation, data in the log table is not deleted as expected:

  • When a job runs, the value in the Log line timeout (days) field, along with the LOGDATE and JOBNAME or TRANSNAME fields, which are selected in the Log table field pane, determines when to delete the data in the log table. These fields are in the Job properties window in PDI. See Set up job logging for details.
  • When a transformation runs, the value in the Log record timeout (in days) field, along with the LOGDATE and TRANSNAME fields, which are selected in the Fields to log pane, determines when to delete the data in the log table. These fields are in the Transformation properties window in PDI. See Set up transformation logging for details.

If the name of an existing job or transformation is changed, any log table entries recorded under the previous name are no longer recognized, so they are not deleted from the table even when the Log line timeout (days) or the Log record timeout (in days) value is set.

If you find that your log table is becoming too large, you should manually purge these unrecognized log table rows using your SQL editor.
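
As a hedged sketch of such a purge, the following deletes rows recorded under a job's previous name. The table name, old job name, and retention interval are placeholders, and the date arithmetic shown is MySQL syntax; adjust for your logging database:

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.PreparedStatement;

  public class PurgeStaleLogRows {
      public static void main(String[] args) throws Exception {
          try (Connection con = DriverManager.getConnection(
                   "jdbc:mysql://localhost:3306/pdi_logging", "user", "password");
               // JOB_LOG, the old job name, and the 30-day cutoff are placeholders;
               // LOGDATE and JOBNAME are the log table fields described above.
               PreparedStatement ps = con.prepareStatement(
                   "DELETE FROM JOB_LOG WHERE JOBNAME = ? AND LOGDATE < NOW() - INTERVAL 30 DAY")) {
              ps.setString(1, "old_job_name");
              int deleted = ps.executeUpdate();
              System.out.println("Deleted " + deleted + " stale log rows");
          }
      }
  }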