
Pentaho data mining (Weka) performance tips


The most common Weka performance issue is an OutOfMemoryError.

It occurs when resource-intensive algorithms are run against large data sources. To address it, refer to: Increase the Memory Limit in Weka
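The memory limit is controlled by the -Xmx heap setting of the Java VM that launches Weka; the linked article describes how to change it for your installation. As a quick sanity check, a short Java snippet such as the following (a sketch; the CheckHeap class name is illustrative) prints the maximum heap actually available to the running JVM:

    public class CheckHeap {
        public static void main(String[] args) {
            // Maximum heap the JVM will attempt to use (the -Xmx value, if one was set)
            long maxBytes = Runtime.getRuntime().maxMemory();
            System.out.printf("Max heap available: %d MB%n", maxBytes / (1024 * 1024));
        }
    }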

Many learning algorithms convert multi-valued discrete (nominal) fields into binary indicator fields, which can significantly increase the total number of fields. This sort of pre-processing can briefly hold two copies of the data in main memory until the transformation is complete. So even if you have enough memory to complete the task, it can take a long time to run. For this reason, you may need to run Weka on a fast multi-core, multi-CPU 64-bit machine if performance is a concern.
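As a rough illustration of this expansion, the sketch below (assuming a nominal-valued ARFF file at the hypothetical path data.arff) applies Weka's NominalToBinary filter and compares the attribute counts before and after. Note that both the original and the filtered copy of the data exist in memory at the same time, which is exactly the temporary doubling described above.

    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;
    import weka.filters.Filter;
    import weka.filters.unsupervised.attribute.NominalToBinary;

    public class IndicatorExpansion {
        public static void main(String[] args) throws Exception {
            // Load a data set containing multi-valued nominal attributes
            Instances data = DataSource.read("data.arff");
            data.setClassIndex(data.numAttributes() - 1);

            // Expand each nominal attribute into one binary indicator per value
            NominalToBinary toBinary = new NominalToBinary();
            toBinary.setInputFormat(data);
            Instances expanded = Filter.useFilter(data, toBinary);

            System.out.println("Attributes before: " + data.numAttributes());
            System.out.println("Attributes after:  " + expanded.numAttributes());
        }
    }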

Beyond this, data mining tuning involves examining each algorithm you use and adjusting its parameters to improve the speed and accuracy of the results. This is always data- and algorithm-specific, and requires empirical experimentation. If you are running out of memory or experiencing poor performance, consider switching to an incremental (updateable) learning algorithm such as one of the following (a minimal training sketch appears after the list):

  • Naive Bayes
  • Naive Bayes multinomial
  • DMNBtext
  • AODE and AODEsr
  • SPegasos
  • SGD
  • IB1, IBk and KStar
  • Locally weighted learning
  • RacedIncrementalLogitBoost
  • Cobweb

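For example, a minimal sketch of incremental training with NaiveBayesUpdateable (the updateable form of Naive Bayes) might look like the following; the file path is illustrative. The data is streamed instance by instance with ArffLoader, so the full data set is never held in memory at once:

    import java.io.File;

    import weka.classifiers.bayes.NaiveBayesUpdateable;
    import weka.core.Instance;
    import weka.core.Instances;
    import weka.core.converters.ArffLoader;

    public class IncrementalTraining {
        public static void main(String[] args) throws Exception {
            // Read only the header (attribute structure), not the full data set
            ArffLoader loader = new ArffLoader();
            loader.setFile(new File("data.arff"));
            Instances structure = loader.getStructure();
            structure.setClassIndex(structure.numAttributes() - 1);

            // Initialise the classifier with the structure, then feed instances one at a time
            NaiveBayesUpdateable nb = new NaiveBayesUpdateable();
            nb.buildClassifier(structure);

            Instance current;
            while ((current = loader.getNextInstance(structure)) != null) {
                nb.updateClassifier(current);
            }
            System.out.println(nb);
        }
    }
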
See this page on the Pentaho Wiki for more details: http://wiki.pentaho.com/display/DATAMINING/Handling+Large+Data+Sets+with+Weka