
Hadoop File Output

 


The Hadoop File Output step exports data to text files stored on a Hadoop cluster. It is commonly used to generate comma separated values (CSV files) that are easily read by spreadsheet applications. You can also generate fixed-width files by setting lengths on the fields in the Fields tab.
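For example, the same two rows could be written either as delimited (CSV-style) text or as fixed-width text. The short Python sketch below only illustrates those two layouts; the field names, widths, and sample values are invented, and nothing here is PDI code.

    # Illustration only: two output layouts this step can produce.
    rows = [("Alice", 34, 1200.50), ("Bob", 7, 98.00)]

    # CSV-style output: fields joined by a separator character.
    csv_lines = ["name;age;amount"] + [f"{n};{a};{v:.2f}" for n, a, v in rows]

    # Fixed-width output: each field padded to the length set on the Fields tab.
    fixed_lines = [f"{n:<10}{a:>4}{v:>10.2f}" for n, a, v in rows]

    print("\n".join(csv_lines))
    print("\n".join(fixed_lines))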

General

 

Enter the following information in the transformation step name field.

  • Step Name: Specifies the unique name of the Hadoop File Output step on the canvas. You can customize the name or leave it as the default.

Options

 

The options for the Hadoop File Output transformation step are organized into several tabs. Each tab is described below.

File tab

 


The File tab contains the following options that define the basic properties of the file being created (a brief sketch after the list shows how the file name options combine):

  • Hadoop Cluster: Specify which Hadoop cluster configuration to use. You can specify information such as host names and ports for HDFS, Job Tracker, and other big data cluster components through the Hadoop Cluster configuration dialog box. Click Edit to edit an existing cluster configuration, or click New to create a new one. Once created, Hadoop cluster configuration settings can be reused by other transformation steps and job entries. See Connecting to a Hadoop cluster with the PDI client for more details on the configuration settings.
  • Folder/File: Specify the location and/or name of the output text file written to the Hadoop cluster. Click Browse to navigate to the destination file or folder in the VFS browser.
  • Create Parent Folder: Indicate whether a parent folder should be created for the output text file.
  • Do not create file at start: Avoid creating empty files when no rows are processed.
  • Accept file name from field?: Indicate whether you want to specify the file name(s) in a field in the input stream. This setting can be fine-tuned with the kettle.properties file. See Improving performance when writing multiple files.
  • File name field: Specify the field in the input stream that contains the file name(s) at runtime.
  • Extension: Add an extension to the end of the file name. The default is .txt.
  • Include stepnr in filename: Include the copy number in the file name (for example, _0) when you run the step in multiple copies (launching several copies of a step).
  • Include partition nr in file name?: Include the data partition number in the file name.
  • Include date in file name: Include the system date in the file name (for example, _20181231).
  • Include time in file name: Include the system time in the file name (for example, _235959).
  • Specify Date time format: Indicate that you want to select the date time format from the Date time format drop-down list.
  • Date time format: Specify the date time format to use.
  • Show file name(s): Display a list of the files that will be generated. The list is a simulation and depends on the number of rows that go into each file.
  • Add filenames to result: Add the file name(s) to the internal file result set.
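The following Python sketch is a rough illustration of how the naming options above can combine into a single output file name. The build_file_name helper and the ordering of the suffixes are assumptions made for this example; use Show file name(s) in the step dialog to preview the names PDI will actually generate.

    from datetime import datetime

    def build_file_name(base, extension=".txt", step_nr=None, partition_nr=None,
                        include_date=False, include_time=False, split_nr=None):
        # Hypothetical helper: the suffix order shown here is an assumption.
        name = base
        if step_nr is not None:
            name += f"_{step_nr}"            # Include stepnr in filename
        if partition_nr is not None:
            name += f"_{partition_nr}"       # Include partition nr in file name?
        now = datetime.now()
        if include_date:
            name += now.strftime("_%Y%m%d")  # Include date in file name
        if include_time:
            name += now.strftime("_%H%M%S")  # Include time in file name
        if split_nr is not None:
            name += f"_{split_nr}"           # Split every ... rows (Content tab)
        return name + extension              # Extension

    print(build_file_name("sales", step_nr=0, include_date=True))
    # For example: sales_0_20181231.txt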

Content tab

 


The Content tab contains the following options for describing the content written to the output text file (a short sketch after the list shows how the layout options shape each line):

  • Append: Append lines to the end of the specified file.
  • Separator: Specify the character that separates the fields in a single line of text, typically a semicolon (;) or a tab. Click Insert TAB to place a tab in the Separator field.
  • Enclosure: Enclose fields with a pair of the specified strings, which allows separator characters to appear within a field. This setting is optional and can be left blank.
  • Force the enclosure around fields?: Force all field names to be enclosed with the character specified in the Enclosure property.
  • Header: Indicate whether the output text file has a header row (the first line in the file).
  • Footer: Indicate whether the output text file has a footer row (the last line in the file).
  • Format: Specify the type of formatting to use, either DOS or UNIX. UNIX files have lines separated by line feeds, while DOS files have lines separated by carriage returns and line feeds.
  • Compression: Specify the type of compression (ZIP or GZIP) to use when compressing the output. Only one file is placed in a single archive.
  • Encoding: Specify the text file encoding to use. Leave blank to use the default encoding on your system. To use Unicode, specify UTF-8 or UTF-16. On first use, PDI searches your system for available encodings.
  • Right pad fields: Add spaces to the end of the fields (or remove characters at the end) until the length specified in the table under the Fields tab is reached.
  • Fast data dump (no formatting): Improve performance when dumping large amounts of data to a text file by not including any formatting information.
  • Split every ... rows: If the number N is larger than zero, split the output text file into multiple parts of N rows each.
  • Add Ending line of file: Specify an alternate ending row for the output file.
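To make the interaction of these options concrete, here is a minimal Python sketch of how one row might be rendered and written with a separator, an optional enclosure, a DOS or UNIX line ending, and GZIP compression. The render_line function, file name, and sample data are assumptions for illustration only; this is not how PDI is implemented.

    import gzip

    def render_line(fields, separator=";", enclosure="\"", force_enclosure=False):
        # Hypothetical illustration: enclose a field always (Force the enclosure
        # around fields?) or only when it contains the separator.
        out = []
        for value in fields:
            text = str(value)
            if force_enclosure or separator in text:
                text = f"{enclosure}{text}{enclosure}"
            out.append(text)
        return separator.join(out)

    rows = [["Smith; John", 42], ["Doe", 7]]
    newline = "\r\n"                          # Format: DOS ("\n" for UNIX)
    lines = ["name;age"]                      # Header row
    lines += [render_line(row) for row in rows]
    content = newline.join(lines) + newline

    # Compression: GZIP places the output in a single compressed file.
    with gzip.open("example_output.txt.gz", "wt", encoding="utf-8") as f:
        f.write(content)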

Fields tab

 

The Fields tab is where you define properties for the fields being exported. The following list describes each field property (a short sketch after the list illustrates the number formatting settings):

  • Name: The name of the field.
  • Type: The type of the field, which can be String, Date, or Number.
  • Format: An optional mask for converting the format of the original field.
  • Length: The meaning of the length depends on the field type:
      • Number: Total number of significant figures in the number.
      • String: Total length of the string.
      • Date: Length of the printed output of the string (for example, a length of four prints only the year).
  • Precision: The number of floating point digits for Number fields.
  • Currency: The symbol used to represent currencies (for example, $5,000.00 or €5.000,00).
  • Decimal: The decimal separator, which can be a period (.) as in 10,000.00 or a comma (,) as in 5.000,00.
  • Group: The grouping separator, which can be a comma (,) as in 10,000.00 or a period (.) as in 5.000,00.
  • Trim Type: The trimming method to apply to a string. Trimming only works when no field length is specified.
  • Null: If the value of the field is null, the specified string is inserted into the output text file.
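The sketch below illustrates how the Precision, Decimal, Group, and Currency settings shape a Number field. PDI itself applies a Java-style format mask; the format_number helper here is an invented approximation that only demonstrates the effect of the separator choices.

    def format_number(value, precision=2, decimal=",", group=".", currency=""):
        # Hypothetical illustration of the Fields tab number settings.
        text = f"{value:,.{precision}f}"                 # for example 5,000.00
        # Swap the US-style separators for the configured Decimal and Group characters.
        text = text.replace(",", "\0").replace(".", decimal).replace("\0", group)
        return currency + text

    print(format_number(5000, decimal=",", group=".", currency="€"))   # €5.000,00
    print(format_number(10000, decimal=".", group=",", currency="$"))  # $10,000.00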

Metadata injection support

 

All fields of this step support metadata injection except for the Hadoop Cluster field. You can use this step with ETL metadata injection to pass metadata to your transformation at runtime.