set bigjobthreshold
set bigjobthreshold={n | off}
This option helps determine if and when the output from a content pipeline is too large to be written to memory, and should be written to disk instead. When the number of document fragments in the document being published exceeds the threshold value n, the document is considered large enough to write the content pipeline output to disk rather than to memory. Setting n to 0 causes pipeline output to be written to disk for all publishing requests.
This set option has document scope. The default value is off.
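For example, the following ACL commands illustrate the values described in this section; the comments restate the behavior documented above:

    # Default: pipeline output is written to memory as usual.
    set bigjobthreshold=off

    # Write pipeline output to disk for every publishing request.
    set bigjobthreshold=0

    # Write pipeline output to disk when a document contains more than
    # 25 document fragments.
    set bigjobthreshold=25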
Some documents are so large that a content pipeline may not be able to write its output to memory. Writing a document into memory uses more memory than reading a document from disk into memory. Arbortext Editor and Arbortext Publishing Engine normally write pipeline output to memory, but when this threshold is exceeded they write the output for a large document to disk to minimize memory consumption.
If content pipeline output is written to disk, then Arbortext Editor or the Arbortext PE sub-process immediately reads the document from disk into memory and proceeds as if the content pipeline had written the document into memory. After the pipeline output is in memory (regardless of the method used), processing continues with other tasks, such as running the formatter, the HTML Help compiler, or some other action.
Set the level at which pipeline output is written to disk by specifying n, the threshold number of document fragments (tags, text entities, file entities, and XML inclusions) in a document. For example, if you set bigjobthreshold=25, the pipeline output for any document containing more than 25 document fragments is written to disk.
The ACL function doc_estimate_dfs can be helpful in determining what values to use for the bigjobthreshold option.
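As a rough sketch (the exact signature of doc_estimate_dfs is assumed here to take a document identifier such as the one returned by current_doc(); consult the Programmer's Reference for details), you might estimate the fragment count of a representative document and then choose a threshold below that estimate so that its pipeline output is written to disk:

    # Display an estimated document fragment count for the current document
    # (signature assumed; see the Programmer's Reference).
    eval doc_estimate_dfs(current_doc())

    # If the estimate is, say, well above 2000, a threshold of 2000 causes
    # this document's pipeline output to be written to disk.
    set bigjobthreshold=2000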
When publishing a DITA map document, the overall size of the document produced by the content pipeline is determined by the size of the DITA topics referenced from the map rather than by the size of the map itself. To account for this difference, the number of document fragments in the map is multiplied by 80 before being compared to the threshold. This approximation allows a common threshold to be used for DITA topic, DITA map, and non-DITA document publishing. As a result, for a DITA map whose output should be written to disk, set bigjobthreshold to roughly 80 times the value that doc_estimate_dfs returns for that map.
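A hypothetical worked example: if doc_estimate_dfs reports 30 document fragments for a map, the value compared against the threshold is 30 × 80 = 2400, so any threshold below 2400 causes that map's pipeline output to be written to disk:

    # Hypothetical DITA map: doc_estimate_dfs reports 30 fragments.
    # Effective count compared to the threshold: 30 * 80 = 2400.
    # A threshold below 2400 sends the map's pipeline output to disk.
    set bigjobthreshold=2000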
For more information on using Arbortext Publishing Engine for publishing, refer to the Programmer's Guide to Arbortext Publishing Engine, the Programmer's Reference, and the Content Pipeline Guide.
Related Topics