ROOT I/O study for ATLAS LAr-style event objects (part 2)

This is a continuation of a previous study. It examines the effect of using ROOT TClonesArrays in the event structure and compares different compression levels.

The persistent event class LArRootEvent described in the previous study stored hits in a ROOT container class called TOrdCollection. However, the ROOT Users Guide suggests that for a collection of dynamically-allocated constantly-changing objects, a TClonesArray is a more efficient container class in both execution time and I/O speed.
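To illustrate why TClonesArray is recommended, here is a minimal sketch of the standard filling idiom, using ROOT's "new with placement" so that hit objects are constructed in pre-allocated memory that is reused from event to event. The LArHit class, its data members, and the macro name are hypothetical stand-ins for the actual classes used in this study.

    // hitSketch.C -- minimal sketch of the TClonesArray filling idiom.
    // "LArHit" is a hypothetical stand-in for the real hit class; compile
    // with ACLiC (.L hitSketch.C+) so that ROOT generates the I/O dictionary.
    #include "TObject.h"
    #include "TClonesArray.h"

    class LArHit : public TObject {
    public:
       LArHit(Double_t e = 0, Double_t t = 0) : fEnergy(e), fTime(t) {}
    private:
       Double_t fEnergy;   // deposited energy
       Double_t fTime;     // hit time
       ClassDef(LArHit,1)  // required for ROOT persistency
    };
    ClassImp(LArHit)

    void hitSketch()
    {
       // Allocate the array once; its memory slots are reused for every event.
       TClonesArray* hitArray = new TClonesArray("LArHit", 10000);
       TClonesArray& hits = *hitArray;

       for (Int_t event = 0; event < 3; ++event) {
          Int_t nHits = 100;                  // randomly chosen in the real study
          for (Int_t i = 0; i < nHits; ++i) {
             // "new with placement": construct each hit in a pre-allocated slot,
             // avoiding one heap allocation (and later deletion) per hit.
             new (hits[i]) LArHit(1.0, 0.0);
          }
          // ... convert / write the event here ...
          hitArray->Clear();                  // empty the slots, keep the memory
       }
    }

By contrast, a TOrdCollection of hits requires a separate new and delete for every hit in every event, which is the overhead the Users Guide warns about.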

In the course of this study, it was also noted that there is a trade-off between I/O speed and file size depending on the choice of compression level. As of ROOT 3.00/06, only three compression levels have any meaning: 0 (no compression), 1, and 2.
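For reference, the compression level is chosen when the output TFile is created; the file name and title below are illustrative, not those used in the study.

    #include "TFile.h"

    void openOutput()
    {
       // The fourth constructor argument is the compression level:
       // 0 = no compression; higher values trade CPU time for a smaller file.
       TFile* outFile = new TFile("larEvents.root", "RECREATE",
                                  "LAr event study", 1);

       // The level can also be changed after the file has been opened:
       outFile->SetCompressionLevel(1);
    }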

This study used the LArRootEvent class from the previous study, with the TOrdCollections of primaries and hits replaced by TClonesArrays. The procedure for randomly generating event content was unchanged, and in all other respects the I/O methods were the same; note that TDataSets use a different persistence scheme that does not involve TClonesArrays.
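As a hedged sketch of the write side, the event object might be attached to a TTree branch as shown below. The branch name, buffer size, and the guess that the Tree0 and Tree1 schemes in the tables below differ by the TTree split level are illustrative assumptions; LArRootEvent is the persistent event class described above and must already have a ROOT dictionary.

    #include "TFile.h"
    #include "TTree.h"

    void writeEvents(Int_t splitLevel)       // e.g. 0 or 1 (assumed)
    {
       TFile outFile("larEvents.root", "RECREATE", "LAr event study", 1);
       TTree* tree = new TTree("T", "LArRootEvent tree");

       LArRootEvent* event = new LArRootEvent();
       tree->Branch("event", "LArRootEvent", &event, 64000, splitLevel);

       for (Int_t i = 0; i < 10000; ++i) {
          // ... generate and convert the event as in the study ...
          tree->Fill();         // write this event's data to the tree buffers
          event->Clear();       // reset the TClonesArrays for the next event
       }

       tree->Write();
       outFile.Close();
    }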

The study was performed for 100, 1000, and 10,000 events per file, and repeated with compression levels 0, 1, and 2. For brevity, only the most important case is reported below: 10,000 events with compression level 1. The complete set of results is available in a spreadsheet in both StarOffice Calc (.sdc) and Microsoft Excel 97 (.xls) formats.

These tests were run using ROOT 3.00/05, compiled with gcc 2.95.2, and executed on a dual-processor 400 MHz Intel Pentium III. The individual phases were timed with ROOT's TStopwatch class; the Write and Read phases (described in the previous study) include the time spent in ROOT's I/O compression. Note that the times for reading TDataSets are not consistent with the previous study; no explanation is available at present.
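The TStopwatch usage is straightforward; a sketch of how one phase might be timed (the CPU times are what appear in the tables below, and the function name is illustrative):

    #include "TStopwatch.h"
    #include <iostream>

    void timePhase()
    {
       TStopwatch timer;
       timer.Start();

       // ... one program phase: Generate, Convert, Write, Read, or Histogram ...

       timer.Stop();
       std::cout << "CPU time:  " << timer.CpuTime()  << " s" << std::endl
                 << "Real time: " << timer.RealTime() << " s" << std::endl;
    }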

Output program phases
Time for 10,000 events (in CPU seconds):

Scheme         Generate   Convert   Write     Total   File Size
Tree0            300        196      1217      1814     1.1G
Tree1            301        198       889      1496     1.1G
Object           313        203      1265      1889     1.1G
TDataSet         314        103      1330      1858     1.2G
TDataSet-Tree    300         97      1262      1760     1.2G


Input program phases
Time for 10,000 events (in CPU seconds):

Scheme          Read      Convert  Histogram  Total
Tree0           1126          0        43      1170
Tree1            306          0        44       350
Object          1122          0        46      1271
TDataSet         757        824        53      1737
TDataSet-Tree    797        893        49      1739
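For completeness, a hedged sketch of the input side follows. The file, tree, and branch names are assumptions carried over from the sketches above, and the Convert and Histogram phases are only indicated by comments since they depend on the actual LArRootEvent interface.

    #include "TFile.h"
    #include "TTree.h"

    void readEvents()
    {
       TFile inFile("larEvents.root");
       TTree* tree = (TTree*) inFile.Get("T");

       LArRootEvent* event = 0;                // persistent event class above
       tree->SetBranchAddress("event", &event);

       Int_t nEntries = (Int_t) tree->GetEntries();
       for (Int_t i = 0; i < nEntries; ++i) {
          tree->GetEntry(i);                   // the Read phase
          // ... unpack the TClonesArrays and fill histograms
          //     (the Convert and Histogram phases) ...
       }
       inFile.Close();
    }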

For reference purposes, the source code used in this study is available here (in tar.gz format). Note that the code (the ".at" scripts in particular) will not execute unchanged outside of the Nevis Linux cluster.

