rzip64

rzip64 is a fast, interruptible program for compressing very large files.

Previous work

The rzip64 tool is based on the program rzip by Andrew Tridgell and Paul Russell. rzip, in turn, is built upon the bzip2 library.

Compressing the data in much larger blocks is the essential enhancement of rzip over bzip2. Operating on large blocks accordingly requires more RAM, which can sometimes be a problem. On the other hand, large-block compression algorithms tend to reach higher compression ratios.

Files compressed by rzip are therefore usually smaller than bzip2-compressed files.

Interruptibility

Compressing large files unfortunately takes a considerable amount of time. This is especially cumbersome for administrators, because long-running rzip jobs may conflict with routine tasks like rebooting the machine. That inconvenience inspired the development of rzip64.

Like most other common compression programs, rzip64 reads its input data block by block. With the new option -G (stop & go mode) each block can be saved separately after compression. Blocks that have already been compressed ("slices") remain valid and are available for a later resume. The slices are stored carefully, in a way that never leaves the files in an inconsistent state.
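
For illustration, a stop & go compression run might be started like this (the file name is just a placeholder taken from the measurements below):

    # compress with maximum effort (-9) in stop & go mode (-G);
    # already finished slices are kept if the run is interrupted
    rzip64 -9 -G Bkp.tar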

rzip64 can be interrupted by simply typing CTRL-C on the command line, by signals from the operating system (e.g. at shutdown/reboot), or by a kill command from a shell script.

A quite simple start script can be used to continue the compression automatically after the next reboot; rzip64 will resume the compression task exactly at the block where it was interrupted.
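
A minimal sketch of such a start script, under the assumption that resuming simply means re-running the original command on the same file (the path is a placeholder):

    #!/bin/sh
    # resume-rzip64.sh - re-run the interrupted job;
    # rzip64 continues at the first slice that is not yet finished
    rzip64 -9 -G /backup/Bkp.tar

Hooking such a script into the boot process (for example via an @reboot crontab entry or the init system of the distribution) continues the compression without manual intervention.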

This enables an interesting application for system administrators: huge backup files can be compressed during the night hours; when the users return in the morning, rzip64 can be suspended automatically and will continue again the next night.
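
One possible way to automate this is sketched below with cron; the times, paths, and the use of SIGINT are assumptions for the example, not requirements of rzip64:

    # crontab entries: start or resume the job every evening at 22:00 ...
    0 22 * * *  rzip64 -9 -G /backup/Bkp.tar
    # ... and interrupt it every morning at 06:00; completed slices are kept
    0 6 * * *   pkill -INT rzip64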

rzip64 requires no user interaction at run time. When a serious problem arises (say, running out of disk space), rzip64 stops and prints informative error messages. In any case, the original file is left untouched until the compression has completed successfully. (When disk space becomes available again, rzip64 can be restarted as usual and will resume the interrupted job.)

Parallel Operation

The slice concept enables an elegant extension for multicore CPUs. With an additional command line parameter (-j) the user can specify how many cores rzip64 may occupy.

It is nevertheless still possible to interrupt rzip64 at any time. It is even possible to restart rzip64 with a different number of cores without losing already completed slices. This enables scripts that select the number of cores according to the current system load of the machine, as sketched below.
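
A minimal sketch of such a script, assuming a Linux system where the desired core count is derived from the 1-minute load average (the heuristic itself is an example and not part of rzip64):

    #!/bin/sh
    # choose the number of cores for rzip64 from the current system load
    TOTAL=$(nproc)                                     # physically available cores
    LOAD=$(cut -d' ' -f1 /proc/loadavg | cut -d. -f1)  # integer part of the 1-minute load
    CORES=$((TOTAL - LOAD))                            # leave the busy cores alone
    [ "$CORES" -lt 1 ] && CORES=1                      # always use at least one core
    exec rzip64 -9 -G -j"$CORES" Bkp.tar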

rzip64 scales well with additional cores as long as the disk subsystem is fast enough and sufficient RAM is available; rzip64 may require up to 1 GByte of RAM per core. Processors with large caches improve the speed of rzip64 significantly, since rzip64 can be thought of as a main memory stress test. For that reason rzip64 may perform very well on NUMA systems, where each group of cores has its own local memory subsystem. Because the rzip64 processes do not need to communicate, the cores can operate completely independently of each other.

Disk space requirements

In the worst case the compressed output of rzip64 is about the same size as the input file. The filesystem must therefore provide enough space to hold both files (input and output) until the compression completes, because the original file is not removed before the output file is complete. The use of slices does not change that behaviour, but it does not require any additional space either: when the slices are merged in the final stage, each slice is removed as soon as it is no longer needed. As usual, rzip64 quits when not enough disk space is available; the run can be continued later when more space is available.

Performance Measurements

For the following measurements a complete backup of a Linux system is used; the backup tar file is 103 GByte in size. The test system is equipped with two Intel Xeon processors (E5430, 2.66 GHz, 4 cores each, no hyperthreading) and 16 GByte of RAM (FB-DIMMs). The system has a fast boot disk, but the test files are located on a single large S-ATA disk formatted with an XFS filesystem.

Command                      Run time       Final size               Compression rate
gzip -9 Bkp.tar               4:25 hours    64,151,681,574 bytes     37.8 %
bzip2 -9 Bkp.tar             10:07 hours    62,024,265,251 bytes     39.9 %
rzip64 -9 Bkp.tar            14:03 hours    55,002,636,998 bytes     46.7 %
rzip64 -9 -G Bkp.tar         14:10 hours    55,002,636,998 bytes     46.7 %
rzip64 -9 -G -j2 Bkp.tar      8:27 hours    55,002,636,998 bytes     46.7 %
rzip64 -9 -G -j3 Bkp.tar      6:15 hours    55,002,636,998 bytes     46.7 %
rzip64 -9 -G -j4 Bkp.tar      5:26 hours    55,002,636,998 bytes     46.7 %
rzip64 -9 -G -j5 Bkp.tar      4:59 hours    55,002,636,998 bytes     46.7 %
rzip64 -9 -G -j6 Bkp.tar      4:48 hours    55,002,636,998 bytes     46.7 %
rzip64 -9 -G -j7 Bkp.tar      4:41 hours    55,002,636,998 bytes     46.7 %
rzip64 -9 -G -j8 Bkp.tar      4:38 hours    55,002,636,998 bytes     46.7 %
rzip64 -9 -G -j9 Bkp.tar      5:21 hours    55,002,636,998 bytes     46.7 %
rzip64 -9 -G -j10 Bkp.tar     5:44 hours    55,002,636,998 bytes     46.7 %
rzip64 -9 -G -j11 Bkp.tar     6:04 hours    55,002,636,998 bytes     46.7 %

rzip64 performs quite well in comparison to the well-known tools gzip and bzip2. The additional overhead of the stop & go mode causes only a very slight performance penalty.

The total compression time can be reduced to less than a third by using multiple CPU cores. The more cores are in use, however, the smaller the gain from each additional core becomes. The most critical system component at large core counts is the disk subsystem; a fast RAID with an efficient filesystem can have a large performance impact.

The last lines reveal the I/O subsystem as the bottleneck: for these measurements more cores were requested than are physically available. The results show that the processes start to interfere with each other, decreasing the overall performance significantly (see benchmark details).

Finally the decompression times should not be neglected:

Command             Run time      Final size
gzip -d Bkp.gz       0:18 hours   103,214,827,520 bytes
bzip2 -d Bkp.bz2     4:18 hours   103,214,827,520 bytes
rzip64 -d Bkp.rz     3:17 hours   103,214,827,520 bytes

When it comes to decompression, gzip performs very well. On the other hand, its output is significantly larger than that of rzip64.

The comparison with bzip2 is clearly won by rzip64. The large timing difference is surprising at first, since bzip2 and rzip64 are built on the same underlying algorithms. rzip64, however, operates on much larger blocks and appears to benefit greatly from long-distance similarities. The 16 GByte of available RAM may help as well.

Open Source

rzip64 is available as open source and is covered by the most recent version of the GNU General Public License.

Although rzip64 is already used in production, there are no guarantees at all; you use rzip64 entirely at your own risk. When dealing with large data sets it is always a good idea to use checksums to ensure data integrity, for example as sketched below.
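
A simple pattern for that, using standard tools; the file names follow the examples above, and the name of the compressed file (here assumed to be Bkp.tar.rz) depends on how rzip64 names its output:

    # record a checksum of the original file before compressing it
    sha256sum Bkp.tar > Bkp.tar.sha256
    rzip64 -9 -G Bkp.tar
    # ... later, after decompression, verify the restored file
    rzip64 -d Bkp.tar.rz
    sha256sum -c Bkp.tar.sha256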

Compatibility

Files compressed by rzip64 are fully compatible with rzip and vice versa.

Compared to gzip and bzip2, rzip64 needs more RAM. If system resources are tight, it may happen that a compressed file cannot be decompressed on a smaller machine. That can be avoided by using the same system for compression and decompression.

Downloads

rzip64 is available here:

rzip64-3-0.tgz