Jeremy S. De Bonet : LZd: Lempel-Ziv differential compression






I've developed a dictionary based universal compression scheme which is able to achieve high compression rates in situations where Lempel-Ziv type techniques fail.

Lempel-Ziv techniques use string matching to reduce the code length of commonly occurring strings. This works well for text and other data drawn from a small symbol alphabet.

Because the dictionary-construction step builds up and reuses common strings quickly, for text data these methods reach bit rates that rapidly approach the true entropy of the source.
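The dictionary-building step can be sketched with a minimal LZ78-style parser (an illustrative Python sketch, not tied to any particular implementation):

```python
# A minimal LZ78-style parser, to illustrate how dictionary methods
# turn repeated substrings into short (index, symbol) codes.
def lz78_parse(data):
    """Parse `data` into (dictionary_index, next_symbol) pairs."""
    dictionary = {"": 0}          # phrase -> index; index 0 is the empty phrase
    output = []
    phrase = ""
    for symbol in data:
        if phrase + symbol in dictionary:
            phrase += symbol      # keep extending the current match
        else:
            output.append((dictionary[phrase], symbol))
            dictionary[phrase + symbol] = len(dictionary)
            phrase = ""
    if phrase:
        output.append((dictionary[phrase], None))  # flush any trailing match
    return output

pairs = lz78_parse("abababababab")
```

On the repetitive string "abababababab" the parser emits only six (index, symbol) pairs, because each new phrase extends one already in the dictionary -- this is the rapid dictionary growth that makes LZ effective on small-alphabet data.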

However, in data which can take on values over a large range (i.e. a large alphabet) -- as is the case with sounds and images -- sequences of symbols are rarely repeated exactly, but are often repeated approximately. As a result, few string matches are found by LZ techniques, and it takes a great deal of data to build up a dictionary of long phrases. LZ methods are universal, so they will eventually achieve compression at the true entropy of the source, but they can take an arbitrarily long time to do so.

Since we care about finite-length data, the rate of convergence to the true entropy is important.
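The failure of exact matching on such sources can be seen on a toy noisy sine (hypothetical parameters, Python sketch): counting the positions whose length-8 window recurs earlier in the signal shows that exact recurrences are vanishingly rare while approximate ones are abundant.

```python
import math, random

# A toy noisy-sine source (the same flavor as Figure 1): a large alphabet
# (~200 distinct values) in which repeats are approximate, never exact.
random.seed(0)
N = 400
base = [round(100 * math.sin(2 * math.pi * (i % 50) / 50)) for i in range(N)]
signal = [b + random.randint(-2, 2) for b in base]   # add small integer noise

def count_matching_positions(seq, k, tol):
    """Count positions i whose length-k window reappears somewhere earlier
    with per-sample error at most tol (tol=0 is an exact LZ-style match)."""
    hits = 0
    for i in range(k, len(seq) - k + 1):
        if any(all(abs(seq[i + t] - seq[j + t]) <= tol for t in range(k))
               for j in range(i)):
            hits += 1
    return hits

exact = count_matching_positions(signal, k=8, tol=0)    # exact repeats: rare
approx = count_matching_positions(signal, k=8, tol=4)   # approximate: abundant
```

Every window one full period back matches within the noise tolerance, so nearly all positions have an approximate match, while requiring the noise to coincide exactly at eight consecutive samples makes exact matches vanishingly unlikely.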

LZd (for Lempel-Ziv differential) is a dictionary-based universal compression scheme which uses approximate string matches to compress data such as images and sounds. At the heart of LZd is a balancing act between building long phrases which approximately match the signal and the cost of encoding the residual error of the approximate match.
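That balancing act can be illustrated with a greedy parser sketch (illustrative only -- the parameter names `max_err` and `min_len` and the greedy rule are assumptions for this example, not the published LZd algorithm): long approximate matches are kept when their residuals are small enough to be cheap to entropy-code, and literals are emitted otherwise.

```python
def approx_parse(seq, max_err=4, min_len=4):
    """Greedy sketch of an approximate-match parse: at each position, take
    the longest earlier window matching within max_err per sample, and emit
    a (position, length, residual) token; otherwise emit a literal. The
    residuals are cheap to entropy-code when max_err keeps them small."""
    tokens, i = [], 0
    while i < len(seq):
        best_j, best_len = -1, 0
        for j in range(i):                      # scan the already-coded prefix
            length = 0
            while (i + length < len(seq) and j + length < i and
                   abs(seq[i + length] - seq[j + length]) <= max_err):
                length += 1
            if length > best_len:
                best_j, best_len = j, length
        if best_len >= min_len:
            residual = [seq[i + t] - seq[best_j + t] for t in range(best_len)]
            tokens.append(("match", best_j, best_len, residual))
            i += best_len
        else:
            tokens.append(("literal", seq[i]))
            i += 1
    return tokens

tokens = approx_parse([10, 20, 30, 40, 11, 21, 29, 41])
```

Here the second half of the input is an approximate repeat of the first, so the parser emits four literals followed by a single match token whose residual [1, 1, -1, 1] carries far less entropy than the raw samples. Raising `max_err` buys longer phrases at the price of larger residuals -- exactly the trade-off described above.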


Figure 1: A toy-example source signal generated by adding noise to a sine wave. Since such a concise description exists, the Kolmogorov complexity of this source is dominated by the description of the noise. Equivalently, almost all of the entropy in this signal is contained in the noise.

Figure 2: Because the noise is uncorrelated with the underlying sine wave, standard 'dictionary based' or 'string matching' compression techniques, such as the LZ77, LZ78, and LZW schemes, cannot exploit the redundancy in this signal: there are no strings of substantial length which match exactly. The method I've developed looks only for approximate matches, and trades off string length against incurred error.

In this example, the blue curve shows the approximate string match which is computed with this method.

Figure 3: The residual error of the approximate string match versus the true signal is shown here in red, with the truly random portion of the signal (the noise) shown in grey. Within one and a half cycles of the sine wave, the bit rate converges to the true entropy of the source.


Bit rates on sound file test00.wav

Method               Bits per sample
Raw WAV file                  16.001
Symbol entropy                13.136
LZd                            7.726
shorten                        8.198
LZH                           13.856
gzip                          13.960
sq (CP/M squeeze)             14.544

Table 1: Compression rates (in bits per sample) for a sound file containing saxophone music. Even though this coding method does not use any prior information about the source, it surpasses the performance of shorten, which is designed specifically for sound signals.



Jeremy S. De Bonet
jsd@debonet.com

Page last modified on 2006-05-27
Copyright © 1997-2017, Jeremy S. de Bonet. All rights reserved.