1.2.1 Vector Quantization without Memory
Unconstrained Vector Quantization
To encode a source vector, the closest codevector (the nearest neighbor) in the optimal codebook is determined. Different search methods can be used for this task.
Full Search Vector Quantization For a given source vector the distortion to each codevector in the codebook is computed. The codevector with the minimal distortion is chosen as the reproduction vector.
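A minimal sketch of a full search encoder, assuming squared Euclidean distortion; all function and variable names are illustrative, not taken from the references below:

import numpy as np

def full_search_encode(x, codebook):
    # compute the distortion of x to every codevector in the codebook
    distortions = np.sum((codebook - x) ** 2, axis=1)
    # the codevector with minimal distortion yields the reproduction vector
    return int(np.argmin(distortions))

# example: a codebook of 256 codevectors of dimension 16
rng = np.random.default_rng(0)
codebook = rng.standard_normal((256, 16))
x = rng.standard_normal(16)
index = full_search_encode(x, codebook)   # transmitted to the decoder
reproduction = codebook[index]            # decoder looks up the same table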
Fast Search Vector Quantization To speed up the search procedure only a subset of the codevectors is considered. To decide which codevectors need to be examined, an inherent property of the source vectors is exploited. Such properties are obtained by a transformation such as principal component analysis, the Walsh-Hadamard transformation, or the discrete cosine transformation, or by hyperplane partitioning. Another method uses a geometrical criterion such as the triangle inequality to exclude codevectors from the search.
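The following sketch illustrates the geometrical approach: codevectors are eliminated via the triangle inequality, assuming Euclidean distance and a table of pairwise codevector distances computed once offline. Names and sizes are illustrative.

import numpy as np

def triangle_search_encode(x, codebook, cross_dist):
    # cross_dist[i, j] holds the Euclidean distance between codevectors
    # i and j, precomputed offline
    best = 0
    best_d = np.linalg.norm(x - codebook[0])
    for j in range(1, len(codebook)):
        # triangle inequality: d(x, c_j) >= d(c_best, c_j) - d(x, c_best),
        # so c_j cannot beat c_best if d(c_best, c_j) >= 2 d(x, c_best)
        if cross_dist[best, j] >= 2.0 * best_d:
            continue
        d = np.linalg.norm(x - codebook[j])
        if d < best_d:
            best, best_d = j, d
    return best

rng = np.random.default_rng(0)
codebook = rng.standard_normal((256, 16))
cross_dist = np.linalg.norm(codebook[:, None, :] - codebook[None, :, :], axis=2)
index = triangle_search_encode(rng.standard_normal(16), codebook, cross_dist)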
Constrained Vector Quantization
To encode a source vector a constrained codebook is used; the constraint imposes a structure on the codebook that reduces search and storage complexity. In this case the chosen codevector is in general only an approximation of the nearest neighbor, resulting in a suboptimal encoding.
Vector Quantization with Memory
Predictive Vector Quantization The encoder makes a prediction of the incoming vector based on previously encoded vectors. The difference between the source vector and the prediction is formed and encoded.
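A minimal closed-loop sketch, assuming the simplest possible predictor (the previous reproduction vector) and a full search over a residual codebook; names are illustrative:

import numpy as np

def pvq_encode(vectors, codebook):
    # closed-loop predictive VQ: the residual, not the vector, is quantized
    indices = []
    prediction = np.zeros(codebook.shape[1])     # initial predictor state
    for x in vectors:
        residual = x - prediction
        j = int(np.argmin(np.sum((codebook - residual) ** 2, axis=1)))
        indices.append(j)
        # reproduction = prediction + quantized residual; using it as the
        # next prediction keeps encoder and decoder synchronized
        prediction = prediction + codebook[j]
    return indices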
Finite State Vector Quantization Each state represents a separate vector quantizer with its own codebook. The state is determined by previously encoded vectors, so the decoder can track it without side information.
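A sketch of such an encoder, assuming a next-state table next_state[state][index] fixed in advance and known to both encoder and decoder; all names are placeholders:

import numpy as np

def fsvq_encode(vectors, state_codebooks, next_state, state=0):
    # the current state selects the codebook; the chosen index drives a
    # next-state table that encoder and decoder evaluate identically
    indices = []
    for x in vectors:
        codebook = state_codebooks[state]
        j = int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))
        indices.append(j)
        state = next_state[state][j]
    return indices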
Further vector quantization methods with memory are Entropy Vector Quantization, Tree and Trellis Vector Quantization, and Adaptive Vector Quantization.
A description of the above methods can be found in Gersho and Gray [2], Gray [4], and Nasrabadi and King [3].