2.1.1 Exhaustive Search Algorithm
given the input vector x and the codebook y1, ..., yN
    dmin² = ∞
    imin = NIL
    for i = 1 to N
        di² = ||x - yi||²
        if di² < dmin² then
            dmin² = di²
            imin = i
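As an illustration, a minimal sketch of this exhaustive search in Python might look as follows; numpy is assumed for the vector arithmetic, and the names x (input vector) and codebook (N x k array of codevectors) are chosen here only for illustration.

import numpy as np

def exhaustive_search(x, codebook):
    """Return the index of the nearest codevector and its squared distortion."""
    d2_min = np.inf          # dmin^2 = infinity
    i_min = None             # imin = NIL
    for i, y in enumerate(codebook):
        diff = x - y
        d2 = float(np.dot(diff, diff))   # squared Euclidean (MSE) distortion
        if d2 < d2_min:                  # keep the best codevector found so far
            d2_min = d2
            i_min = i
    return i_min, d2_min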
The complexity can easily be derived from the algorithm. In the following table, the first column gives the values for the above algorithm and the second column the theoretical values for the mean squared error. For a codebook with N codevectors of dimension k we obtain:
                                       | above algorithm | theoretical (MSE)
number of additions S per vector       | 3Nk+N           | (2k-1)N
number of multiplications P per vector | Nk              | Nk
number of comparisons C per vector     | Nk+2N           | N-1
total                                  | (5k+3)N         | 3Nk-1
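As a quick check, the totals of the table can be evaluated for a concrete configuration; the following sketch uses N = 512 and k = 16 (the values also used in the comparison below) and divides the per-vector total by k to obtain the number of operations per pixel.

def exhaustive_ops_per_vector(N, k):
    """Totals of the table above: (5k+3)N for the algorithm, 3Nk-1 for the MSE."""
    algorithm   = (3 * N * k + N) + N * k + (N * k + 2 * N)   # = (5k+3)N
    theoretical = (2 * k - 1) * N + N * k + (N - 1)           # = 3Nk - 1
    return algorithm, theoretical

N, k = 512, 16
alg, theo = exhaustive_ops_per_vector(N, k)
print(alg, theo)        # 42496 and 24575 operations per vector
print(theo / k)         # about 1536 operations per pixel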
2.1.2 Partial Distance Search Algorithm
given the input vector x and the codebook y1, ..., yN
    dmin² = ∞
    imin = NIL
    for i = 1 to N
        d² = 0
        for j = 1 to k
            d² = d² + (xj - yij)²
            if d² ≥ dmin² then continue with the next i
        dmin² = d²
        imin = i
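Under the same assumptions as in the sketch above (numpy, a codebook of shape N x k), the partial distance search might be written as follows; the inner loop accumulates the squared distortion component by component and abandons a codevector as soon as the partial sum reaches the current minimum.

import numpy as np

def partial_distance_search(x, codebook):
    """Nearest codevector via partial distance search."""
    d2_min = np.inf
    i_min = None
    for i, y in enumerate(codebook):
        d2 = 0.0
        for j in range(len(x)):          # accumulate over the k components
            e = x[j] - y[j]
            d2 += e * e
            if d2 >= d2_min:             # partial distance already too large:
                break                    # eliminate codevector i early
        else:                            # no break: full distance computed and smaller
            d2_min = d2
            i_min = i
    return i_min, d2_min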
The complexity now depends on kC, the average number of vector components evaluated per codevector before it is eliminated; the second column again gives the values of the exhaustive search for comparison:

                                       | partial distance search | exhaustive search
number of additions S per vector       | 2(k+(N-1)kC)            | (2k-1)N
number of multiplications P per vector | k+(N-1)kC               | Nk
number of comparisons C per vector     | k+(N-1)kC               | N-1
total                                  | 4(k+(N-1)kC)            | 3Nk-1
2.2.1 Triangular Inequality Elimination
The variety of triangular inequality elimination methods permits only a general description, which can be subdivided into the steps approximation, elimination and search. In the following, this is illustrated for the method described above.
given the input vector x and the codebook y1, ..., yN
    precalculation: the intercodevector distances d(yi, yj) are computed once beforehand
    approximation: determine an initial codevector yi and its distance d(x, yi)
    elimination: using the triangular inequality, discard every codevector yj that cannot be closer to x than yi
    search: exhaustive search over the NS remaining codevectors
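Since only this general description is available, the following Python sketch shows just one possible variant of the idea and not necessarily the method described above: the intercodevector distances are precalculated, a start index (here simply passed in as i_start) serves as the approximation, every codevector that by the triangular inequality cannot be closer than the start codevector is eliminated, and the NS survivors are searched exhaustively.

import numpy as np

def precalculate_distances(codebook):
    """Precalculation: matrix of Euclidean distances d(yi, yj) between all codevectors."""
    diff = codebook[:, None, :] - codebook[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def tie_search(x, codebook, dists, i_start=0):
    """Triangular inequality elimination (illustrative variant only)."""
    # approximation: distance of x to the start codevector
    d_start = float(np.linalg.norm(x - codebook[i_start]))
    # elimination: if d(y_start, yj) > 2 d(x, y_start), then yj cannot be
    # closer to x than y_start (triangular inequality); keep the rest
    survivors = np.nonzero(dists[i_start] <= 2.0 * d_start)[0]
    # search: exhaustive search over the NS surviving codevectors
    i_min, d2_min = i_start, d_start ** 2
    for i in survivors:
        diff = x - codebook[i]
        d2 = float(np.dot(diff, diff))
        if d2 < d2_min:
            d2_min, i_min = d2, i
    return i_min, d2_min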
                                       | fixed part      | variable part
number of additions S per vector       | 3k-2            | (2k-1)NS
number of multiplications P per vector | 2k              | NSk
number of comparisons C per vector     | N+log2(N)       | NS-1
total                                  | N+5k-2+log2(N)  | 3NSk-1
2.2.2 Vector-to-Scalar Mapping Elimination
As above, the variety of vector-to-scalar mapping elimination methods permits only a general description, which can be subdivided into the steps mapping and search. In the following, this is illustrated for the method described above.
given the input vector x and the codebook y1, ..., yN
    precalculation: the codevectors are mapped to scalar values and sorted once beforehand
    mapping: map x to its scalar value and locate its position in the sorted list
    search: exhaustive search over the NM codevectors in the neighborhood of this position
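Again only as an illustrative sketch (the component sum is used as an example mapping and the neighborhood size NM is a free parameter; neither is necessarily the choice of the method described above), the elimination could look as follows in Python.

import bisect
import numpy as np

def vsm_precalculate(codebook):
    """Precalculation: map every codevector to a scalar value and sort."""
    scalars = codebook.sum(axis=1)           # example mapping: component sum
    order = np.argsort(scalars)
    return scalars[order], order

def vsm_search(x, codebook, sorted_scalars, order, n_m=8):
    """Vector-to-scalar mapping elimination (illustrative variant only)."""
    s = float(x.sum())                                    # mapping of the input vector
    pos = bisect.bisect_left(sorted_scalars.tolist(), s)  # about log2(N) comparisons
    lo = max(0, pos - n_m // 2)                           # neighborhood of
    hi = min(len(order), lo + n_m)                        # NM candidates
    i_min, d2_min = None, np.inf
    for idx in order[lo:hi]:                              # neighborhood search
        diff = x - codebook[idx]
        d2 = float(np.dot(diff, diff))
        if d2 < d2_min:
            d2_min, i_min = d2, int(idx)
    return i_min, d2_min

A call would first build the tables once per codebook, e.g. sorted_scalars, order = vsm_precalculate(codebook), and then reuse them for every input vector.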
Again, in view of the general description, the complexity can only be estimated. As we have seen, the vector-to-scalar mapping elimination method contributes a fixed and a variable part to the complexity, where the fixed part covers the mapping and the variable part covers the neighborhood search.
                                       | fixed part      | variable part
number of additions S per vector       | 3k-2            | (2k-1)NM
number of multiplications P per vector | 2k              | NMk
number of comparisons C per vector     | log2(N)         | NM-1
total                                  | 5k-2+log2(N)    | 3NMk-1
Theoretically the above methods are optimal, but for practical applications a suboptimal encoding is often sufficient. To reduce the computational complexity further, the search range in the codebook is limited; the size of this range depends on the acceptable quality degradation.
We now compare the different methods by their theoretically attainable lower limit of complexity. To emphasize the gain in complexity reduction obtained by passing from one method to another, some experimental results found in the published reports are also stated. These values are only meant to illustrate the range of attainable complexities, so we omit a citation for each source; the values can be found in the proceedings mentioned above.
For the exhaustive search the complexity is dominated by the size of the codebook N, so choosing a larger codebook immediately increases the number of operations per pixel. For N = 512 and k = 16 we obtain about 1536 operations per pixel. The partial distance search algorithm shows a non-linear increase of complexity, which is considerably more pronounced for a large codebook size and a small dimension. The triangular inequality elimination method has the same shape, but reduced by a factor of ten. The vector-to-scalar mapping elimination method again has the same shape, reduced by a further factor of ten compared to the triangular inequality elimination method.
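The comparison above can be reproduced from the totals of the preceding tables; the following sketch evaluates the operations per pixel for the four methods, where the values chosen for kC, NS and NM are purely illustrative assumptions and not taken from any of the cited reports.

import math

def ops_per_pixel(N, k, k_c, n_s, n_m):
    """Totals of the preceding tables, divided by k to obtain operations per pixel.

    k_c -- assumed average number of components evaluated per codevector (partial distance)
    n_s -- assumed number of codevectors surviving the triangular inequality elimination
    n_m -- assumed neighborhood size of the vector-to-scalar mapping elimination
    """
    exhaustive = (3 * N * k - 1) / k
    partial    = 4 * (k + (N - 1) * k_c) / k
    tie        = (N + 5 * k - 2 + math.log2(N) + 3 * n_s * k - 1) / k
    vsm        = (5 * k - 2 + math.log2(N) + 3 * n_m * k - 1) / k
    return exhaustive, partial, tie, vsm

print(ops_per_pixel(N=512, k=16, k_c=4, n_s=50, n_m=8))
# the first value is about 1536 for N = 512 and k = 16, as stated above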
The above graphs show the lower complexity limits of the last three methods; they can therefore only be regarded as a coarse estimate. In practical applications there is always additional complexity in the form of the exhaustive search algorithm. How the exhaustive search graph has to be weighted and superposed is not yet known, but it differs for each method and is more significant for the partial distance search algorithm than for the two fast search methods. In the next chapter we try to find such a weighting function for the partial distance search algorithm.