Compressor.
VCG uses a state-of-the-art compression algorithm developed by me. It is
actually a 2-in-1 algorithm that does both compression and encoding
simultaneously, in the same phase. By compression I mean the transformation
of the data into some form that can be encoded with fewer bits than the
original; encoding is just the packing of the transformed data.
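To make this split concrete, here is a toy sketch in Python (a hypothetical
run-length example for illustration only, not VCG's algorithm): the transform
rewrites the data into a shorter symbolic form, and the encoder merely packs
that form into bytes.

    # Toy illustration of the transform/pack split (NOT VCG's algorithm).
    def rle_transform(data: bytes):
        # Transform: collapse runs of equal bytes into (count, value) pairs.
        runs, i = [], 0
        while i < len(data):
            j = i
            while j < len(data) and data[j] == data[i] and j - i < 255:
                j += 1
            runs.append((j - i, data[i]))
            i = j
        return runs

    def pack(runs) -> bytes:
        # Encode: pack each (count, value) pair into two raw bytes.
        return bytes(b for count, value in runs for b in (count, value))

    original = b"aaaaaabbbcccccccc"
    packed = pack(rle_transform(original))
    print(len(original), "->", len(packed))   # 17 -> 6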
For example, popular techniques like LZ, LZW and their variants just
transform the data by reducing the redundancy in it, but to achieve maximum
compression you need an encoder like arithmetic coding or Huffman coding to
pack it effectively.
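As an illustration, here is a minimal sketch of the classic LZW transform in
Python; note that its output is a list of integer codes which would still
need a Huffman or arithmetic coder to be packed into bits effectively.

    def lzw_compress(data: bytes):
        # Classic LZW transform: replace repeated substrings with
        # dictionary codes; packing the codes is a separate step.
        dictionary = {bytes([i]): i for i in range(256)}
        next_code = 256
        w, codes = b"", []
        for byte in data:
            wc = w + bytes([byte])
            if wc in dictionary:
                w = wc
            else:
                codes.append(dictionary[w])
                dictionary[wc] = next_code
                next_code += 1
                w = bytes([byte])
        if w:
            codes.append(dictionary[w])
        return codes

    print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))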
On the contrary, compression techniques like PPM, DMC and various other
models use arithmetic coding as a part of their algorithm, so they don't
need a separate encoding phase. LZ and its variants are almost twenty years
old and achieve less compression than PPM and other context-based models,
which are more recent and have proved to be the best.
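As a rough sketch of the modeling half of such schemes, the Python snippet
below builds an order-1 context model only; a real PPM blends several
context orders and feeds these probabilities straight into an arithmetic
coder instead of printing them.

    from collections import Counter, defaultdict

    def order1_model(data: bytes):
        # Count how often each byte follows each one-byte context.
        contexts = defaultdict(Counter)
        for prev, cur in zip(data, data[1:]):
            contexts[prev][cur] += 1
        return contexts

    model = order1_model(b"abracadabra")
    after_a = model[ord("a")]
    total = sum(after_a.values())
    for sym, n in after_a.most_common():
        print(f"P({chr(sym)!r} | 'a') = {n}/{total}")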
Unlike these techniques, my algorithm functions as an encoder for evenly
distributed data and achieves more compression for unevenly distributed
data by reducing the redundancy in one portion of the data relative to
other portions. The same can be achieved by a context-based model, but
images hardly contain any real repetitions compared to text files, so such
models perform poorly on images compared to other files.
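To illustrate what "unevenly distributed" means here (a measurement sketch
only, not VCG's algorithm), the per-block Shannon entropy below shows how
one portion of the data can be far more redundant than another.

    import math
    from collections import Counter

    def block_entropy(block: bytes) -> float:
        # Minimum bits per byte needed to code this block on its own.
        n = len(block)
        return sum(c / n * math.log2(n / c) for c in Counter(block).values())

    data = b"\x00" * 64 + bytes(range(64))   # a flat block, then a varied one
    for i in range(0, len(data), 64):
        print(f"block {i // 64}: {block_entropy(data[i:i+64]):.2f} bits/byte")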
VCG's compression algorithm is optimised for images and achieves the best
compression for all types of images. Since I haven't finished working on
this algorithm, I can't distribute its source code now, but the source code
of the other modules of this software is available for research or
educational purposes for a minimal fee.