
Proc. of Int. Conf. on Control, Communication and Power Engineering 2010

Mean-Removed Constrained Vector Quantization as Applied to Images

Sneha A. Pandya1, Arun B. Nandurbarkar2, Prof. S. M. Joshi3

1 Department of Electronics & Communication Engineering, VVP Engineering College of Technology, Vajdi Virda, Rajkot, Gujarat, India 360005. E-mail: sneha_80@yahoo.com, Ph: +91 0281 2783394
2 Department of Electronics & Communication Engineering, Government Engineering College, Bharuch, Gujarat, India 392012. E-mail: arunnandurbarkar@yahoo.com, Ph: +91 02642 227054
3 Department of Electrical Engineering, Faculty of Engineering & Technology, M.S. University, Baroda, India 390002. E-mail: sri_smj@yahoo.com

Abstract—Vector quantization (VQ) is a powerful block-coding method for lossy compression of data such as sound or images. It is typically used with sophisticated digital signal processing, where the input signal already has some form of digital representation and the desired output is a compressed version of the original. VQ exploits the statistical redundancy between pixels within an image block, or between segments of an audio stream, to reduce the bit rate. For a given bit rate, however, the codebook search complexity grows exponentially with the block size. There is also high correlation among neighboring blocks; this inter-block correlation can be exploited by incorporating memory into the VQ. VQ can be viewed as a form of pattern recognition in which an input pattern is "approximated" by one of a predetermined set of standard patterns. Direct use of such unconstrained VQ is severely limited to rather modest vector dimensions and codebook sizes in practical problems, and reducing the vector dimension sacrifices the ability to exploit the statistical dependency known to exist among samples or signal parameters. By imposing constraints on the structure of the VQ codebook, correspondingly modified algorithms and design techniques become possible. Mean-removed constrained VQ generally compromises the performance achievable with unconstrained VQ, but often provides a very useful and favorable trade-off between performance and complexity.

Keywords—GLA algorithm, MSE, PSNR, codebook optimization, mean-removed VQ.
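As a concrete illustration of the mean-removed idea (a minimal sketch, not the paper's implementation; the block size and all function names are illustrative assumptions), each image block can be split into a scalar mean, which is coded separately, and a zero-mean residual vector, which is what the VQ codebook then represents:

```python
import numpy as np

def split_blocks(image, k=4):
    """Split a grayscale image into non-overlapping k x k blocks,
    flattened into vectors of dimension k*k (illustrative helper)."""
    h, w = image.shape
    return (image[:h - h % k, :w - w % k]
            .reshape(h // k, k, w // k, k)
            .swapaxes(1, 2)
            .reshape(-1, k * k))

def mean_removed(blocks):
    """Separate each block vector into its scalar mean and a zero-mean
    residual. Only the residual is vector-quantized; the mean is coded
    separately (e.g. by a scalar quantizer)."""
    means = blocks.mean(axis=1, keepdims=True)
    residuals = blocks - means
    return means, residuals

image = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 "image"
means, residuals = mean_removed(split_blocks(image, k=4))
# Each residual vector now has zero mean; the decoder reconstructs a
# block as (quantized mean) + (residual codeword).
```

Because the means are removed, a single residual codebook can serve blocks of very different brightness, which is the source of the complexity/performance trade-off the abstract describes.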

I. INTRODUCTION

With scalar quantization, each input sample is mapped independently to an output level: the quantizer is applied uniformly to the data and makes no use of the correlation present in it. From a multidimensional point of view, using a scalar quantizer for each input restricts the output points to a rectangular grid [2]. Observing several source output values at once allows us to move the output points around. Another way of looking at this is that in one dimension the quantization regions are restricted to be intervals, and the only parameter we can manipulate is the size of these intervals. When we divide the input into vectors of some length n, the quantization regions are no longer restricted to be rectangles or squares; we are free to partition the range of the inputs in an infinite number of ways. Figures 1-4 demonstrate these conclusions: Fig. 1 shows the Lena image, Fig. 2 the distribution of its gray levels, and Figs. 3 and 4 the result of applying scalar and vector quantization to it, respectively.

© 2009 ACEEE

Fig. 1 Lena.jpg

Fig. 2 Distribution of pixels

Fig. 3 Scalar quantization

Fig. 4 Vector quantization
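The contrast drawn above can be sketched numerically (an illustrative sketch only; the synthetic data, the level values, and the tiny Lloyd/GLA routine are assumptions, not the paper's code). Quantizing each component of a correlated pair independently pins the reproduction points to a rectangular grid, while a vector quantizer trained with a few Lloyd (GLA) iterations places its reproduction points along the data's actual concentration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 2-D source: the two components of each pair are nearly equal.
x = rng.normal(size=(1000, 1))
data = np.hstack([x, x + 0.1 * rng.normal(size=(1000, 1))])

# Scalar quantization: each component is quantized independently to one of
# these levels, so the joint reproduction points form a rectangular grid.
levels = np.array([-1.5, -0.5, 0.5, 1.5])
def scalar_quantize(v):
    return levels[np.argmin(np.abs(v[:, :, None] - levels), axis=2)]

# Vector quantization: a small Lloyd/GLA iteration (nearest-neighbor
# partition, then centroid update) is free to place the N reproduction
# points anywhere in the plane.
def lloyd(samples, N=4, iters=20):
    codebook = samples[rng.choice(len(samples), N, replace=False)]
    for _ in range(iters):
        d = ((samples[:, None, :] - codebook[None]) ** 2).sum(axis=2)
        idx = d.argmin(axis=1)
        for j in range(N):
            if (idx == j).any():
                codebook[j] = samples[idx == j].mean(axis=0)
    return codebook

cb = lloyd(data, N=4)
# cb's points follow the diagonal concentration of the correlated source,
# unlike the fixed grid implied by scalar_quantize.
```

For this correlated source, most of the grid's reproduction points sit in regions the data never visits, which is exactly the inefficiency VQ removes.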

II. BASICS OF VECTOR QUANTIZATION

A. Definition and some important terms

A vector quantizer Q of dimension k and size N is a mapping from a vector (or a "point") in the k-dimensional Euclidean space R^k into a finite set C containing N output or reproduction points, called code vectors or codewords. Thus,

Q : R^k → C

where C = {y1, y2, ..., yN} is the set of N reconstruction vectors, called a codebook of size N, meaning it has N distinct elements, each a vector in R^k: yi ∈ R^k for each i ∈ J ≡ {1, 2, ..., N}. Each yi ∈ C is called a code vector. The size of the codebook is thus the number N of codewords it contains, while the input image is itself partitioned into a (typically much larger) set of input vectors, each of which Q maps to one of the yi [1].
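The mapping Q is realized in practice by a nearest-neighbor search over the codebook. A minimal sketch of this (illustrative names; the codebook values are made up for the example) is:

```python
import numpy as np

def quantize(x, codebook):
    """Q : R^k -> C. Map the input vector x to the nearest codeword
    under squared Euclidean distance, returning its index i in
    J = {0, ..., N-1} and the reproduction vector y_i."""
    d = ((codebook - x) ** 2).sum(axis=1)   # distance to each of the N codewords
    i = int(d.argmin())
    return i, codebook[i]

# Toy codebook of size N = 3 with dimension k = 2.
C = np.array([[0.0, 0.0],
              [1.0, 1.0],
              [2.0, 0.0]])
i, y = quantize(np.array([0.9, 1.2]), C)
# i == 1 and y == [1.0, 1.0]: the encoder transmits only the index i,
# and the decoder looks up y_i in its copy of C.
```

This exhaustive search is what makes unconstrained VQ expensive: its cost is O(N k) per input vector, and N grows exponentially with the bit rate times the dimension, which motivates the constrained structures discussed in this paper.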


