Hybrid Compression of Medical Images based on Huffman and LPC for Telemedicine Application


IJIRST – International Journal for Innovative Research in Science & Technology | Volume 1 | Issue 6 | November 2014 | ISSN (online): 2349-6010

Neelesh Kumar Sahu
Assistant Professor (SSEC)
Department of Electronics and Telecommunication
Shri Shankaracharya Technical Campus, Bhilai, India

Chandrashekhar Kamargaonkar
Associate Professor (SSGI)
Department of Electronics and Telecommunication
Shri Shankaracharya Technical Campus, Bhilai, India

Dr. Monisha Sharma
Professor
Department of Electronics and Telecommunication
Shri Shankaracharya Technical Campus, Bhilai, India

Abstract
The demand for handling images in digital form has increased dramatically in recent years. Image compression is an effective technique for reducing the amount of information required to represent an image; after compression, the image size is reduced. The objective of image compression is to reduce the amount of digital image data and thereby reduce cost, storage requirements and transmission time. Image compression plays a key role in many important applications, such as image databases, image digitization, the security industry and the health industry. This paper presents a procedure that employs both lossless and lossy compression in a combined manner to achieve an effective compression ratio with a low error rate. In the proposed method, the Huffman encoding technique is merged with LPC to improve the compression ratio. First, a medical (MRI) image is separated into two parts, the ROI (Region of Interest) and the NROI (Non-ROI); the two regions are then coded individually using Huffman coding and LPC. Huffman coding provides a tree-based scheme for lossless compression of the ROI, while LPC is used for lossy compression of the NROI. The experimental results show that a better Signal-to-Noise Ratio (SNR) with an acceptable Compression Ratio (CR) is achieved using the hybrid scheme based on Huffman and LPC; the algorithm also shows good robustness.

Keywords: Huffman, LPC, Lossy, Lossless, Compression.

I. INTRODUCTION

A. Lossless vs. Lossy Compression Techniques
In lossless compression techniques, the decompressed image is numerically identical in every respect to the original image; however, lossless compression can only achieve a modest amount of compression. A medical image reconstructed after lossy compression contains degradation relative to the original, because the compression scheme discards redundant information entirely. However, lossy schemes are able to achieve much higher compression ratios, and under normal viewing conditions no visible loss is perceived.

B. Need for Image Compression
Medical images transmitted over the Internet are an excellent example of why data compression is important. Suppose we need to download a digitized color image over a computer's 43.7 kbps modem. If the medical image is not compressed, it will contain about 700 kilobytes of data. When this image is compressed with a lossless scheme, it shrinks to about one half of its size, or 350 kilobytes. When a lossy compression method is used instead (such as a JPEG file), it will be about 100 kilobytes in size. The download times for these three equivalent files are about 152 seconds, 81 seconds and 17 seconds, respectively, which is a big difference. The JPEG technique is the best option for digital photographs, whereas the GIF method is used for drawn images, such as company logos that have large areas of a single color.

In hybrid compression schemes, the medical image is partitioned into diagnostic and non-diagnostic regions. The diagnostic part is termed the Region of Interest (ROI) and the non-diagnostic part the non-ROI. Lossless and lossy compression techniques are applied to the ROI and non-ROI parts, respectively. This allows accurate reconstruction of the ROI without any loss of information. Consequently, the overall compressed medical image can have a higher PSNR than lossy methods and a better CR than lossless techniques.
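As a conceptual illustration of the hybrid ROI/NROI scheme, the sketch below splits an image along a binary ROI mask and codes the two parts separately. The zlib call stands in for the lossless stage and a coarse quantization for the lossy stage; the paper itself uses Huffman coding for the ROI and LPC for the NROI, so every name here is an illustrative placeholder rather than the paper's implementation.

```python
import numpy as np
import zlib

def hybrid_compress(image, roi_mask):
    """Split an 8-bit image into ROI/NROI (roi_mask is boolean) and code each part separately.
    zlib is a lossless stand-in for Huffman coding; quantization is a lossy stand-in for LPC."""
    roi_stream = zlib.compress(image[roi_mask].astype(np.uint8).tobytes())   # diagnostic region, kept exact
    nroi = np.where(roi_mask, 0, image).astype(np.uint8)
    nroi_stream = zlib.compress((nroi // 8).tobytes())                       # non-diagnostic region, coarse
    return roi_stream, nroi_stream

def hybrid_decompress(roi_stream, nroi_stream, roi_mask):
    """Reassemble the image: the ROI is restored exactly, the NROI only approximately."""
    nroi = np.frombuffer(zlib.decompress(nroi_stream), dtype=np.uint8).reshape(roi_mask.shape)
    restored = nroi.astype(np.uint8) * 8                                      # undo the coarse quantization
    restored[roi_mask] = np.frombuffer(zlib.decompress(roi_stream), dtype=np.uint8)
    return restored
```

In this paper the mask corresponding to `roi_mask` comes from the ROI extraction stage described in Section III (manual segmentation of the liver region).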




II. AN OVERVIEW

A. Huffman Coding
A Huffman code is an optimal prefix code found using the algorithm introduced by David A. Huffman while he was a Ph.D. student at MIT, and published in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes". The process of finding and/or using such a code is known as Huffman coding and is a common technique in entropy encoding, including lossless medical image compression. The algorithm's output can be viewed as a variable-length code table for encoding source symbols. Huffman's algorithm derives this table from the estimated probability or frequency of occurrence (weight) of each possible value of the source symbol. The algorithm is based on the idea that a variable-length code should use the shortest code words for the most likely symbols and the longest code words for the least likely symbols; in this manner, the mean code length is reduced. The algorithm assigns code words to symbols by constructing a binary coding tree. Every symbol of the alphabet is a leaf of the coding tree, and the code of a given symbol corresponds to the unique path from the root to that leaf, with a 0 or 1 appended to the code for each edge along the path, depending on whether the left or right child of a node occurs next along the path. The definition of the information content is quite simple and is expressed in terms of a discrete set of probabilities P(x_i):

I = − Σ_{i=1}^{n} P(x_i) log₂ P(x_i)

where n is the number of different outcomes, I is the total information obtained from the n outcomes, and P(x_i) is the probability of the i-th outcome.

Advantages:
(1) The algorithm is simple.
(2) Huffman coding is optimal in the information-theoretic sense when the probability of each input symbol is a negative power of two.

Limitations:
(1) Huffman coding is designed to code single symbols only; therefore at least one bit is needed per symbol, e.g. a word of 8 characters requires at least an 8-bit code.
(2) It is not suitable for strings of different lengths or for symbol probabilities that change with context.
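As a small check of the formula above, the sketch below computes I for an assumed, purely illustrative symbol distribution; when every probability is a negative power of two, a Huffman code achieves exactly this average length, which is the optimality condition listed under the advantages.

```python
import math

def information(probabilities):
    """I = -sum(p_i * log2(p_i)) over the non-zero probabilities (bits per symbol)."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Hypothetical pixel-value distribution, chosen only for illustration.
probs = [0.5, 0.25, 0.125, 0.125]
print(information(probs))  # 1.75 bits/symbol: the lower bound on the mean Huffman code length
```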




Fig. 1: Huffman algorithm flow chart

B. Predictive Coding
Predictive coding is an image compression technique which uses a compact model of an image to predict pixel values based on the values of neighboring pixels. A model of an image is a function model(x, y) which computes (predicts) the pixel value at coordinate (x, y), given the values of some neighbors of pixel (x, y), where neighbors are pixels whose values are already known. Typically, when processing a medical image in raster-scan order (left to right, top to bottom), neighbors are selected from the pixels above and to the left of the current pixel. For example, a common set of neighbors used for predictive coding is {(x−1, y−1), (x, y−1), (x+1, y−1), (x−1, y)}. Linear predictive coding is a simple, special case of predictive coding in which the model simply takes an average of the neighboring values.

Fig. 2: Block diagram of Predictive coding

Suppose that we have a perfect model of an image, i.e., one which can perfectly reconstruct an image given the pixel value of the border pixels (assuming we process the pixels in raster order). Then, the value of the border pixels and this compact model is all that needs to be transmitted in order to transmit the whole information content of the image. In general, it is not possible to




generate a compact, perfect model of an image, and the model therefore produces an error signal (the difference, at each pixel, between the value predicted by the model and the actual value of the pixel in the original image). There are two expected sources of compression in predictive-coding-based image compression (assuming that the predictive model is accurate enough). First, the error signal for each pixel should have a smaller magnitude than the corresponding pixel in the original image, therefore requiring fewer bits to transmit. Second, the error signal should have less entropy than the original message, since the model should remove many of the "principal components" of the image signal. To complete the compression, the error signal is compressed using an entropy coding algorithm such as arithmetic coding. If we transmit this compressed/encoded error signal together with the model and all other peripheral information, then a receiver can reconstruct (decompress) the original medical image by applying an analogous decoding procedure.
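The two compression sources above can be illustrated with a small sketch (not from the paper) that forms the error image with a simple causal neighbor-average predictor and compares the empirical entropy of the original image with that of the error signal; the three-neighbor average used here is an assumed simplification of the neighbor set given earlier.

```python
import numpy as np

def entropy_bits(values):
    """Empirical Shannon entropy (bits/symbol) of an integer array."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def prediction_error(img):
    """Error image for a causal neighbor-average predictor (border pixels are kept as-is)."""
    img = img.astype(np.int16)          # avoid overflow when subtracting
    err = img.copy()
    for x in range(1, img.shape[0]):
        for y in range(1, img.shape[1]):
            pred = (img[x-1, y-1] + img[x-1, y] + img[x, y-1]) // 3   # average of causal neighbors
            err[x, y] = img[x, y] - pred
    return err

# img = an 8-bit grayscale medical image as a NumPy array
# print(entropy_bits(img), entropy_bits(prediction_error(img)))  # the error entropy is typically lower
```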

III. THE PROPOSED METHOD

Fig. 3: Proposed Hybrid method

A. Image Feature Extraction
Texture is characterized by the spatial distribution of gray levels in a neighborhood. A medical image region has a uniform texture when a set of its local properties in that area is constant, gradually changing or approximately periodic. Texture analysis is one of the principal techniques used in image analysis. There are three primary steps in texture analysis: classification, segmentation and shape retrieval from texture. Texture analysis requires the identification of proper attributes or features that differentiate the textures of the image. In this paper, texture segmentation is carried out by comparing the co-occurrence-matrix features Contrast and Energy of size N × N, derived from the discrete wavelet transform of overlapping but adjacent sub-images C_{i,j} of size 4 × 4, both horizontally and vertically. The feature-extraction algorithm involves:
(1) Decomposition, using a one-level DWT with the Haar transform, of each sub-image C_{i,j} of size 4 × 4, taken from the top-left corner.
(2) Computation of the co-occurrence-matrix features Energy and Contrast, given in Eqs. (5) and (6), from the detail coefficients obtained from each sub-image C_{i,j}.
(3) Forming new feature matrices.

Energy = Σ_i Σ_j [p(i, j)]²      (5)
Contrast = Σ_i Σ_j (i − j)² p(i, j)      (6)

where p(i, j) is the (i, j)-th entry of the normalized co-occurrence matrix.
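Steps (1) and (2) of the feature-extraction algorithm can be sketched as follows. This is an illustrative implementation only: it assumes the standard co-occurrence definitions of Energy and Contrast given above, uses PyWavelets (pywt.dwt2) for the one-level Haar DWT and scikit-image (≥ 0.19) for the co-occurrence matrix, and quantizes the detail coefficients to 8 levels; none of these implementation choices is specified in the paper.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def block_features(block_4x4):
    """Energy and Contrast of a 4x4 sub-image, computed from its Haar detail coefficients."""
    _, (cH, cV, cD) = pywt.dwt2(block_4x4, 'haar')            # one-level Haar DWT
    detail = np.abs(np.concatenate([cH, cV, cD], axis=1))
    levels = 8
    bins = np.linspace(detail.min(), detail.max() + 1e-9, levels)
    q = (np.digitize(detail, bins) - 1).astype(np.uint8)       # quantize coefficients to 0..levels-1
    glcm = graycomatrix(q, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    energy = float(graycoprops(glcm, 'energy')[0, 0])          # note: scikit-image returns sqrt(Σ p²)
    contrast = float(graycoprops(glcm, 'contrast')[0, 0])      # Σ (i-j)² p(i,j), matching Eq. (6)
    return energy, contrast
```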




B. Image Region of Interest Extraction
The Region of Interest (ROI) plays a major role in the image: it is the most important part of the image content. The same image may have different ROIs because of the different demands and backgrounds of the examiner. In abdominal medical images, the doctor diagnosing a disease is often concerned only with the liver area; hence, the region-of-interest extraction technique segments the liver from the abdominal medical image and applies lossless coding to it. In this paper the region of interest (the liver) is extracted manually.

C. Image ROI Coding Based on Huffman
The Huffman coding algorithm first orders the source symbols according to their probabilities. It then combines the two symbols with the smallest probabilities into a new symbol and adds it to the next source reduction. For a given set of symbol probabilities, Huffman coding produces the best code, subject to the condition that each symbol is encoded one at a time. After code generation, decoding can be completed with a simple look-up table.

D. Huffman Compression for R, G and B
In the proposed compression method, segmentation is applied before Huffman compression, since it yields the ROI and NROI regions. In segmentation, compression is achieved by mapping a range of values to a single quantum value: when the number of distinct symbols in a given stream is minimized, the stream becomes more compressible. After the image is quantized, Huffman compression is applied. The Huffman scheme uses a variable-length code table for encoding each symbol of the medical image, where the code table is derived from the estimated probability of occurrence of each possible value of the source symbol. Huffman's method of choosing the representation for each symbol results in a prefix code: the most common source symbols are represented with shorter bit strings than the less common symbols. In this manner, a compressed image is obtained.

E. Development of the Huffman Coding and Decoding Algorithm
Step 1 - Read the MRI image into the workspace of the MATLAB software.
Step 2 - Transform the given colour image into a grey-level image.
Step 3 - Call a function which finds the symbols (i.e. the pixel values that are non-repeated).
Step 4 - Call a function that calculates the probability of every symbol.
Step 5 - Arrange the probabilities of the symbols in descending order; merge the lowest probabilities, and continue this step until only two probabilities are left. Codes are assigned according to the rule that the most probable symbol receives the shortest code.
Step 6 - Perform Huffman encoding, i.e. map the code words to the corresponding symbols, which results in the compressed data.
Step 7 - Reconstruct the original image, i.e. perform decoding using Huffman decoding.
Step 8 - Create a tree identical to the encoding tree.
Step 9 - Read the encoded input symbol by symbol and traverse the table (Table 2) until the final (leaf) element is reached.
Step 10 - Output the character encoded at that leaf, go back to the root, and repeat Step 9 until the symbols for all codes are known.
Huffman coding is an entropy-based technique that relies on an analysis of the frequency of symbols in a string.
Huffman coding can be demonstrated most vividly by compressing a raster image. Suppose we have a 5×5 raster medical image with 8-bit colors, i.e. 256 distinct colors. The uncompressed medical image will take 5 x 5 x 8 = 200 bits of storage.

First, we count how many times each color occurs in the image. Then we sort the colors in descending order of frequency, obtaining a row of colors ordered from most to least frequent.

Now we combine the colors by building a tree such that the colors farthest from the root are the least frequent. The colors are combined in pairs, with a node forming the connection; a node can be connected either to another node or to a color.




Our result is known as a Huffman tree. This tree can be used for encoding and decoding of images. Each color is encoded as follows: we create codes by moving from the root of the tree to each color; if we move right at a node, we write a 1, and if we move left, a 0. This process yields a Huffman code table in which each symbol is assigned a bit code such that the most frequently occurring symbol has the shortest code, while the least common symbol is given the longest code.

The most frequently occurring color, white, will be represented by just a single bit rather than 8 bits. Black will be represented by two bits, and red and blue by three bits each. After these replacements are made, the 200-bit medical image is compressed to 14 x 1 + 6 x 2 + 3 x 3 + 2 x 3 = 41 bits, which is about 5 bytes compared to 25 bytes for the original medical image. Of course, to decode (decompress) the medical image, the compressed file must include the code table, which takes up some space. Each bit code derived from the Huffman tree unambiguously identifies a color, hence the compression loses no information.
Table - 1 Huffman Source Reduction

Table - 2 Huffman Codeword Assignment
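The 5×5 example can be reproduced with a short Huffman implementation. The color counts (white 14, black 6, red 3, blue 2) are those of the example above; the code itself is an illustrative sketch, not the paper's MATLAB routine.

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman code table {symbol: bitstring} from symbol frequencies."""
    # Each heap entry is [total weight, tie-break counter, {symbol: partial code}].
    heap = [[w, i, {s: ''}] for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)                      # two least-frequent subtrees
        hi = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in lo[2].items()}
        merged.update({s: '1' + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], counter, merged])
        counter += 1
    return heap[0][2]

freqs = {'white': 14, 'black': 6, 'red': 3, 'blue': 2}   # counts from the 5x5 example
codes = huffman_codes(freqs)
total_bits = sum(freqs[s] * len(codes[s]) for s in freqs)
print(codes)       # e.g. white -> '1', black -> '01', red -> '001', blue -> '000'
print(total_bits)  # 41 bits, matching 14*1 + 6*2 + 3*3 + 2*3
```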

F. NROI Coding Based on LPC
Predictive coding is based on eliminating the redundancies of closely spaced pixels – in space and/or in time – by extracting and coding only the new information in each pixel. The new information is defined as the difference between the actual and the predicted value of the pixel.




Fig. 4: Predictive coding technique

In a predictive coding system, the predictor generates the expected value of each sample based on a specified number of past samples. The predictor's output is rounded to the nearest integer and is used to compute the prediction error

e(n) = f(n) − f̂(n)

The prediction error is encoded by a variable-length code to generate the next element of the encoded data stream. The decoder reconstructs e(n) from the encoded data and performs the inverse operation

f(n) = e(n) + f̂(n)

Various local, global and adaptive methods can be used to generate f̂(n). Often, the prediction is formed as a linear combination of m previous samples:

f̂(n) = round[ Σ_{i=1}^{m} α_i f(n − i) ]

where m is the order of the linear predictor, α_i (i = 1, …, m) are the prediction coefficients, and f(n) are the input pixels. The m samples used for prediction can be taken from the current scan line (1-D linear predictive coding, LPC), from the current and previous lines, or from the current image and the previous images in an image sequence. The 1-D LPC is

f̂(x, y) = round[ Σ_{i=1}^{m} α_i f(x, y − i) ]

which is a function of the previous pixels in the current line. Note that the prediction cannot be formed for the first m pixels of each line; these pixels are coded by other means. The compression achieved in predictive coding is related to the entropy reduction resulting from mapping an input image into a prediction-error sequence: the pdf of the prediction error is highly peaked at 0 and has a relatively small variance compared to the input image. It is often modeled by a zero-mean uncorrelated Laplacian pdf

p_e(e) = (1 / (√2 σ_e)) exp(−√2 |e| / σ_e)

where σ_e is the standard deviation of e.
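A minimal 1-D LPC round trip for one scan line is sketched below. It assumes a first-order predictor (m = 1, α₁ = 1, i.e. each pixel is predicted from the previous pixel on the line), which is an illustrative choice rather than the paper's configuration.

```python
import numpy as np

def lpc_encode_row(row, m=1, alpha=(1.0,)):
    """Prediction errors e(y) = f(y) - round(sum_i alpha_i * f(y-i)) for one scan line.
    The first m pixels are transmitted unchanged ('coded by other means')."""
    row = row.astype(np.int32)
    err = row.copy()
    for y in range(m, len(row)):
        pred = int(round(sum(alpha[i] * row[y - 1 - i] for i in range(m))))
        err[y] = row[y] - pred
    return err

def lpc_decode_row(err, m=1, alpha=(1.0,)):
    """Invert the prediction: f(y) = e(y) + round(sum_i alpha_i * f(y-i))."""
    row = err.astype(np.int32).copy()
    for y in range(m, len(err)):
        pred = int(round(sum(alpha[i] * row[y - 1 - i] for i in range(m))))
        row[y] = err[y] + pred
    return row

line = np.array([100, 102, 103, 103, 101, 98], dtype=np.uint8)
e = lpc_encode_row(line)
assert np.array_equal(lpc_decode_row(e), line.astype(np.int32))   # lossless round trip
print(e)   # [100, 2, 1, 0, -2, -3]: small, zero-centred errors after the first pixel
```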

IV. EXPERIMENTAL RESULTS
We have applied the hybrid image compression scheme to many images, and the results are shown in this section. A medical image of size 256 x 256 pixels is taken; the input image is resized to 16 x 16 pixels, and the hybrid image compression technique is then applied. The following graphs show the results in terms of CR, PSNR, MSSIM and ERMS.
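For reference, the PSNR, ERMS and CR used in the comparison can be computed as follows (a standard sketch; MSSIM would additionally require an SSIM implementation, e.g. the one in scikit-image, which is an assumption and not the paper's code).

```python
import numpy as np

def psnr_and_erms(original, reconstructed, peak=255.0):
    """PSNR (dB) and root-mean-square error between two images of the same size."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    erms = np.sqrt(mse)
    psnr = 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else float('inf')
    return psnr, erms

def compression_ratio(original_bits, compressed_bits):
    """CR = original size / compressed size."""
    return original_bits / compressed_bits
```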




Fig. 5: Comparison between CR & PSNR

A. Comparison with Other Techniques
The table below compares SPIHT with Huffman (entropy coding); Huffman is more efficient than SPIHT in terms of CR and PSNR.

Image      PSNR (SPIHT)    PSNR (Huffman)
Image 1    26.7400         56.0100
Image 2    21.4100         61.8320
Image 3    21.2600         62.9700
Image 4    28.9500         78.5900
Image 5    40.0400         65.1300

Fig. 6: PSNR Comparison of SPIHT & Huffman




Fig. 7: Comparison between ERMS & MSSIM

Fig. 8: (a) Original image
Fig. 9: Segmentation: (b) decomposed, (c) segmented
Fig. 10: Region extraction: (d) ROI, (e) NROI
Fig. 11: Final output: (f) compressed hybrid image

V. CONCLUSION AND FUTURE WORK
In this paper we use a hybrid Huffman and LPC algorithm for ROI-based medical image compression. The experimental results show that, using Huffman coding and LPC, the proposed ROI encoding technique provides a high compression ratio, high PSNR and good ROI quality. There are various possible directions for future investigation: in order to obtain better compression ratios, experiments can be carried out on other hybrids of lossless and lossy compression techniques.



