
IJEEE, Vol. 1, Issue 5 (October, 2014)

e-ISSN: 1694-2310 | p-ISSN: 1694-2426

QUALITY ASSESSMENT OF GRAY AND COLOR IMAGES THROUGH IMAGE FUSION TECHNIQUE

1Ankita Sharma, 2Rekha Sharma

1,2Swami Devidyal Institute of Engineering & Technology, Haryana, India

1sharmaankitaamb@gmail.com

ABSTRACT: Image enhancement can improve the perception of the information in an image. It is observed that various image enhancement techniques produce output images that are complementary to the input images. Image fusion is an emerging trend in digital image processing for enhancing images: two or more images are fused (combined) to obtain an enhanced image. In the present work, image fusion is used to enhance a given input image by combining two images that contain complementary information. Conventional histogram equalization is used to enhance the contrast of the image, and hence it provides complementary information not present in the original image. The original image and the histogram-equalized image are then fused together to get the enhanced image. The fusion algorithm used is fully automatic and can be used to enhance both grayscale and color images. Subjective and objective analyses are carried out to check the performance of the fusion algorithm.

Index Terms: Digital image processing, Image Enhancement, Image Fusion, Histogram Equalization

1. INTRODUCTION
1.1 Image Enhancement
Image enhancement is among the simplest and most appealing areas of digital image processing. The fundamental goal of image enhancement is to process the input image in such a way that the output image is more suitable [1] for interpretation by humans as well as by machines. The process of image enhancement is application specific, meaning that a method suitable for enhancing images in one type of application might not be suitable for another. Numerous techniques for image enhancement are available in the literature, and the quality of degraded images can be improved by applying a suitable enhancement technique. Image enhancement techniques can be broadly classified into two categories:
• Spatial domain
• Frequency domain

1.1.1 Spatial Domain

Spatial domain methods directly process the pixels of an input image. An expression for spatial domain processing is

g(x, y) = T[f(x, y)]    (1)

Here, f(x, y) is the original image, g(x, y) is the processed image, and T is an operator defined over a neighborhood of (x, y). The principal approach to defining a neighborhood about a point (x, y) is to use a square or rectangular sub-image area centered at (x, y). The center of the sub-image is moved from pixel to pixel, and the operator T is applied at each location to yield the output at that location; the process uses only the pixels in the area spanned by the neighborhood. The smallest possible neighborhood size is 1 × 1, in which case g depends only on the value of f at (x, y) and spatial domain processing is called an intensity (point) transformation; larger neighborhoods give rise to larger processing masks. Numerous spatial domain techniques are available in the literature, depending on the specific application; contrast enhancement by histogram equalization is one such technique. A short code sketch of a point transformation is given after Fig. 1.

1.1.2 Frequency Domain
Frequency domain image enhancement involves modifying the Fourier transform of the image [1, 2]. The original input image is first transformed into the frequency domain using the 2-D Fourier transform and processed there; the output image is then obtained using the 2-D inverse Fourier transform. The main steps in frequency domain image processing are shown in the block diagram below.

Fig. 1: Frequency domain image processing
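As an illustration of the point-processing case of Eq. (1), the following minimal Python/NumPy sketch applies a 1 × 1 intensity transformation; gamma correction is used here as an example operator T, and the function name and parameter are illustrative, not from the paper:

```python
import numpy as np

def point_transform(f, gamma=0.5):
    """g(x, y) = T[f(x, y)] with a 1 x 1 neighborhood.

    T is gamma correction here; any point-wise operator could be
    substituted. f is a 2-D uint8 grayscale image.
    """
    # Normalize to [0, 1], apply the operator, rescale to [0, 255].
    g = np.power(f.astype(np.float64) / 255.0, gamma)
    return np.round(g * 255.0).astype(np.uint8)
```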



2. HISTOGRAM EQUALIZATION
Histogram equalization is an important image enhancement technique commonly used for contrast enhancement. It stretches the histogram of the given image: the greater the histogram stretch, the greater the contrast of the image. In other words, if the contrast of the image is to be increased, the histogram distribution of the corresponding image needs to be widened. Histogram equalization is the most widely used enhancement technique in digital image processing because of its simplicity and elegance.

In an image processing context, the histogram of an image normally refers to a histogram of the pixel intensity values: a graph showing the number of pixels in the image at each intensity value found in that image. For an 8-bit grayscale image there are 256 possible intensities, so the histogram graphically displays 256 numbers showing the distribution of pixels among those grayscale values. Histograms can also be taken of color images: either individual histograms of the red, green and blue channels, or a 3-D histogram with the three axes representing the red, green and blue channels and the brightness at each point representing the pixel count. The exact output of the operation depends on the implementation; it may simply be a picture of the required histogram in a suitable image format, or a data file of some sort representing the histogram statistics.

Histogram equalization is performed by remapping the gray levels of the image based on the probability distribution of the input gray levels. It flattens and stretches the dynamic range of the image's histogram, thereby producing an overall contrast enhancement. A short code sketch of the operation is given at the end of Section 3.

3. IMAGE FUSION
Data fusion is a process dealing with data and information from multiple sources to achieve refined/improved information for decision making. Image fusion is the combination of two or more different images to form a new image by using a certain algorithm [3-5]. In data fusion, the information of a specific scene acquired by two or more sensors at the same time or at separate times is combined to generate an interpretation of the scene not obtainable from a single sensor. Image fusion is the component of data fusion in which the data type is restricted to images, and it is an effective way to make optimum use of large volumes of images from multiple sources [6-8]. Multiple image fusion seeks to combine information from multiple sources to achieve inferences that are not feasible from a single sensor or source; the aim of image fusion is to integrate different data in order to obtain more information than can be derived from any single sensor alone.
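The conventional histogram equalization described in Section 2 can be sketched as follows for 8-bit grayscale images (the function name is illustrative, not from the paper):

```python
import numpy as np

def histogram_equalize(f):
    """Remap gray levels through the CDF of the input intensities.

    f is a 2-D uint8 grayscale image; returns the equalized image.
    """
    hist = np.bincount(f.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[cdf > 0][0]  # first non-zero CDF value
    # Standard remapping of the CDF to the full [0, 255] range.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    return lut.astype(np.uint8)[f]
```

For the color images considered later, the paper equalizes the R, G and B channels independently, which amounts to applying this function to each channel separately.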


Image fusion can be performed at roughly four different levels: signal level, pixel level, feature level and decision level. In signal-level fusion the main idea is to improve the signal-to-noise ratio by combining the information from different sources. In pixel-level fusion the pixel sets from all the source images are fused [9], and this process is repeated for all the pixels. In feature-level fusion, salient features are extracted from a given set of images and these features are then fused together. Finally, decision-level fusion involves extracting information from the given set of images and combining the extracted information using decision rules. For image fusion to take place, the set of source images needs to be registered, i.e. the images need to be aligned spatially.

An expression for obtaining a fused image from two source images is

If(x, y) = w*I1(x, y) + (1 - w)*I2(x, y)    (2)

Here I1 and I2 are the two images. In the present work only one source image is used; the second image is obtained by applying conventional histogram equalization to the source image, and the two images are then fused using the above equation. Here, the value of w needs to be entered manually. The advantage of this algorithm is that it is simple to implement in both hardware and software; a disadvantage is that the value of w must be entered manually to get good results.

4. GRADIENTS
For a function f(x, y), the gradient of f at (x, y) is defined as the two-dimensional column vector

∇f = [Gx, Gy]^T = [∂f/∂x, ∂f/∂y]^T    (3)

The magnitude of this vector is given by

∇f = [(∂f/∂x)² + (∂f/∂y)²]^(1/2)    (4)

The first-order derivative of a one-dimensional function f(x) is the difference

∂f/∂x = f(x + 1) − f(x)    (5)

For color images, the following equations are used for calculating the gradient, and hence the edges, of the image:

gxx = (∂R/∂x)² + (∂G/∂x)² + (∂B/∂x)²    (6)

gyy = (∂R/∂y)² + (∂G/∂y)² + (∂B/∂y)²    (7)



gxy = (∂R/∂x)(∂R/∂y) + (∂G/∂x)(∂G/∂y) + (∂B/∂x)(∂B/∂y)    (8)

θ = (1/2) tan⁻¹[2gxy / (gxx − gyy)]    (9)

F(θ) = {(1/2)[(gxx + gyy) + (gxx − gyy) cos 2θ + 2gxy sin 2θ]}^(1/2)    (10)

Here gxx, gyy and gxy are the gradients in the x, y and x-y directions respectively, θ is the direction of maximum gradient, and F(θ) represents the gradient of the color image.
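A minimal sketch of Eqs. (6)-(10), assuming a float RGB array and NumPy's finite-difference gradients (the function name is illustrative, not from the paper):

```python
import numpy as np

def color_gradient(img):
    """Color gradient F(theta) of Eqs. (6)-(10) for an (M, N, 3) RGB array."""
    # Per-channel derivatives; np.gradient returns (d/dy, d/dx).
    ry, rx = np.gradient(img[..., 0].astype(np.float64))
    gy, gx = np.gradient(img[..., 1].astype(np.float64))
    by, bx = np.gradient(img[..., 2].astype(np.float64))

    gxx = rx**2 + gx**2 + bx**2                      # Eq. (6)
    gyy = ry**2 + gy**2 + by**2                      # Eq. (7)
    gxy = rx*ry + gx*gy + bx*by                      # Eq. (8)

    theta = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)   # Eq. (9)
    f = 0.5 * ((gxx + gyy) + (gxx - gyy) * np.cos(2 * theta)
               + 2.0 * gxy * np.sin(2 * theta))      # Eq. (10), before sqrt
    return np.sqrt(np.maximum(f, 0.0))               # guard tiny negatives
```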

5. DESIGN OF WEIGHT FUNCTION
As shown in Eq. (2), a weight w is used for fusing the two images, and its value lies in the range [0, 1]. The detail-aware contrast enhancement method assigns weights manually [11]. In order to automate the process of image fusion, the calculation of this weight value needs to be automated; for pixel-by-pixel fusion of the two images, the weight must be calculated at every pixel location. Based on the strength of the edges in the two source images, two cases are possible:
1. If S1(x, y) > S2(x, y), i.e. the edge at location (x, y) in image I1 is stronger than the edge at location (x, y) in I2, then the pixel I1(x, y) must be given the higher weight and I2(x, y) the lower weight.
2. If S2(x, y) > S1(x, y), i.e. the edge at location (x, y) in image I2 is stronger than the edge at location (x, y) in I1, then the pixel I2(x, y) must be given the higher weight and I1(x, y) the lower weight.
To implement these two cases, we use a weight function of the form

w = 1 / (1 + 10^(−x)), where x ∈ [−1, 1]

This function weights each pixel based on the relative edge strengths of I1 and I2: if S1(x, y) > S2(x, y), then I1(x, y) is given the higher weight and I2(x, y) the lower weight, and vice versa when S2(x, y) > S1(x, y).

6. ALGORITHMS
A. First Algorithm: The first algorithm uses a simple approach: the conventional histogram-equalized version of the original image is computed first, after which a weight w is manually selected and used in Eq. (2) to get the fused image.
B. Second Algorithm: Weights can also be assigned automatically. For calculating the weights automatically, the edge strengths of the original source image and of its histogram-equalized version are calculated first; a weight function is then used to compute the weight values, pixel by pixel, based on the strength of the edges. The outline of the second algorithm for grayscale and color images is as follows (a code sketch for the grayscale case is given after Fig. 2).

For Grayscale Images
• Let the input image be I1.
• Compute the conventional histogram-equalized image I2 from I1.
• Calculate the magnitude gradient of I1 using the Sobel operator to get the edge map S1.
• Calculate the magnitude gradient of I2 using the Sobel operator to get the edge map S2.
• Calculate S = S1 − S2.
• Calculate the absolute maximum value of the matrix S.
• Normalize the matrix S by the absolute maximum value.
• For each pixel located at position (x, y), repeat the following steps:
  – Calculate w = 1 / (1 + 10^(−S(x, y))).
  – Use the value of w in Eq. (2) to find the fused image If(x, y).

Figure 2: Flowchart for grayscale images (Input Image I1 and Histogram Equalized Image I2 → Edge Maps S1 and S2 → S = S1 − S2 → w(x, y) = 1/(1 + 10^(−S(x, y))) for each pixel → If(x, y) = w*I1(x, y) + (1 − w)*I2(x, y) → Output Image If)
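A minimal end-to-end sketch of the second algorithm for grayscale images, reusing the histogram_equalize sketch above and assuming SciPy's Sobel filters for the edge maps (illustrative, not the authors' code):

```python
import numpy as np
from scipy import ndimage

def fuse_grayscale(i1):
    """Fuse a uint8 grayscale image with its histogram-equalized version."""
    f1 = i1.astype(np.float64)
    f2 = histogram_equalize(i1).astype(np.float64)

    def sobel_magnitude(img):
        # Edge map: magnitude of the Sobel gradient.
        return np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))

    s = sobel_magnitude(f1) - sobel_magnitude(f2)   # S = S1 - S2
    s /= max(np.abs(s).max(), 1e-12)                # normalize: S(x, y) in [-1, 1]

    w = 1.0 / (1.0 + 10.0 ** (-s))                  # weight function of Sec. 5
    fused = w * f1 + (1.0 - w) * f2                 # Eq. (2)
    return np.clip(np.round(fused), 0, 255).astype(np.uint8)
```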

For Color Images
For color images the process is outlined below (a code sketch follows the list):
• Let the input image be I1.
• Compute the conventional histogram-equalized image I2 from I1. For this step, histogram equalization is carried out for each channel, i.e. the R, G and B channels are equalized independently and finally combined.
• Calculate the gradient of I1 using Eq. (10) to get the edge map S1.
• Calculate the gradient of I2 using Eq. (10) to get the edge map S2.
• Calculate S = S1 − S2.
• Calculate the absolute maximum value of the matrix S.
• Normalize the matrix S by the absolute maximum value.
• For each pixel located at position (x, y), repeat the following steps:
  – Calculate w = 1 / (1 + 10^(−S(x, y))).
  – Use the value of w in Eq. (2) to find the fused image If(x, y); this step is repeated for each of the R, G and B channels.
• Finally, combine the R, G and B channels to get the output fused image.

7. FUSED IMAGE EVALUATION
Image quality metrics are used to benchmark different image processing algorithms by comparing objective metrics [12-13]. There are two types of metrics, subjective and objective, used to evaluate image quality. In a subjective metric, users rate the images based on the effect of the degradation, and the ratings vary from user to user, whereas objective quality metrics quantify the difference in the image due to processing. Furthermore, fusion assessment can be classified as either qualitative or quantitative in nature. Assessment of image fusion performance can be divided into two categories (a code sketch of the metrics below is given at the end of this section):

1) Assessment Without Reference Images: In assessment without reference images, the fused images are evaluated against the original source images for similarity.

a) Entropy: Entropy is defined as the amount of information contained in a signal. The entropy of an image can be evaluated as

H = − ∑ P(di) log(P(di))    (11)

where the sum runs over the G possible gray levels and P(di) is the probability of occurrence of a particular gray level di. If the entropy of the fused image is higher than that of the parent image, the fused image contains more information.

b) Average Gradient: The average gradient is used for measuring the clarity of the image: the larger the average gradient, the greater the clarity of the image. The average gradient is given by the formula


AG = (1/(M·N)) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} ∇g(x, y)    (12)

where ∇g(x, y) is the magnitude gradient at location (x, y).

2) Assessment With Reference Image: In reference-based assessment, a fused image is evaluated against a reference image which serves as ground truth.

a) Mean Square Error (MSE): The mean square error is a measure of image quality; a large mean square error means the image quality is poor. The MSE between the reference image and the fused image is

MSE = (1/(m·n)) Σ_{i=1}^{m} Σ_{j=1}^{n} (A(i, j) − B(i, j))²    (13)

where A(i, j) and B(i, j) are the pixel values of the reference image and the fused image, respectively.

b) Root Mean Square Error (RMSE): A commonly used reference-based assessment metric is the root mean square error (RMSE), defined as

RMSE = [(1/(m·n)) Σ_{i=1}^{m} Σ_{j=1}^{n} (A(i, j) − B(i, j))²]^(1/2)    (14)

where A is the perfect (reference) image, B is the fused image to be assessed, i is the pixel row index, j is the pixel column index, and m, n are the numbers of rows and columns. The smaller the RMSE, the better the fusion effect.

c) Structural Content (SC): The structural content measure compares two images over a number of small image patches that the images have in common. A large value of structural content means the image has poor quality; this value should be small. The SC measure is given by

SC = Σ_{i,j} A(i, j)² / Σ_{i,j} B(i, j)²    (15)

d) Maximum Difference (MD): MD measures the largest absolute difference between corresponding pixels of the two images; a large value of maximum difference means the image is poor in quality.

MD = max |A(i, j) − B(i, j)|    (16)
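The metrics of Eqs. (11)-(16) can be sketched as follows for 8-bit grayscale NumPy arrays (function names are illustrative, and the base-2 logarithm in the entropy is an assumption):

```python
import numpy as np

def entropy(img):
    """Eq. (11): entropy of the gray-level histogram (base-2 log assumed)."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]                      # skip empty bins: 0*log(0) -> 0
    return -np.sum(p * np.log2(p))

def average_gradient(img):
    """Eq. (12): mean gradient magnitude over the image."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.mean(np.hypot(gx, gy))

def mse(a, b):
    """Eq. (13): mean square error between reference a and fused b."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return np.mean(d ** 2)

def rmse(a, b):
    """Eq. (14): root mean square error; smaller means better fusion."""
    return np.sqrt(mse(a, b))

def structural_content(a, b):
    """Eq. (15): ratio of pixel energies; a large value means poor quality."""
    return np.sum(a.astype(np.float64) ** 2) / np.sum(b.astype(np.float64) ** 2)

def max_difference(a, b):
    """Eq. (16): largest absolute pixel difference."""
    return np.max(np.abs(a.astype(np.float64) - b.astype(np.float64)))
```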



8. SIMULATION RESULTS
For the gray image:

Fig. 3: (a) Original image, (b) Histogram equalized image, (c) Fused image

Table 1: Performance parameter values for the gray image

Parameter           Original image   Fused image
Entropy             5.4786           7.3249
Average gradient    0.1382           0.4884

Reference-based metrics (fused image against the original): structural content 9.5920, MSE 1.6366×10³, RMSE 40.4543, maximum difference 0.0111.

For the color image:

Fig. 4: (a) Original image, (b) Histogram equalized image, (c) Fused image

Table 2: Performance parameter values for the color image

Parameter           Original image   Fused image
Entropy             6.9332           7.4087
Average gradient    0.0292           0.0420

Reference-based metrics (fused image against the original): structural content 3.1825, MSE 6.0282×10³, RMSE 77.6413, maximum difference 0.0039.

CONCLUSION
Comparing the simulation results in Figs. 3 and 4 and in Tables 1 and 2, one can easily see that the fusion algorithm enhances the original image in terms of entropy, average gradient, structural content, MSE, RMSE and maximum difference. This means the fusion algorithm increases the contrast, the average information (entropy) and the clarity (sharpness) of the original image.

REFERENCES
[1] R. Gonzalez and R. E. Woods, "Digital Image Processing", 2nd edition, Prentice Hall, pp. 75-102, 134-137, 572-581, Jan. 2002.
[2] P. Du, S. Liu, J. Xia and Y. Zhao, "Information fusion techniques for change detection from multi-temporal remote sensing images", Information Fusion 14, pp. 19-27, 2012.
[3] A. Saleem, A. Beghdadi and B. Boashash, "Image Fusion Based Contrast Enhancement", EURASIP Journal on Image and Video Processing, 2012.
[4] X. Fang, J. Liu, W. Gu and Y. Tang, "A Method to Improve the Image Enhancement Result based on Image Fusion", International Conference on Multimedia Technology, pp. 55-58, 2011.
[5] J. Sarup and A. Singhai, "Image fusion techniques for accurate classification of Remote Sensing data", International Journal of Geomatics and Geosciences, Vol. 2, No. 2, 2011.
[6] B. Khaleghi, A. Khamis, F. O. Karray and S. N. Razavi, "Multisensor data fusion: A review of the state-of-the-art", Information Fusion 14, pp. 28-44, 2011.
[7] S. Li, B. Yang and J. Hu, "Performance comparison of different multi-resolution transforms for image fusion", Information Fusion 12, pp. 74-84, 2011.
[8] S. Krishnamoorthy and K. P. Soman, "Implementation and Comparative Study of Image Fusion Algorithms", International Journal of Computer Applications, Vol. 9, No. 2, pp. 25-35, 2010.
[9] Z. Wang, Y. Tie and Y. Liu, "Design and Implementation of Image Fusion System", International Conference on Computer Application and System Modeling, IEEE, pp. 140-143, 2010.
[10] W. Dou, Y. Chen, X. Li and D. Z. Sui, "A general framework for component substitution image fusion: An implementation using the fast image fusion method", Computers & Geosciences, pp. 219-228, 2007.
[11] J. Zeng, A. Sayedelahl, T. Gilmore and M. Chouikha, "Review of Image Fusion Algorithms for Unconstrained Outdoor Scenes", 8th International Conference on Signal Processing, Vol. 2, 2006.
[12] V. Vijayaraj, N. H. Younan and C. G. O'Hara, "Concepts of image fusion in remote sensing applications", Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Denver, USA, pp. 3798-3801, 2006.
[13] D. Rajan and S. Chaudhuri, "Data Fusion Techniques for Super Resolution Imaging", Information Fusion 3, pp. 25-38, 2002.
[14] Y. Wang, Q. Chen and B. Zhang, "Image enhancement based on equal area dualistic sub-image histogram equalization method", IEEE Trans. on Consumer Electronics, Vol. 45, No. 1, pp. 68-75, Feb. 1999.
[15] D. L. Hall, "An Introduction to Multisensor Data Fusion", Proceedings of the IEEE, Vol. 85, No. 1, January 1997.


