Joint Multi-Focus Fusion and Bayer Image Restoration


Scientific Journal of Information Engineering June 2015, Volume 5, Issue 3, PP.67-72

Ling Guo, Bin Yang#, Chao Yang
College of Electric Engineering, University of South China, Hengyang, 421001, China
#Email: yangbin01420@163.com

Abstract
In this paper, a joint multi-focus image fusion and Bayer pattern image restoration algorithm for the raw images of single-sensor color imaging devices is proposed. Different from traditional fusion schemes, the raw Bayer pattern images are fused before color restoration, so the Bayer image restoration operation is performed only once. The proposed algorithm is therefore more efficient than traditional fusion schemes. In detail, a clarity measurement is defined for raw Bayer pattern images, and the fusion operator works on superpixels, which provide powerful grouping cues for local image features. The raw images are merged with a refined weight map to obtain the fused Bayer pattern image, which is then restored by a demosaicing algorithm to produce the full-resolution color image. Experimental results demonstrate that the proposed algorithm obtains fused results with a more natural appearance and fewer artifacts than traditional algorithms.
Keywords: Multi-Focus Image Fusion; Bayer Pattern; Superpixel; Demosaicing

1 INTRODUCTION
Multi-focus image fusion is defined as combining multiple images of the same scene, each with a different focused position, into a composite image, thus yielding a higher-quality all-in-focus image [1, 2]. Algorithms based on multi-scale transforms are the most commonly used methods. Such transforms include the gradient pyramid [4], the non-subsampled contourlet transform (NSCT) [1], and the dual-tree complex wavelet transform (DTCWT) [2], among others. The main idea is to merge the decompositions of multiple sources into a composite representation according to some fusion rules; a fused image is then reconstructed by an inverse transform of the composite multi-scale coefficients. The advantage of these algorithms is that they are easy to implement and efficient. Traditional fusion algorithms operate directly on true-color or gray images. However, more and more raw images obtained by sensors are Bayer pattern images. These sensors consist of a charge-coupled device (CCD) array covered by a Bayer mask, a color filter array (CFA) that arranges red, green, and blue color filters on a square grid of photo-sensors [5, 6]. The input spectrum is therefore filtered at each photo-sensor, which captures only one color (red, green, or blue). Demosaicing is applied to the raw Bayer pattern images to obtain full-resolution color images. When the restored color images are used as the source images of image fusion, fusing after demosaicing may lead to low efficiency and artifacts in the fused result. In this paper, we fuse the Bayer pattern multi-focus images before demosaicing. Since demosaicing is performed only once, this reduces both the amount of computation and the errors introduced by repeated demosaicing, especially when the number of source images is large.
In addition, instead of using pixel- or region-based algorithms, superpixels are used to guide the fusion operation in the proposed algorithm. Superpixels give better grouping cues of local image features than single pixels, and they depend less on a correct segmentation than region-based algorithms. Experiments on four pairs of Bayer pattern images demonstrate the improvements over traditional fusion schemes.

2 THE PROPOSED FUSION SCHEME


Without loss of generality, we assume that two raw Bayer pattern images are used to record the same scene. With different camera focus settings, two Bayer pattern images A and B are obtained from single-chip cameras. Fig. 1 shows the schematic diagram of the proposed fusion scheme. The fusion process is accomplished by the following steps.

[Figure 1: block diagram. The source Bayer images A and B are smoothed into gray images, which are segmented into superpixels by the watershed transform; superpixel clarity comparison yields the weight maps W1 and W2; guided filtering, with the gray images serving as guidance images, produces the refined weight maps; the refined maps merge the sources into the fused Bayer image, which gives the final result.]
FIG. 1 THE SCHEMATIC DIAGRAM OF MULTI-FOCUS FUSION

Firstly, Gaussian filtering is performed on the source Bayer pattern images A and B to attain the smoothed approximate gray images A and B, respectively. Since neighboring pixels of a Bayer pattern image are sensed from different color spectra, their value ranges differ considerably and spectral edges exist between neighboring pixels. These 'grid' artifacts make feature extraction difficult. It has been proved that a Gaussian filter can smooth these 'grids' effectively [6]. The smoothed approximate gray images are used to segment the superpixels of the source images. An example of a Bayer pattern image is shown in Fig. 2(a). The region marked with a rectangle in Fig. 2(a) is zoomed in Fig. 2(b), in which neighboring pixels are not continuous. One can clearly observe the grids and the gray-value gradients between neighboring pixels. Due to the 'grid' artifacts in Fig. 2(a), it is not practical to segment the Bayer pattern image directly. The authors of [6] proved that a convolution with the 3 × 3 Gaussian kernel G in Eq. (4) removes the spectral edges and yields a continuous, smoothed approximate gray image from the Bayer pattern image:

        | 0.0625  0.125  0.0625 |
    G = | 0.125   0.25   0.125  |                                            (4)
        | 0.0625  0.125  0.0625 |

Fig. 2(c) shows the smoothed image obtained by convolving Fig. 2(a) with the Gaussian kernel in (4). From the magnified region in Fig. 2(d), one can see that the gray-value gradients are smoothed by the Gaussian filtering, which validates the effectiveness of the smoothing step.
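The smoothing step above can be sketched as follows (a minimal illustration; the function and variable names are ours, not from the paper):

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 Gaussian kernel of Eq. (4), used in [6] to suppress the Bayer 'grid' artifacts
G = np.array([[0.0625, 0.125, 0.0625],
              [0.125,  0.25,  0.125 ],
              [0.0625, 0.125, 0.0625]])

def smooth_bayer(bayer):
    """Approximate gray image: convolve the raw Bayer mosaic with G."""
    return convolve(bayer.astype(float), G, mode='reflect')
```

Because the kernel sums to one, flat regions keep their mean level while the spectral edges between neighboring pixels are averaged out.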

FIG. 2 BAYER PATTERN IMAGE AND THE SMOOTHED IMAGE ATTAINED BY GAUSSIAN FILTERING. (A) BAYER PATTERN IMAGE; (B) A CLOSE-UP VIEW OF A SUB-PICTURE OF (A); (C) THE SMOOTHED IMAGE; (D) A CLOSE-UP VIEW OF A SUB-PICTURE OF (C).

Secondly, the watershed transform is applied to the approximate gray images A and B to get the superpixels of the source images [7]. The benefit of superpixels in our scheme is a robust focus feature representation due to the larger support of pixels. The combined gradient image, obtained by selecting the maximum gradients of the approximate gray source images, is used to partition the image into multiple superpixels along the object edges. A close-opening morphological operator is used to smooth the gradient image, and a threshold τ is used to set gradients below the threshold to zero. This step relieves over-segmentation and reduces the impact of noise.


Thirdly, the source images A and B are partitioned into superpixel representations using the result of the second step. Then the weight maps W1 and W2 are constructed by comparing the clarity measurements of corresponding superpixels in the two source Bayer pattern images. Since clarity measurements cannot be calculated directly from Bayer pattern images, which contain 'grid' artifacts, a novel clarity measurement of superpixels in the Bayer pattern domain is defined; it is presented in detail in Section 3. Fourthly, the guided filtering proposed in [8] is performed on each weight map, with the corresponding approximate gray images A and B serving as guidance images, respectively, to obtain the refined weight maps W1 and W2:

  W2  f g  W2 , B 

(1)

W1 = f g W1 , A

(2)

The guided filter is used to preserve the intrinsic edge structures of A and B into the refined weight maps W1 and W2 respectively. The filtering output is a locally linear transform of the guidance image. With the help of guidance image, the filtering output is more structured and less smoothed than the input. Therefore, it is possible to transfer structure from the guidance image to the output even if the filtering input is smooth. Then the values in refined weight maps W1 and W2 are normalized such that they sum to one at each pixel position. Fifthly, the fused Bayer pattern image F is attained by the weighted averaging operator as:
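The refinement of Eqs. (1)-(2) can be sketched with a minimal box-filter guided filter in the spirit of [8] (the function name, radius, and eps values are our assumptions, not the paper's settings):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=4, eps=1e-3):
    """Edge-preserving refinement of weight map p, guided by gray image I [8]."""
    box = lambda x: uniform_filter(x, size=2 * r + 1)  # local mean over (2r+1)^2 window
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p   # covariance between guide and input
    var_I = box(I * I) - mean_I * mean_I    # variance of the guide
    a = cov_Ip / (var_I + eps)              # locally linear coefficients
    b = mean_p - a * mean_I
    return box(a) * I + box(b)              # output is linear in the guidance image

# Refined weight maps, Eqs. (1) and (2), with gray_a and gray_b as guides:
# W1r = guided_filter(gray_a, W1); W2r = guided_filter(gray_b, W2)
```

Because the output is a locally linear transform of the guide, the intrinsic edge structure of the gray images is transferred into the refined weight maps, exactly the behavior described above.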

F  W1  A  W2  B

(3)

Finally, demosaicing is performed on the fused Bayer pattern image to achieve the fused color result. Any demosaicing algorithm [5] can be used in our fusion strategy.
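The fifth step can be sketched as follows; only the weighted merge of Eq. (3) is shown, and any off-the-shelf demosaicing routine [5] would then be applied to F (the function name and eps guard are ours):

```python
import numpy as np

def fuse_bayer(A, B, W1, W2, eps=1e-12):
    """Pixelwise weighted average of the two raw Bayer mosaics, Eq. (3)."""
    s = W1 + W2 + eps          # normalize so the weights sum to one at each pixel
    return (W1 / s) * A + (W2 / s) * B
```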

3 THE CLARITY MEASUREMENT OF BAYER PATTERN IMAGE
Many measures have been proposed to represent the clarity of an image, such as visibility, edge features, and spatial frequency [9]. In general, these measures indicate the clarity of a gray image or an RGB color image. However, our fusion framework needs the clarity of a Bayer pattern image. The pixels of a Bayer pattern image come from three different spectral channels, so the spatial frequency of each pixel cannot be calculated directly from its neighboring patches. In this paper, the clarity measurement of superpixels is defined for a Bayer pattern image D(x, y) of size M × N by an extended spatial frequency. Firstly, image D(x, y) is decomposed into four components R, G1, G2, and B as:

R  i, j   D  2  i  1,2  j  1 , i  1,2,..., M 2;

j  1,2,..., N 2

(5)

G1  i, j   D  2  i  1,2  j  , i  1,2,..., M 2;

j  1,2,..., N 2

(6)

G2  i, j   D  2  i,2  j  1 , i  1,2,..., M 2;

j  1,2,..., N 2

(7)

B  i, j   D  2  i,2  j  , i  1,2,..., M 2;

j  1,2,..., N 2

(8)
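In 0-based array indexing, the decomposition of Eqs. (5)-(8) is plain slicing (assuming the RGGB mosaic layout that the equations imply; the function name is ours):

```python
import numpy as np

def split_bayer(D):
    """Split an RGGB Bayer mosaic D into its four subsampled color planes."""
    R  = D[0::2, 0::2]   # Eq. (5): odd rows and columns in 1-based indexing
    G1 = D[0::2, 1::2]   # Eq. (6)
    G2 = D[1::2, 0::2]   # Eq. (7)
    B  = D[1::2, 1::2]   # Eq. (8)
    return R, G1, G2, B
```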

Then, the spatial frequency of every pixel in the R component is defined in a local sliding window with radius r = 2 as

SFR  i, j   RFR  i, j   CFR  i, j  2

2

(9)

where RFR  i, j  and CFR  i, j  are given by RFR  i, j   CFR  i, j  

r

1

 2r  1 1

2

r

   R(i  m, j  n)  R(i  m, j  n  1)

2

(10)

m  r n 1 r r

r

2    R(i  m, j  n)  R(i  m  1, j  n) .

 2r  12 nr m1r

(11)

The spatial frequencies of the pixels in the B, G1, and G2 components are denoted SF_B, SF_G1, and SF_G2, and are obtained in the same way. Then the clarity measurement C(i, j) of the Bayer pattern image is obtained by


           | SF_R(⌊i/2⌋ + 1, ⌊j/2⌋ + 1),   mod(i, 2) = 1 and mod(j, 2) = 1
C(i, j) =  | SF_G1(⌊i/2⌋ + 1, j/2),        mod(i, 2) = 1 and mod(j, 2) = 0
           | SF_G2(i/2, ⌊j/2⌋ + 1),        mod(i, 2) = 0 and mod(j, 2) = 1
           | SF_B(i/2, j/2),               mod(i, 2) = 0 and mod(j, 2) = 0    (12)

Finally, the clarity of each superpixel Pk is calculated by

C(P_k) = Σ_{(i, j) ∈ P_k} C(i, j)                                            (13)
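Eqs. (12)-(13) then interleave the four frequency maps back onto the mosaic grid and sum the result per superpixel. Given the four SF maps, however they were computed, this can be sketched as (function names are ours):

```python
import numpy as np

def clarity_map(sf_r, sf_g1, sf_g2, sf_b):
    """Re-interleave the per-plane spatial frequencies onto the Bayer grid, Eq. (12)."""
    m, n = sf_r.shape
    C = np.zeros((2 * m, 2 * n))
    C[0::2, 0::2] = sf_r    # odd (i, j) positions in 1-based indexing
    C[0::2, 1::2] = sf_g1
    C[1::2, 0::2] = sf_g2
    C[1::2, 1::2] = sf_b
    return C

def superpixel_clarity(C, labels):
    """Clarity of every superpixel: sum of C over its pixels, Eq. (13)."""
    return {k: C[labels == k].sum() for k in np.unique(labels)}
```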

4 EXPERIMENTAL RESULTS
In this section, the performance of the proposed fusion strategy is assessed using four pairs of multi-focus images. The gradient-based demosaicing algorithm is used for both the proposed algorithm and the compared algorithms; more sophisticated demosaicing algorithms would further improve the fusion quality. All experiments are implemented in Matlab 2010a on a Pentium(R) 2.6-GHz PC with 2.00 GB RAM. Two objective fusion quality metrics, the Q^AB/F metric [10] and the QW metric [11], are used to evaluate the fusion performance quantitatively. Both metrics express the overall fusion performance, and a larger value indicates a higher quality of the fused result. The threshold τ used in the superpixel partition step determines the superpixel sizes. In our experiments, we observe that the fusion results are not sensitive to this parameter, so τ is set to 10 in all experiments. Two well-known fusion algorithms based on multi-scale transforms, NSCT [1] and DTCWT [2], are compared with the proposed algorithm. The main process of multi-scale transform-based fusion is performed as follows. Firstly, the Bayer pattern images are restored by the demosaicing algorithm and the restored RGB images are further transformed into the YUV color space. Then, multi-scale gray-level fusion is performed on the Y component. The lowpass subband coefficients and the highpass subband coefficients are merged by the averaging scheme and the 'abs-max' scheme, respectively. Three decomposition levels are used in the DTCWT, while four decomposition levels with 2, 4, 8, and 16 directions from coarser to finer scales are adopted for the NSCT algorithm. For the U and V components, the averaging scheme is used for merging. Finally, the fused color image is obtained by the inverse color space transformation from YUV to RGB.

FIG.3 FOUR PAIRS OF BAYER PATTERN IMAGES WITH DIFFERENT FOCUS

Fig. 3 shows four pairs of Bayer pattern images corresponding to the "rose", "book", "garden", and "office" scenes, respectively. For example, the last pair shows Bayer pattern multi-focus images captured by a hand-held camera; they are not perfectly registered due to camera movement. Fig. 4 demonstrates the fusion results of our algorithm and the compared algorithms. The first column of Fig. 4 shows the fused Bayer pattern images. The second to fourth columns show the fused images produced by the proposed method and the NSCT- and DTCWT-based fusion methods, respectively. As shown in Fig. 4, our algorithm outperforms the other methods. For example, the "office" source Bayer pattern images focus on the near computer screen and the far bookcase,


respectively. As shown in the amplified regions of the fused "office" images, the edges of the structures in the NSCT- and DTCWT-based results are blurred due to the misregistration, whereas the fused "office" image of the proposed method is clearer than the other results. For the fused images given in Fig. 4, the objective evaluations of the proposed algorithm and the compared algorithms are listed in Table 1. The proposed algorithm achieves the largest objective scores in almost all cases, clearly surpassing the other algorithms. Considering both the visual performance and the objective evaluation results, we conclude that the proposed algorithm performs best.

FIG. 4 THE FUSED IMAGES ATTAINED BY DIFFERENT FUSION METHODS.

TABLE 1 QUANTITATIVE ASSESSMENT OF DIFFERENT IMAGE FUSION ALGORITHMS

Metric   Algorithm   rose     book     garden   office
QW       NSCT        0.6613   0.6824   0.6972   0.5877
QW       DTCWT       0.6597   0.6795   0.6891   0.5798
QW       Proposed    0.7049   0.7363   0.7312   0.6699
QAB/F    NSCT        0.8696   0.8923   0.8257   0.8526
QAB/F    DTCWT       0.8634   0.8916   0.8206   0.8419
QAB/F    Proposed    0.8634   0.9058   0.8586   0.8538

5 CONCLUSIONS
In this paper, a novel multi-focus image fusion algorithm is proposed. Different from traditional algorithms, the proposed algorithm performs fusion directly on the source Bayer pattern images. This reduces the errors and computation time caused by repeated demosaicing and improves efficiency. Several pairs of Bayer pattern images as well as a Bayer pattern


image frame are used to test the proposed algorithm, and the results are compared with those of multi-scale based algorithms. The experimental results validate the proposed algorithm.

ACKNOWLEDGMENT This paper is supported by the National Natural Science Foundation of China (Nos. 61102108, 11247214 and 61172161), Scientific Research Fund of Hunan Provincial Education Department (Nos. 11C1101, YB2013B039 and 12A115), and the construct program of key disciplines in USC (No.NHXK04).

REFERENCES
[1] Q. Zhang and B. L. Guo. "Multi-focus Image Fusion Using the Nonsubsampled Contourlet Transform." Signal Processing 89 (2009): 1334-1346
[2] B. Forster, D. Van De Ville, J. Berent, D. Sage, and M. Unser. "Complex Wavelets for Extended Depth-of-field: A New Method for the Fusion of Multichannel Microscopy Images." Microscopy Research and Technique 65 (2004): 33-42
[3] E. Guest. "Image Fusion: Advances in the State of the Art." Information Fusion 8 (2007): 114-118
[4] V. Petrovic and C. Xydeas. "Gradient-based Multiresolution Image Fusion." IEEE Transactions on Image Processing 13 (2004): 228-237
[5] R. Ramanath, W. E. Snyder, and G. L. Bilbro. "Demosaicking Methods for Bayer Color Arrays." Journal of Electronic Imaging 11 (2002): 306-315
[6] J. Herwig and J. Pauli. "Spatial Gaussian Filtering of Bayer Images with Applications to Color Segmentation." Workshop Farbbildverarbeitung 15 (2009): 19-28
[7] D. Jayadevappa, S. S. Kumar, and D. S. Murty. "A Hybrid Segmentation Model Based on Watershed and Gradient Vector Flow for the Detection of Brain Tumor." International Journal of Signal Processing, Image Processing and Pattern Recognition 2 (2009): 29-42
[8] K. He, J. Sun, and X. Tang. "Guided Image Filtering." IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (2013): 1397-1409
[9] S. T. Li, J. T. Kwok, and Y. N. Wang. "Combination of Images with Diverse Focuses Using the Spatial Frequency." Information Fusion 2 (2001): 169-176
[10] C. Xydeas and V. Petrovic. "Objective Image Fusion Performance Measure." Electronics Letters 36 (2000): 308-309
[11] G. Piella and H. Heijmans. "A New Quality Metric for Image Fusion." In Proceedings of the International Conference on Image Processing (3) (2003): 173-176

AUTHORS

1 Ling Guo received her B.E. degree in Electrical & Communications from University of South China in 2012. She is currently pursuing her master degree at the University of South China. Her technical interests include image fusion, image processing, and image restoration.

2 Bin Yang received his Ph.D. degree in control science and engineering from Hunan University, Changsha, China, in 2010. He is currently an associate professor with the Department of Electrical Engineering, University of South China. His research interests include pattern recognition, image processing, data fusion, and computational visual attention.

3 Chao Yang received her B.E. degree in Electrical & Communications from University of South China in 2013. She is currently a master candidate in Physical Electronics at the University of South China. Her research interests include compressive sensing theory, image restoration, and image fusion.


