International Journal of Computer Trends and Technology (IJCTT) – volume 8 number 4– Feb 2014
Performance Evaluation of Various Pixel Level Fusion Methods for Satellite Remote Sensor Images

G. Dheepa *1, Dr. S. Sukumaran 2
1 (Ph.D. Scholar, Department of Computer Science, Erode Arts and Science College, Erode, Tamilnadu, India)
2 (Associate Professor, Department of Computer Science, Erode Arts and Science College, Erode, Tamilnadu, India)
* Corresponding author
ABSTRACT:
Remote sensing systems deployed on satellites transmit two types of images to the ground: the panchromatic (PAN) image with high spatial resolution and the multispectral (MS) image with coarser resolution. Several GIS applications require both high spatial and high spectral information in a single image. Satellite image fusion aims to integrate the spatial detail of a high-resolution panchromatic (PAN) image and the color information of a low-resolution multispectral (MS) image to produce a high-resolution multispectral image. Many existing pan-sharpening, or pixel-based image fusion, techniques enhance the spatial resolution of the MS image while preserving its spectral properties. This paper evaluates the performance of various pixel-level fusion methods and assesses the resulting image quality using several indices.
Keywords: Image fusion, Pixel-level fusion, Brovey transform, IHS transform fusion, Wavelet transform.
1. INTRODUCTION

Satellites usually take several images from frequency bands in the visible and non-visible range. Each monochrome image is referred to as a band, and a collection of several bands of the same scene acquired by a sensor is called a multispectral (MS) image [1]. A combination of three bands associated in an RGB (Red, Green, Blue) color system produces a color image. Most earth observation satellites, such as Spot, Ikonos, Quickbird, Formosat or Orbview, and also some digital airborne sensors like DMC or UltraCam, record image data in two different modes: a low-resolution multispectral mode and a high-resolution panchromatic mode. A high-spatial-resolution panchromatic (Pan) image gives detailed geometric features, while the multispectral images contain richer spectral information. In general, a Pan image covers a broader wavelength range, while an MS band covers a narrower spectral range. To receive the same amount of incoming energy, the size of a Pan detector can be smaller than that of an MS detector; therefore, on the same satellite or airplane platform, the resolution of the Pan sensor can be higher than that of the MS sensor. In addition, the data volume of a high-resolution MS image is significantly greater than that of a bundled high-resolution Pan image and low-resolution MS image. This bundled solution can mitigate the problems of limited on-board storage
capacity and limited data transmission rates from platform to ground. Considering these limitations, it is clear that the most effective solution for providing high-spatial-resolution and high-spectral-resolution remote sensing images is to develop effective image fusion techniques. The objective of iconic image fusion is to combine the panchromatic and multispectral information to form a fused multispectral image that retains the spatial information of the high-resolution panchromatic image and the spectral characteristics of the lower-resolution multispectral image. Applications for integrated image datasets include environmental/agricultural assessment, urban mapping, and change detection [2]. With appropriate algorithms it is possible to combine the multispectral and panchromatic bands and produce a synthetic image with their best characteristics. This process is known as satellite multisensor merging, fusion, or sharpening [3]. Many pixel-level fusion methods for remote sensing images have been presented in the literature. In pixel-level fusion, the pixels from the two sources must have the same spatial resolution before they are combined into the resultant image, so it is necessary to first perform a geometric registration and a radiometric adjustment of the images to one another. When the images are obtained from sensors on different satellites, as in the fusion of SPOT or IRS with Landsat, registration accuracy is very important; registration is much less of a problem with simultaneously acquired images, as with Ikonos/Quickbird PAN and MS images. The PAN images have a different spatial resolution from that of the MS images, so resampling the MS images to the spatial resolution of the PAN image is an essential step in some fusion methods to bring the MS images to the same size as the PAN image.

2. PIXEL LEVEL FUSION METHODS

In general, algorithms for pixel-level fusion of remote sensing images can be divided into four categories: Arithmetic Combination techniques (AC), Component Substitution (CS) fusion techniques, Frequency Filtering methods (FFM) and Multi-Resolution Analysis (MRA) based fusion techniques.

2.1 Arithmetic Combination techniques (AC)

AC methods directly perform some type of arithmetic operation on the MS and PAN bands, such as addition, multiplication, normalized division, ratios and subtraction, which have been combined in
different ways to achieve a better fusion effect. These models assume that there is a high correlation between the PAN band and each of the MS bands [7]. One such technique is the Brovey Transform (BT).

Brovey Transform (BT)

The Brovey Transform uses addition, division and multiplication for the fusion of three multispectral bands. Its basic processing steps are: (1) add the three multispectral bands together to form a sum image, (2) divide each multispectral band by the sum image, (3) multiply each quotient by the high-resolution Pan band [8]. Given the multispectral bands $MS_i$ (i = 1, 2, 3) and the Pan image, the fused image $FUS_i$, for each band, is obtained as

$FUS_i = \dfrac{MS_i}{\sum_{j=1}^{3} MS_j} \times PAN$

The Brovey Transform was developed to provide contrast in features such as shadows, water and high-reflectance areas. It was created to produce RGB images, and therefore only three bands at a time can be merged [9].
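As an illustration, the following minimal NumPy sketch implements the three BT steps. It assumes co-registered float arrays already resampled to a common grid; the small epsilon guard against division by zero is our addition, not part of the original formulation.

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-12):
    """Brovey Transform fusion of a 3-band MS image with a PAN image.

    ms  : float array of shape (rows, cols, 3), resampled to the PAN grid
    pan : float array of shape (rows, cols)
    """
    total = ms.sum(axis=2) + eps          # (1) sum of the three MS bands
    ratios = ms / total[..., None]        # (2) each band divided by the sum
    return ratios * pan[..., None]        # (3) each quotient scaled by PAN
```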
2.2 Component Substitution (CS) fusion techniques

Typical component substitution fusion algorithms include the following.

Intensity-Hue-Saturation (IHS) color transform

The IHS color transform can effectively separate a standard RGB (Red, Green, Blue) image into spatial (I) and spectral (H, S) information. The basic concept of IHS fusion is shown in Fig. 1.
Fig. 1. Schematic flowchart of IHS image fusion

Its most important steps are: (1) transform a color composite from the RGB space into the IHS space, (2) because the intensity (I) band resembles a panchromatic image, replace the I component by the higher-resolution panchromatic image, (3) transform the replaced components from the IHS space back to the original RGB space to obtain the fused image.
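These steps can be condensed when a linear IHS model is used: since only the I component is replaced, the inverse transform reduces to adding the intensity difference to every band (the "fast IHS" formulation of Tu et al. [4]). The sketch below assumes a 3-band float MS array already resampled to the PAN grid; it is one common linear variant, not the only possible IHS implementation.

```python
import numpy as np

def ihs_fusion(ms, pan):
    """Fast (linear) IHS fusion: replace the intensity component with PAN.

    Because only I changes, the inverse transform reduces to adding the
    difference (PAN - I) to every band.
    """
    intensity = ms.mean(axis=2)          # I = (R + G + B) / 3
    delta = pan - intensity              # new intensity minus old intensity
    return ms + delta[..., None]         # fused RGB bands
```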
Principal Component Analysis (PCA) technique

PCA is a statistical technique that transforms a multivariate inter-correlated data set into a new, uncorrelated data set. The basic concept of PCA fusion is shown in Fig. 2.

Fig. 2. Schematic flowchart of PCA image fusion

Its most important steps are: (1) perform a principal component transformation to convert a set of multispectral bands (three or more) into a set of principal components, (2) replace one principal component, usually the first, by the high-resolution panchromatic image, (3) perform a reverse principal component transformation to convert the replaced components back to the original image space. A set of fused multispectral bands is produced after the reverse transform. Compared with IHS fusion, PCA fusion has the advantage that the MS image may contain more than three bands: the near-infrared component can also be taken into account, or the MS image can come from more than one satellite sensor. For this reason, the PCA method is taken for experimental analysis.
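A minimal sketch of these three steps, assuming an (rows, cols, bands) float array resampled to the PAN grid. Matching PAN to the mean and standard deviation of the first principal component before substitution is common practice that we include here, although the text above does not spell it out.

```python
import numpy as np

def pca_fusion(ms, pan):
    """PCA fusion sketch: substitute PC1 with the (statistics-matched) PAN.

    ms  : float array (rows, cols, bands), bands >= 3, on the PAN grid
    pan : float array (rows, cols)
    """
    rows, cols, bands = ms.shape
    x = ms.reshape(-1, bands)
    mean = x.mean(axis=0)
    xc = x - mean
    cov = np.cov(xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    eigvecs = eigvecs[:, ::-1]               # reorder so PC1 comes first
    pcs = xc @ eigvecs                       # (1) forward PC transform
    pc1 = pcs[:, 0]
    p = pan.ravel()
    p = (p - p.mean()) / (p.std() + 1e-12) * pc1.std() + pc1.mean()
    pcs[:, 0] = p                            # (2) replace PC1 with PAN
    fused = pcs @ eigvecs.T + mean           # (3) reverse PC transform
    return fused.reshape(rows, cols, bands)
```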
2.3 Frequency Filtering method (FFM)

This method makes use of classical filter techniques in the spatial domain. Popular FFMs for pan-sharpening include the following [12].

High-Pass Filter (HPF)
The principle of HPF is to add the high-frequency information from the High Resolution Pan Image (HRPI) to the Low Resolution MS Image (LRMI) to obtain the High Resolution MS Image (HRMI) [15]-[17]. The high-frequency information is computed either by filtering the HRPI with a high-pass filter or by taking the original HRPI and subtracting the Low Resolution Pan Image (LRPI), which is the low-pass-filtered HRPI. The mathematical model is
$MS_{HR} = MS_{LR} + (PAN_{HR} - PAN_{LR})$

where $PAN_{LR} = PAN_{HR} * h_0$ and $h_0$ is a low-pass filter such as a boxcar filter. When boxcar filters are used, the filter length is crucial and must match the resolution ratio of the HRPI and LRMIs.

High Pass Modulation (HPM)

The principle of HPM is to transfer the high-frequency information of the HRPI to the LRMIs, with modulation coefficients that equal the ratio between the LRMIs and the LRPI. The LRPI is obtained by low-pass filtering the HRPI. The corresponding mathematical model is [16]

$MS_{HR} = MS_{LR} + \dfrac{MS_{LR}}{PAN_{LR}}\,(PAN_{HR} - PAN_{LR})$
where $PAN_{LR} = PAN_{HR} * h_0$ and $h_0$ is the same low-pass filter as used in the HPF method. If the contribution of the NIR band is considered in image fusion, HPF is slightly better than HPM, so HPF is taken here for analysis.
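Both FFM variants can be sketched directly from the two models above. The sketch assumes MS already resampled to the PAN grid and uses a boxcar (uniform) low-pass filter whose size equals an assumed resolution ratio of 4; the epsilon in HPM is our guard against division by zero.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def hpf_fusion(ms, pan, ratio=4):
    """HPF: add the PAN high frequencies to each resampled MS band.

    The boxcar low-pass length matches the HRPI/LRMI resolution ratio,
    as noted in the text above.
    """
    pan_lr = uniform_filter(pan, size=ratio)    # PAN_LR = PAN_HR * h0
    detail = pan - pan_lr                       # high-frequency component
    return ms + detail[..., None]

def hpm_fusion(ms, pan, ratio=4, eps=1e-12):
    """HPM: inject PAN detail modulated by the MS_LR / PAN_LR ratio."""
    pan_lr = uniform_filter(pan, size=ratio)
    detail = pan - pan_lr
    return ms + (ms / (pan_lr[..., None] + eps)) * detail[..., None]
```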
2.4 Multi-Resolution Analysis (MRA) based fusion technique

MRA-based fusion techniques [21] adopt multi-scale decomposition methods, such as the wavelet transform [6], to decompose the multispectral and panchromatic images at different levels and then derive spatial details that are imported into the finer scales of the multispectral images, enhancing their spatial detail.

Wavelet Transform (WT)

The wavelet transform decomposes a digital image into a set of multi-resolution images accompanied by wavelet coefficients for each resolution level. The wavelet coefficients of each level contain the spatial (detail) differences between two successive resolution levels. Wavelet-based fusion is performed in the following way: (1) decompose the high-resolution panchromatic image into a set of low-resolution panchromatic images with wavelet coefficients for each level, (2) replace a low-resolution panchromatic image with a multispectral band at the same resolution level, (3) perform a reverse wavelet transform to convert the decomposed and replaced panchromatic set back to the original panchromatic resolution level. The replacement and reverse transform are done three times, once for each multispectral band. Wavelet-based fusion methods usually rely on one of two algorithms, the Mallat algorithm or the à trous algorithm [14]. The Mallat-based dyadic wavelet transform uses decimation and is not shift invariant, whereas the à trous based dyadic wavelet transform does not use decimation and is shift invariant; therefore, the à trous algorithm works better than the Mallat algorithm. The key choices are the decomposition level and the wavelet style, which affect the fusion results.

À Trous Algorithm-Based Wavelet Transform

To decompose the data into wavelet coefficients, we use the discrete wavelet transform algorithm known as "à trous". Given an image p, the sequence of its approximations by multiresolution decomposition is $F_1(p) = p_1, F_2(p) = p_2, \ldots$ The wavelet planes $w_j$ are computed as the differences between two consecutive approximations [20]:

$w_j = p_{j-1} - p_j, \quad j = 1, \ldots, r, \quad p_0 = p$

The reconstruction formula can be written as

$p = p_r + \sum_{j=1}^{r} w_j$

where r refers to the decomposition level. To construct the sequence, successive convolutions are performed with a filter $h_0$ obtained from the scaling function. The use of a B3 cubic spline yields a dyadic low-pass scaling function, $h_0 = (1/16)[1, 4, 6, 4, 1]$ in one dimension. À trous based image fusion is performed in two ways: the Substitute Wavelet (SW) method and the Additive Wavelet (AW) method. In the SW method, the high frequency of the HRPI substitutes the low frequency of the LRMI; this eliminates low-frequency content of the LRMI and thus loses LRMI information. In the AW method, the high frequency of the HRPI is added to the LRMI; this maintains the frequency content of the LRMI and its information, but introduces redundant high frequencies. The redundancy can be reduced by introducing the LRPI, which is obtained with a Gaussian low-pass filter. The AW method is given by

$HRMI_i = LRMI_i + \sum_{j=1}^{n} w_j(HRPI - LRPI)$

where n refers to the decomposition level.
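A sketch of the à trous decomposition and the AW injection, assuming float images on a common grid. We read the summation above as the wavelet planes of (HRPI - LRPI), which is one plausible interpretation of the formula; the B3 kernel and the Gaussian low-pass LRPI follow the text.

```python
import numpy as np
from scipy.ndimage import convolve1d, gaussian_filter

B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # h0 from the B3 spline

def atrous_planes(img, levels):
    """Return the wavelet planes w_1..w_r and the residual p_r.

    At level j the B3 kernel is dilated by inserting 2**(j-1) - 1 zeros
    between taps (the 'holes' of the a trous algorithm).
    """
    planes, prev = [], img
    for j in range(1, levels + 1):
        step = 2 ** (j - 1)
        kernel = np.zeros((len(B3) - 1) * step + 1)
        kernel[::step] = B3                        # dilated h0
        cur = convolve1d(prev, kernel, axis=0, mode='reflect')
        cur = convolve1d(cur, kernel, axis=1, mode='reflect')
        planes.append(prev - cur)                  # w_j = p_{j-1} - p_j
        prev = cur
    return planes, prev            # sum(planes) + prev reconstructs img

def aw_fusion(ms, pan, levels=2, sigma=1.0):
    """Additive Wavelet fusion: add the detail planes of (HRPI - LRPI)."""
    lrpi = gaussian_filter(pan, sigma=sigma)       # Gaussian low-pass LRPI
    planes, _ = atrous_planes(pan - lrpi, levels)
    detail = np.sum(planes, axis=0)
    return ms + detail[..., None]
```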
3. PERFORMANCE EVALUATION

Mathematical methods were used to judge the quality of the fused images with respect to the improvement of spatial resolution while preserving the spectral content of the data. Each fused image is evaluated spectrally and spatially to provide a quantitative comparison of the different fusion algorithms. To measure spectral distortion, each merged image has to be compared with the original multispectral image using quantitative indicators. The correlation coefficient is the most widely used similarity metric and is insensitive to a constant gain and bias between two images. Another commonly used assessment metric is the Root Mean Square Error (RMSE). Statistical indices such as UIQI and ERGAS have also been calculated to compare the fusion results.
Correlation Coefficient (CC)

The correlation coefficient measures the closeness or similarity in small-size structures between the original and the fused images. It ranges from -1 to +1; values close to +1 indicate that the images are highly similar, while values close to -1 indicate that they are highly dissimilar.

$CC = \dfrac{\sum_{i=1}^{N}\sum_{j=1}^{N}\,(MS_{i,j} - \overline{MS})(F_{i,j} - \overline{F})}{\sqrt{\sum_{i=1}^{N}\sum_{j=1}^{N}(MS_{i,j} - \overline{MS})^2\;\sum_{i=1}^{N}\sum_{j=1}^{N}(F_{i,j} - \overline{F})^2}}$

where F is the fused image, MS the multispectral image, i and j the pixel indices, and $\overline{MS}$, $\overline{F}$ the respective mean values.
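Computed per band, the index is a few lines of NumPy; the sketch assumes two equally sized float arrays.

```python
import numpy as np

def correlation_coefficient(ms, fused):
    """CC between an original MS band and the corresponding fused band."""
    a = ms - ms.mean()
    b = fused - fused.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())
```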
Root Mean Square Error (RMSE)

RMSE is the root mean square error
computed between the degraded fused image and the original image. The best possible value is zero [20].
$RMSE = \sqrt{\dfrac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_r(i,j) - I_f(i,j)\right)^2}$
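A direct transcription of the formula, assuming two equally sized arrays:

```python
import numpy as np

def rmse(reference, fused):
    """Root mean square error between a reference and a fused band."""
    diff = reference.astype(float) - fused.astype(float)
    return np.sqrt(np.mean(diff ** 2))
```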
ERGAS

ERGAS is the abbreviation of Erreur Relative Globale Adimensionnelle de Synthèse (relative dimensionless global error in synthesis). For the comparison of different fusion methods, both spatial and spectral quality are taken into account. Spatial quality is estimated by the sharpness of edges, whereas spectral quality is estimated with a number of metrics; ERGAS is one of them, quantifying the amount of spectral distortion:

$ERGAS = 100\,\dfrac{h}{l}\sqrt{\dfrac{1}{N}\sum_{i=1}^{N}\dfrac{RMSE(B_i)^2}{M_i^2}}$

where N is the number of bands involved in the fusion, h/l is the ratio of the spatial resolutions of the original Pan and MS images, and $M_i$ is the mean value of the original spectral band $B_i$. ERGAS values larger than 3 indicate synthesized images of low quality, while an ERGAS below 3 represents satisfactory quality [20].
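In code the index reduces to one term per band; here h and l are the Pan and MS pixel sizes (1 m and 4 m in the experiments below), and the per-band RMSE is computed inline so the sketch stays self-contained.

```python
import numpy as np

def ergas(ms_bands, fused_bands, h, l):
    """ERGAS over N band pairs; h/l is the Pan/MS pixel-size ratio."""
    terms = []
    for ms, f in zip(ms_bands, fused_bands):
        band_rmse = np.sqrt(np.mean((ms.astype(float) - f.astype(float)) ** 2))
        terms.append((band_rmse / ms.mean()) ** 2)
    return 100.0 * (h / l) * np.sqrt(np.mean(terms))
```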
Universal Image Quality Index (UIQI)

The UIQI [13] measures how much of the salient information contained in the reference image is transferred to the fused image. UIQI is devised by considering loss of correlation, luminance distortion and contrast distortion. The range of this metric is from -1 to +1, and the best value is 1.
$UIQI = \dfrac{\sigma_{AB}}{\sigma_A\,\sigma_B}\cdot\dfrac{2\,\mu_A\,\mu_B}{\mu_A^2 + \mu_B^2}\cdot\dfrac{2\,\sigma_A\,\sigma_B}{\sigma_A^2 + \sigma_B^2}$
where σ denotes the standard deviation and µ the mean value. The first term on the right-hand side is the correlation coefficient, the second measures the luminance distortion, and the third the contrast distortion. Here UIQI is used to measure the similarity between the two images. For testing, the synthesized high-resolution MS images are spatially degraded to the resolution level of the original MS images (4 m) by bicubic interpolation; the UIQIs are then computed between the degraded images and the original MS images (RGB and NIR bands) at the 4-m resolution level.
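The index is sketched below in its global form for brevity; Wang and Bovik [13] compute it over sliding windows and average the local values, so treat this as an approximation.

```python
import numpy as np

def uiqi(a, b):
    """Universal Image Quality Index between two equally sized bands,
    computed globally (the product form of the three factors above)."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov_ab = ((a - mu_a) * (b - mu_b)).mean()
    return (4 * cov_ab * mu_a * mu_b) / (
        (var_a + var_b) * (mu_a ** 2 + mu_b ** 2)
    )
```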
4. EXPERIMENTAL RESULTS

For the evaluation of the fusion methods, the 1-m resolution Ikonos-2 panchromatic band and the Red, Green, Blue and Near Infrared bands of the 4-m resolution multispectral image are used. The 4-m resolution multispectral bands are resampled to a 1-m pixel size before fusion. The original panchromatic image and the resampled MS image are shown in Fig. 3 and Fig. 4 respectively. The resampled multispectral and Pan bands are then fused using the BT, PCA, HPF and WT algorithms, and the corresponding fusion results are shown in Fig. 5a, Fig. 5b, Fig. 5c and Fig. 5d. The quantitative evaluation results are given in Table 1. To simplify the comparison of the different fusion methods, the values of the quantitative indicators CC, RMSE, ERGAS and UIQI of the fused images are also charted in Fig. 6a, Fig. 6b, Fig. 6c and Fig. 6d respectively.
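For reference, the resampling step might look like the following sketch; the array shape is hypothetical and SciPy's cubic spline interpolation (order=3) stands in for bicubic resampling.

```python
import numpy as np
from scipy.ndimage import zoom

# Hypothetical 4-m MS cube (rows, cols, 4 bands) upsampled to the 1-m
# PAN grid: a zoom factor of 4 in each spatial dimension, bands untouched.
ms_4m = np.random.rand(256, 256, 4)
ms_1m = zoom(ms_4m, (4, 4, 1), order=3)
assert ms_1m.shape == (1024, 1024, 4)
```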
Fig. 3. Original panchromatic image

Fig. 4. Resampled multispectral image
Fig. 5a. BT fusion result
Fig. 5b. PCA fusion result
Fig. 5c. HPF fusion result
Fig. 5d. WT fusion result
Table 1. Evaluation results of fusion methods

Assessment Index    BT        PCA       HPF       WT
CC                  0.3823    0.2547    0.9262    0.9327
RMSE                10.4302   11.3506   0.9845    0.9413
ERGAS               18.5813   18.6865   3.1542    2.9421
UIQI (R)            0.5658    0.4387    0.7929    0.8775
UIQI (G)            0.3876    0.4321    0.7956    0.8213
UIQI (B)            0.1947    0.1913    0.7279    0.7649
UIQI (NIR)          0.8428    0.7452    0.8469    0.8502
Fig. 6a. Correlation coefficient for the various fusion methods

Fig. 6b. RMSE for the various fusion methods

Fig. 6c. ERGAS for the various fusion methods
Fig. 6d. UIQI for the various fusion methods

Combining the visual inspection results with the quantitative results, the experimental findings are in agreement with the theoretical analysis. The BT and PCA methods produce considerable spectral distortion, while the HPF method produces only slight spectral distortion. The WT method preserves the spectral and spatial information of the objects in the original images better than the other methods, producing the images closest to what the corresponding multispectral sensors would observe at the high-resolution level.
5. CONCLUSION

Image fusion aims at integrating the geometric detail of a high-resolution panchromatic image and the color information of a low-resolution multispectral image to produce a high-resolution MS image. This paper presents the four categories of pixel-level image fusion methods, namely Arithmetic Combination techniques (AC), Component Substitution (CS) fusion techniques, Frequency Filtering methods (FFM) and Multi-Resolution Analysis (MRA) based fusion techniques, and analyzes their performance using various quantitative indicators. The fusion techniques BT, IHS and PCA provide visually superior high-resolution MS images but ignore the requirement of high-quality synthesis of spectral information, producing more spectral distortion. HPF and HPM show better performance in terms of high-quality synthesis of spectral information, but the wavelet-based WT method shows better results both qualitatively and quantitatively. We therefore infer that the wavelet transform provides a computationally efficient image fusion technique, and we intend to focus on wavelet-based fusion so as to improve upon the existing methods. At present it is generally applied at the pixel level; its application to feature-level and decision-level fusion requires further study.

REFERENCES

[1] Wenbo W., Y. Jing, and K. Tingjun, "Study of Remote Sensing Image Fusion and Its Application in Image Classification," The Int. Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, Vol. XXXVI, Part B7, 2008, pp. 1141-1146.
[2] K. Edwards and P. A. Davis, "The use of Intensity-Hue-Saturation transformation for producing color shaded-relief images," Photogramm. Eng. Remote Sens., vol. 60, no. 11, 1994, pp. 1369-1374.
[3] J. G. Liu, "Smoothing filter-based intensity modulation: A spectral preserve image fusion technique for improving spatial details," Int. J. Remote Sens., vol. 21, no. 18, 2000, pp. 3461-3472.
[4] T. M. Tu, S. C. Su, H. C. Shyu, and P. S. Huang, "A new look at IHS-like image fusion methods," Inf. Fusion, vol. 2, no. 3, 2001, pp. 177-186.
[5] R. A. Schowengerdt, Remote Sensing: Models and Methods for Image Processing, 2nd ed. Orlando, FL: Academic, 1997.
[6] J. Núñez, X. Otazu, O. Fors, A. Prades, V. Palà, and R. Arbiol, "Multiresolution-based image fusion with additive wavelet decomposition," IEEE Transactions on Geoscience and Remote Sensing, vol. 37, 1999, pp. 1204-1211.
[7] T. Ranchin and L. Wald, "Fusion of high spatial and spectral resolution images: The ARSIS concept and its implementation," Photogramm. Eng. Remote Sens., vol. 66, no. 1, 2000, pp. 49-61.
[8] B. Aiazzi, L. Alparone, S. Baronti, and A. Garzelli, "Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis," IEEE Trans. Geosci. Remote Sens., vol. 40, no. 10, Oct. 2002, pp. 2300-2312.
[9] J. G. Liu and J. M. Moore, "Pixel block intensity modulation: Adding spatial detail to TM band 6 thermal imagery," Int. J. Remote Sens., vol. 19, no. 13, 1998, pp. 2477-2491.
[10] Y. Zhang, "A new merging method and its spectral and spatial effects," Int. J. Remote Sens., vol. 20, no. 10, 1999, pp. 2003-2014.
[11] M. Ehlers, S. Klonus, P. Johan Åstrand, and P. Rosso, "Multi-sensor image fusion for pansharpening in remote sensing," International Journal of Image and Data Fusion, vol. 1, no. 1, March 2010, pp. 25-45.
[12] R. A. Schowengerdt, Remote Sensing: Models and Methods for Image Processing, 2nd ed. Orlando, FL: Academic, 1997.
[13] Z. Wang and A. C. Bovik, "A universal image quality index," IEEE Signal Processing Letters, vol. 9, no. 3, 2002, pp. 81-84.
[14] P. Dutilleux, "An implementation of the 'algorithme à trous' to compute the wavelet transform," Berlin, Germany: Springer-Verlag, 1989, pp. 298-304.
[15] K. Shivsubramani, P. Soman, and Krishnamoorthy, "Implementation and Comparative Study of Image Fusion Algorithms," International Journal of Computer Applications (0975-8887), vol. 9, no. 2, November 2010, pp. 3-6.
[16] Anjali Malviya and S. G. Bhirud, "Image Fusion of Digital Images," Int. J. Recent Trends in Engineering, vol. 2, no. 3, November 2009, pp. 2-4.
[17] V. P. S. Naidu and J. R. Raol, "Pixel-level image fusion using wavelets and principal component analysis," Defence Science Journal, vol. 58, no. 3, May 2008, pp. 338-352.
[18] A. Goshtasby and S. G. Nikolov, "Image fusion: Advances in the state of the art," editorial, special issue on image fusion, Information Fusion, vol. 8, no. 2, April 2007, pp. 114-118.
[19] H. Wang, J. Peng, and W. Wu, "Fusion algorithm for multisensor image based on discrete multiwavelet transform," IEE Proc. Vision, Image and Signal Processing, vol. 149, no. 5, 2002.
[20] L. Wald, T. Ranchin, and M. Mangolini, "Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images," Photogramm. Eng. Remote Sens., vol. 63, no. 6, 1997, pp. 691-699.
[21] K. Amolins, Y. Zhang, and P. Dare, "Wavelet based image fusion techniques - An introduction, review and comparison," ISPRS Journal of Photogrammetry & Remote Sensing, vol. 62, 2007, pp. 249-263.