International Journal of Computer Trends and Technology (IJCTT) – Volume 8 Number 4 – Feb 2014
Performance Evaluation of Various Pixel Level Fusion Methods for Satellite Remote Sensor Images
G. Dheepa*1, Dr. S. Sukumaran2
1 (Ph.D. Scholar, Department of Computer Science, Erode Arts and Science College, Erode, Tamilnadu, India)
2 (Associate Professor, Department of Computer Science, Erode Arts and Science College, Tamilnadu, India)
ABSTRACT:
Remote sensing systems deployed on satellites transmit two types of images to the ground: the panchromatic (PAN) image with high spatial resolution and the multispectral (MS) image with coarser resolution. Many GIS applications require both high spatial and high spectral information in a single image. Satellite image fusion aims to integrate the spatial detail of a high-resolution panchromatic (PAN) image and the color information of a low-resolution multispectral (MS) image to produce a high-resolution multispectral image. Many pan-sharpening, or pixel-based image fusion, techniques exist to enhance the spatial resolution of the MS image while preserving its spectral properties. This paper evaluates the performance of various pixel-level fusion methods and assesses the quality of the fused images using several indices.
Keywords: Image fusion, Pixel-level fusion, Brovey transform, IHS transform fusion, Wavelet transform.
1. INTRODUCTION
Satellites usually acquire several images from frequency bands in the visual and non-visual range. Each monochrome image is referred to as a band, and a collection of several bands of the same scene acquired by a sensor is called a multispectral (MS) image [1]. A combination of three bands displayed in an RGB (Red, Green, Blue) color system produces a color image. Most earth observation satellites, such as SPOT, Ikonos, QuickBird, Formosat or OrbView, and also some digital airborne sensors such as DMC or UltraCam, record image data in two modes: a low-resolution multispectral mode and a high-resolution panchromatic mode. A high spatial resolution panchromatic (PAN) image gives detailed geometric features, while the multispectral images contain richer spectral information. In general, a PAN image covers a broader wavelength range, while an MS band covers a narrower spectral range. To receive the same amount of incoming energy, the size of a PAN detector can be smaller than that of an MS detector; therefore, on the same satellite or airborne platform, the resolution of the PAN sensor can be higher than that of the MS sensor. In addition, the data volume of a high-resolution MS image is significantly greater than that of a bundled high-resolution PAN image and low-resolution MS image. This bundled solution can mitigate the problems of limited on-board storage capacity and limited data transmission rates from platform to ground. Considering these limitations, it is clear that the most effective way to provide remote sensing images with both high spatial and high spectral resolution is to develop effective image fusion techniques.
* Corresponding author
ISSN: 2231-2803
The objective of iconic image fusion is to combine the panchromatic and multispectral information into a fused multispectral image that retains the spatial information of the high-resolution panchromatic image and the spectral characteristics of the lower-resolution multispectral image. Applications for integrated image datasets include environmental/agricultural assessment, urban mapping, and change detection [2]. With appropriate algorithms it is possible to combine multispectral and panchromatic bands and produce a synthetic image with their best characteristics; this process is known as satellite multisensor merging, fusion, or sharpening [3].
Many pixel-level fusion methods for remote sensing images have been presented in the literature. The pixels manipulated to obtain the resultant image should have the same spatial resolution in both sources, so before fusing two sources at the pixel level it is necessary to perform a geometric registration and a radiometric adjustment of the images to one another. When the images are obtained from sensors on different satellites, as in the fusion of SPOT or IRS with Landsat, registration accuracy is very important; registration is less of a problem with simultaneously acquired images, as with Ikonos/QuickBird PAN and MS images. Since the PAN image has a different spatial resolution from the MS images, resampling the MS images to the spatial resolution of the PAN image is an essential step in some fusion methods to bring the MS images to the same size as the PAN image.
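The resampling step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the toy array sizes are hypothetical, and bicubic interpolation (order=3) is merely one common choice for upsampling MS bands to the PAN grid.

```python
import numpy as np
from scipy.ndimage import zoom

def upsample_ms_to_pan(ms, pan_shape, order=3):
    """Resample a multispectral cube of shape (bands, h, w) to the PAN grid.

    order=3 gives bicubic interpolation; order=0 (nearest) or
    order=1 (bilinear) are cheaper alternatives.
    """
    bands, h, w = ms.shape
    zh, zw = pan_shape[0] / h, pan_shape[1] / w
    # Interpolate each band spatially; the band axis (factor 1) is untouched.
    return zoom(ms, (1, zh, zw), order=order)

# Toy example: a 4-band MS image at 1/4 the PAN resolution
# (comparable to Ikonos 4 m MS versus 1 m PAN).
ms = np.random.rand(4, 64, 64)
pan = np.random.rand(256, 256)
ms_up = upsample_ms_to_pan(ms, pan.shape)
print(ms_up.shape)  # (4, 256, 256)
```

After this step the MS bands and the PAN image share one pixel grid, which is the precondition for the pixel-level operations surveyed in the next section.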
2. PIXEL LEVEL FUSION METHODS
In general, algorithms for pixel-level fusion of remote sensing images can be divided into four categories: Arithmetic Combination (AC) techniques, Component Substitution (CS) fusion techniques, Frequency Filtering Methods (FFM), and Multi-Resolution Analysis (MRA) based fusion techniques.
2.1 Arithmetic Combination techniques (AC)
AC methods directly perform some arithmetic operation on the MS and PAN bands, such as addition, multiplication, normalized division, ratioing and subtraction, which have been combined in
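The arithmetic-combination family can be illustrated with the Brovey transform named in the keywords, which combines normalized division and multiplication. The sketch below is an assumption-laden toy, not the paper's code: it takes MS bands already resampled to the PAN grid and scales each by the ratio of PAN to the sum of the MS bands.

```python
import numpy as np

def brovey_fuse(ms_up, pan, eps=1e-6):
    """Brovey transform on an MS cube of shape (bands, h, w).

    Each band is multiplied by pan / sum(bands), injecting the PAN
    spatial detail while preserving the bands' relative proportions.
    """
    total = ms_up.sum(axis=0) + eps  # eps guards against division by zero
    return ms_up * (pan / total)[None, :, :]

# Toy 3-band example, all images at PAN resolution.
ms_up = np.random.rand(3, 256, 256)
pan = np.random.rand(256, 256)
fused = brovey_fuse(ms_up, pan)
print(fused.shape)  # (3, 256, 256)
```

A useful property of this formulation is that the fused bands sum (up to eps) to the PAN image, which is why Brovey output tends to look spatially sharp but can distort spectral values when the PAN and MS radiometry differ.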
http://www.ijcttjournal.org
Page 184