A New Pan-Sharpening Method Using Joint Sparse FI Image Fusion Algorithm

Author

  • Ashish Dhore
Abstract

Recently, sparse representation (SR) and joint sparse representation (JSR) have attracted a lot of interest in image fusion. SR models signals as sparse linear combinations of prototype signal atoms that make up a dictionary. JSR assumes that different signals acquired by various sensors of the same scene form an ensemble: the signals share a common sparse component, while each individual signal owns an innovation sparse component. JSR offers lower computational complexity than SR. In this paper, we propose a new pan-sharpening method named Joint Sparse Fusion of Images (JSparseFI). To capture image details more efficiently, we propose a generalized JSR in which the signal ensemble depends on two dictionaries. The SparseFI method does not assume any spectral composition model of the panchromatic image and, owing to the super-resolution capability and robustness of sparse signal reconstruction algorithms, it yields higher spatial resolution and, in most cases, less spectral distortion than conventional methods. The proposed technique is compared with existing methods such as intensity-hue-saturation (IHS) image fusion, the Brovey transform, principal component analysis (PCA) and fast IHS image fusion. The pan-sharpened images are quantitatively evaluated for spatial and spectral quality using a set of well-established measures from the field of remote sensing: ERGAS, Q4 and SAM, which measure spectral quality. The pan-sharpened high-resolution MS image produced by the proposed method is competitive with, or even superior to, images fused by other well-known methods.

Keywords: JSparseFI, compressed sensing, image fusion, multispectral (MS) image, panchromatic (PAN) image, remote sensing, sparse representation.

INTRODUCTION

"Pan sharpening" is shorthand for "panchromatic sharpening": using a panchromatic (single-band) image to "sharpen" a multispectral image.
In this sense, to "sharpen" means to increase the spatial resolution of a multispectral image. A multispectral image has a higher spectral resolution than a panchromatic image, while a panchromatic image often has a higher spatial resolution than a multispectral image. A pan-sharpened image represents a fusion of the multispectral and panchromatic images that gives the best of both image types: high spectral resolution AND high spatial resolution. That, simply put, is the why of pan sharpening. Pan-sharpening is defined as the process of synthesizing an MS image at a higher spatial resolution, equivalent to that of the PAN image; it should enhance the spatial resolution of the MS image while preserving its spectral resolution. Pan-sharpening has continued to receive attention over the years. Most of this paper is concerned with the how of pan sharpening. First, a review of some fundamental concepts is in order.

A) Multispectral Data

A multispectral image is an image that contains more than one spectral band. It is formed by a sensor capable of separating light reflected from the earth into discrete spectral bands. A color image is a very simple example of a multispectral image: it contains three bands, corresponding to the blue, green and red wavelength bands of the electromagnetic spectrum. The full electromagnetic spectrum covers all forms of radiation, from extremely short-wavelength gamma rays through long-wavelength radio waves. In remote sensing imagery, we are limited to radiation that is either reflected or emitted from the earth and that can also pass through the atmosphere to the sensor. The electromagnetic spectrum is the wavelength (or frequency) mapping of electromagnetic energy, as shown below.

International Journal of Engineering Research and General Science, Volume 2, Issue 4, June-July 2014, ISSN 2091-2730, www.ijergs.org

Fig. 1: Electromagnetic spectrum

Electro-optical sensors sense solar radiation that originates at the sun and is reflected from the earth in the visible to near-infrared region (just to the right of red in the figure above). Thermal sensors sense solar radiation that is absorbed by the earth and emitted as longer-wavelength thermal radiation in the mid- to far-infrared regions. Radar sensors provide their own source of energy in the form of microwaves that are bounced off the earth back to the sensor. A conceptual diagram of a multispectral sensor is shown below.

Fig. 2: Simplified diagram of a multispectral scanner

In this diagram, the incoming radiation is separated into spectral bands using a prism. We have all seen how a prism is able to do this, and we have seen the earth's atmosphere act like a prism when we see rainbows. In practice, prisms are rarely used in modern sensors. Instead, a diffraction grating, a piece of material with many thin grooves carved into it, is used. The grooves cause the light to be reflected and transmitted in different directions depending on wavelength. You can see a rough example of a diffraction grating when you look at a CD and notice the multi-color effect of light reflecting off it as you tilt it at different angles. After separating the light into different "bins" based on wavelength ranges, the multispectral sensor forms an image from each of the bins and then combines them into a single image for exploitation. Multispectral images are designed to take advantage of the different spectral properties of materials on the earth's surface. The most common example is the detection of healthy vegetation. Since healthy vegetation reflects much more near-infrared light than visible light, a sensor that combines visible and near-infrared bands can be used to distinguish healthy from less healthy vegetation.
Typically this is done with one or more vegetation indices such as the Normalized Difference Vegetation Index (NDVI), defined as the difference between the near-infrared and red reflectance divided by the sum of these two values. Some typical spectral signatures of vegetation, soil and water are shown below.

Fig. 3: Reflectance spectra of some common materials. Red, green and blue regions of the spectrum are shown. Near-IR is just to the right of the red band; ultraviolet is to the left of the blue band.

These are only representative spectra. Each type of vegetation, water, soil and other surface type has a different reflectance spectrum, and outside of a laboratory these also depend on the sun's position in the sky as well as the satellite's position. When there are more bands covering more parts of the electromagnetic spectrum, more materials can be identified using more advanced algorithms such as supervised and unsupervised classification, in addition to simple but effective band-ratio and normalization methods such as the NDVI. RemoteView has several tools that take advantage of multispectral data, including the Image Calculator for computing the NDVI and other indices and a robust multispectral classification capability that includes both supervised and unsupervised classification. This paper, however, is focused on the pan sharpening tools within RemoteView.

B) Panchromatic Data

In contrast to the multispectral image, a panchromatic image contains only one wide band of reflectance data. The data is usually representative of a range of bands and wavelengths, such as visible or thermal infrared; that is, it combines many colors, so it is "pan" chromatic. A panchromatic image of the visible bands is more or less a combination of red, green and blue data into a single measure of reflectance.
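As a concrete illustration (not taken from the paper), the NDVI described above can be computed per pixel from the red and near-infrared bands; a minimal NumPy sketch, using the standard (NIR - Red)/(NIR + Red) form:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index:
    (NIR - Red) / (NIR + Red), computed per pixel.
    A tiny epsilon guards against division by zero
    over completely dark pixels."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + 1e-12)

# Healthy vegetation reflects far more NIR than red,
# so its NDVI approaches +1; bare soil sits near 0.
veg = ndvi(0.05, 0.50)   # roughly 0.82
```

Band ratios like this are why a sensor combining visible and near-infrared bands can separate healthy from stressed vegetation.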
Modern multispectral scanners also generally include some radiation at slightly longer wavelengths than red light, called "near-infrared" radiation. Panchromatic images can generally be collected at higher spatial resolution than multispectral images because the broad spectral range allows smaller detectors to be used while maintaining a high signal-to-noise ratio. For example, 4-band multispectral data is available from QuickBird and GeoEye. For each of these, the panchromatic spatial resolution is about four times better than the multispectral resolution. Panchromatic imagery from QuickBird has a spatial resolution of about 0.6 meters, while the same sensor collects the multispectral data at about 2.4 meters resolution. For GeoEye's Ikonos, the panchromatic and multispectral spatial resolutions are about 1.0 meter and 4.0 meters, respectively. Both sensors can collect co-registered panchromatic and four-band (red, green, blue and near-infrared) multispectral images. With developments in sensing technology, multisensor systems have become a reality in various fields such as remote sensing, medical imaging, machine vision and the military applications for which they were developed. The result of the use of these techniques is an increase in the amount of data available. Image fusion provides an effective way of reducing this increasing volume of information while at the same time extracting all the useful information from the source images. Multi-sensor data often presents complementary information, so image fusion provides an effective method to enable comparison and analysis of data. The aim of image fusion, apart from reducing the amount of data, is to create new images that are more suitable for the purposes of human/machine perception and for further image-processing tasks such as segmentation, object detection or target recognition in applications such as remote sensing and medical imaging.
For example, visible-band and infrared images may be fused to aid pilots landing aircraft in poor visibility. A remote sensing platform uses a variety of sensors; among the fundamental ones are the panchromatic (PAN) sensor and the multispectral (MS) sensor. The PAN sensor has a higher spatial resolution: each pixel in the PAN image covers a smaller area on the ground compared to the MS image from the same platform. On the other hand, the MS sensor has a higher spectral resolution, which means that each of its bands corresponds to a narrower range of electromagnetic wavelengths compared to the PAN sensor. There are several reasons why there is no single sensor with both high spatial and high spectral resolution. One reason is the incoming radiation energy: as the PAN sensor covers a broader range of the spectrum, its detectors can be smaller while still receiving the same amount of radiation energy as the MS sensor. Other reasons include limitations of on-board storage capacity and communication bandwidth.

I. DIFFERENT METHODS TO PERFORM PAN-SHARPENING

A) IHS Image Fusion

IHS is one of the most widespread image fusion methods in remote sensing applications. The IHS transform is a technique in which RGB space is replaced by the IHS space of intensity (I), hue (H) and saturation (S). The fusion process that uses this IHS transform is done in the following three steps:
1) First, convert the RGB space into the IHS space (IHS transform).
2) Replace the value of intensity I (= (R + G + B)/3) with the value of the PAN image.
3) Transform back into the original RGB space (inverse IHS transform).

B) PCA Method

The PCA technique is a decorrelation scheme used for various mapping and information-extraction tasks in remote sensing image data. The procedure to merge the RGB and PAN images using the PCA fusion method is similar to that of the IHS method.
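Because intensity enters the inverse IHS transform linearly, the three IHS steps above collapse to an additive shortcut, which is the basis of the fast IHS fusion mentioned in the abstract: add (PAN - I) to every band. A minimal NumPy sketch, with image shapes chosen purely for illustration:

```python
import numpy as np

def ihs_pansharpen(ms, pan):
    """IHS-style fusion: replace intensity I = (R+G+B)/3 with
    the PAN band. Since I enters the inverse transform linearly,
    swapping I for PAN is equivalent to adding (pan - I) to
    each band.

    ms:  (H, W, 3) multispectral image, already upsampled and
         co-registered to the PAN grid
    pan: (H, W) panchromatic image
    """
    intensity = ms.mean(axis=2)            # I = (R + G + B) / 3
    return ms + (pan - intensity)[..., None]

# Toy example: uniform bands 0.2/0.4/0.6 give I = 0.4;
# fusing with pan = 0.5 shifts every band up by 0.1.
ms = np.stack([np.full((2, 2), v) for v in (0.2, 0.4, 0.6)], axis=2)
fused = ihs_pansharpen(ms, np.full((2, 2), 0.5))
```

Note that the intensity of the fused result equals the PAN image exactly, which is where the spatial detail comes from; the hue and saturation of the original MS image are untouched.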
The fusion process that uses PCA is done in the following three steps:
1) First, convert the RGB space into the first principal component (PC1), the second principal component (PC2) and the third principal component (PC3) by PCA.
2) Replace the first principal component (PC1) with the value of the PAN image.
3) Transform back into the original RGB space (inverse PCA).

C) Brovey Transform (BT)

BT is a simple image fusion method that preserves the relative spectral contribution of each pixel but replaces its overall brightness with that of the high-resolution PAN image.

II. SPARSEFI ALGORITHM FOR IMAGE FUSION

Pan-sharpening requires a low-resolution (LR) multispectral image Y with N channels and a high-resolution (HR) panchromatic image X0, and aims at increasing the spatial resolution of Y while keeping its spectral information, i.e., generating an HR multispectral image X using both Y and X0 as inputs. The SparseFI algorithm reconstructs the HR multispectral image efficiently, ensuring both high spatial and spectral resolution with little spectral distortion. It consists of three main steps:
1) Dictionary learning
2) Sparse coefficient estimation
3) HR multispectral image reconstruction

A) Dictionary Learning

The HR pan image X0 is low-pass filtered and downsampled by a factor FDS (typically 4-10) such that its final point spread function is similar to that of the LR multispectral image. The resulting LR version of X0 is called Y0. This step implicitly assumes co-registration of the different channels, which is required anyway. The LR pan image Y0 and the LR multispectral image Y are tiled into small, partially overlapping patches Y0 and Yk, where k stands for the kth channel and k = 1, ..., N. All the LR pan patches Y0, with their pixel values arranged in column vectors, form the matrix Dl, called the LR dictionary.
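The tiling just described for Dl can be sketched as follows; the patch size and step are illustrative choices, not values from the paper (any step smaller than the patch size gives the partial overlap mentioned above):

```python
import numpy as np

def build_lr_dictionary(y0, patch=5, step=3):
    """Tile the LR pan image y0 into partially overlapping
    patch x patch blocks; each block, flattened into a column
    vector, becomes one atom of the LR dictionary Dl."""
    rows, cols = y0.shape
    atoms = []
    for r in range(0, rows - patch + 1, step):
        for c in range(0, cols - patch + 1, step):
            atoms.append(y0[r:r + patch, c:c + patch].ravel())
    return np.column_stack(atoms)   # shape: (patch*patch, n_atoms)

Dl = build_lr_dictionary(np.random.rand(20, 20))
# a 20x20 LR image yields 6x6 = 36 overlapping 5x5 atoms
```

The HR dictionary Dh is built the same way from X0, with patches FDS times larger, so that HR and LR atoms stay in one-to-one correspondence.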
Likewise, the HR dictionary Dh is generated by tiling the HR pan image X0 into patches of FDS times the size of the LR pan image patches, such that each HR patch corresponds to an LR patch. These image patches are called the "atoms" of the dictionaries.

B) Sparse Coefficient Estimation

In this step, each LR multispectral patch in a particular channel is represented as a linear combination of LR PAN patches, the atoms of the dictionary, weighted by a coefficient vector. Because the dictionary is overcomplete, its atoms are not orthogonal and the linear system admits an infinite number of solutions; the coefficient vector with the fewest nonzero entries, i.e., the sparsest representation in the LR dictionary, is sought.

Fig. 4: Flow chart of the SparseFI method

C) HR Multispectral Image Reconstruction

Since each HR image patch Xk is assumed to share the same sparse coefficients as the corresponding LR image patch Yk in the coupled HR/LR dictionary pair, i.e., the coefficients of Xk in Dh are identical to the coefficients of Yk in Dl, the final sharpened multispectral image patches Xk are reconstructed as Xk = Dh ak, where ak is the sparse coefficient vector estimated in the previous step.
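Steps B and C can be sketched end to end. SparseFI poses the coefficient estimation as an l1 (basis pursuit) problem; the greedy orthogonal matching pursuit below is a simple, hedged stand-in for that solver, and the toy dictionaries are chosen only to make the recovery easy to verify:

```python
import numpy as np

def omp(Dl, y, n_nonzero=3):
    """Greedy sparse coding (orthogonal matching pursuit), a
    simple stand-in for SparseFI's l1 solver: represent the LR
    multispectral patch y with few atoms of the LR dictionary Dl."""
    residual, support = y.astype(float).copy(), []
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(Dl.T @ residual))))
        # refit y on all atoms selected so far
        coef, *_ = np.linalg.lstsq(Dl[:, support], y, rcond=None)
        residual = y - Dl[:, support] @ coef
    alpha = np.zeros(Dl.shape[1])
    alpha[support] = coef
    return alpha

# Toy coupled dictionaries: the HR patch shares the LR patch's
# coefficients, so reconstruction is simply x_k = Dh @ alpha.
Dl = np.eye(25)[:, :10]                 # 10 trivially chosen LR atoms
y_k = 2.0 * Dl[:, 0] + 1.0 * Dl[:, 5]   # an LR patch built from 2 atoms
alpha = omp(Dl, y_k, n_nonzero=2)       # recovers those 2 coefficients
```

In the real algorithm Dh holds the corresponding HR pan patches; applying alpha to Dh yields the sharpened patch, and the recovered patches are tiled back (averaging the overlaps) into the sharpened channel Xk.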




Publication date: 2014