Hyperspectral Imaging Seminar: HSI Sensor Fusion

Noa Privman Horesh
December 2012
Many uses for fusion:
 Visualization – fusion between bands
 Sharpening – fusion between a hyperspectral image and a panchromatic image
 Detection and classification – fusion between a hyperspectral image and a panchromatic / FOPEN SAR / LIDAR image
In lecture 3 – Displaying of Hyperspectral Images on RGB Displays – we saw several methods:
1BT-Based Band Selection
Principal Components Analysis
Spectral Weighting Envelopes
Visualization – cont’
 Another method for obtaining better visualization is hierarchical fusion based on vector quantization and bilateral filtering.
Hierarchical Fusion Using Vector
Quantization for Visualization of
Hyperspectral Images
 A typical hyperspectral image data set in
remote sensing contains a few hundred
images to be fused into a single image (for
grayscale) or three images (for RGB).
 Fusing all the bands together requires heavy computation and a lot of memory.
Visualization - Hierarchical Fusion
 For a hyperspectral image cube of dimensions X×Y×N, vector quantization (VQ) based fusion is applied across contiguous subsets of dimensions X×Y×P to generate B = N/P different fused images at the first stage of the hierarchy.
 In the subsequent levels of the hierarchy, contiguous images are grouped together into smaller subsets and fused using bilateral filtering.
Visualization - Hierarchical Fusion – cont’
 Images I1 to IN from N contiguous bands are organized into Group1 to GroupB using uniform grouping, so each group has P = N/B images, each of size X×Y (see the grouping sketch below).
 First stage – each group is individually fused using vector quantization.
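As an illustration, a minimal numpy sketch of the uniform grouping step (the function name and array layout are assumptions, with the cube held as an (X, Y, N) array and N divisible by B):

```python
import numpy as np

def uniform_groups(cube, B):
    """Split an (X, Y, N) hyperspectral cube into B contiguous groups,
    each holding P = N // B bands of size X x Y."""
    X, Y, N = cube.shape
    P = N // B  # P = N/B, as on the slide
    return [cube[:, :, g * P:(g + 1) * P] for g in range(B)]

# e.g. a 220-band cube -> 10 groups of 22 bands each
groups = uniform_groups(np.random.rand(64, 64, 220), B=10)
```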
Fusion using Vector Quantization
 Vector quantization is used to compress the information and manipulate the data in a way that retains the most important content.
 Each image Ik is divided into sub-blocks of size m×m, giving rise to (X×Y)/m² image sub-blocks per image.
 In a given group there are IVn = (X×Y×P)/m² image sub-blocks.
Generate first code-vector
 Convert these image sub-blocks to one-dimensional vectors, each of size m², and collect them into a cluster (matrix) S of size IVn×m².
 The first code-vector CV(1,1) of the codebook of size 1 is computed as the column-wise average of the entire cluster.
Generate code book
 The code-vector CV(1,1) is then split into two code-vectors by adding and subtracting a tolerance ε, in order to double the codebook size:
CV(2,1) = CV(1,1)·(1 + ε), CV(2,2) = CV(1,1)·(1 − ε)
Generate code book – cont’
 The original cluster S is divided into two clusters, S1 and S2, based on the distortions D1(2,1) and D1(2,2) with respect to the two code-vectors.
 By comparing the values D1(2,1)(k) and D1(2,2)(k), the image vectors of the cluster S are grouped into two sub-clusters S1 and S2, each vector joining the sub-cluster whose code-vector gives it the smaller distortion.
Generate code book – cont’
 The quality of the codebook is enhanced by updating the existing code-vectors: each is recalculated as the mean of the image vectors in its sub-cluster (S1 or S2).
 The code-vectors are updated to these new mean values.
 The corresponding distortions are then recalculated for the complete image vector set S to get updated sub-clusters S1 and S2.
Generate code book – cont’
 The update is repeated until the total distortion at the current iteration is no longer significantly less than the distortion at the previous iteration.
 The splitting is then repeated until there are n code-vectors, each of size m², in the codebook (size n×m²); a sketch of the whole procedure follows.
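The codebook construction can be sketched as a generic LBG-style procedure following the steps above. This is not the paper's exact code: the block size m, the codebook size n (assumed a power of two), and the multiplicative ±ε split are illustrative choices.

```python
import numpy as np

def build_codebook(group, m=4, n=16, eps=0.01, tol=1e-3):
    """LBG-style codebook for one band group of shape (X, Y, P)."""
    X, Y, P = group.shape
    # the cluster S: all (X*Y*P)/m^2 sub-blocks, flattened to m^2-vectors
    S = (group[:X - X % m, :Y - Y % m, :]
         .reshape(X // m, m, Y // m, m, P)
         .transpose(0, 2, 4, 1, 3)
         .reshape(-1, m * m))
    codebook = S.mean(axis=0, keepdims=True)           # CV(1,1): column-wise average
    while codebook.shape[0] < n:
        codebook = np.vstack([codebook * (1 + eps),    # split every code-vector
                              codebook * (1 - eps)])   # to double the codebook
        prev = np.inf
        while True:                                    # re-cluster and re-average
            d = ((S[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(axis=1)                  # nearest code-vector per block
            total = d[np.arange(len(S)), labels].sum() # total distortion
            for j in range(codebook.shape[0]):         # update to sub-cluster means
                if np.any(labels == j):
                    codebook[j] = S[labels == j].mean(axis=0)
            if prev - total < tol * prev:              # no significant improvement
                break
            prev = total
    return codebook
```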
Fusion using Vector Quantization – cont’
 Each image Ii is rearranged into a matrix of size ((X×Y)/m², m²).
 Each sub-block of the rearranged image is compared with all n code-vectors with respect to MSE.
 The MSE values of all P images at a given sub-block position, against each code-vector, are then added.
 The code-vector CVi that gives the minimum sum of MSE values is selected as the ith sub-block of the fused image IF (see the sketch below).
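Continuing the sketch, the selection rule can be written directly (vq_fuse and the helpers above are hypothetical names, not the paper's code):

```python
import numpy as np

def vq_fuse(group, codebook, m=4):
    """First-stage fusion: for every m x m sub-block position, pick the
    code-vector minimizing the MSE summed over all P bands of the group."""
    X, Y, P = group.shape
    Xc, Yc = X - X % m, Y - Y % m
    fused = np.zeros((Xc, Yc))
    for bx in range(Xc // m):
        for by in range(Yc // m):
            patch = group[bx * m:(bx + 1) * m, by * m:(by + 1) * m, :]
            vecs = patch.transpose(2, 0, 1).reshape(P, m * m)  # P block-vectors
            # MSE of each band's block vs. every code-vector, summed over bands
            mse = ((vecs[:, None, :] - codebook[None, :, :]) ** 2).mean(-1).sum(0)
            best = codebook[mse.argmin()]                      # winning code-vector
            fused[bx * m:(bx + 1) * m, by * m:(by + 1) * m] = best.reshape(m, m)
    return fused
```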
Hierarchical Fusion - Vector Quantization
 At the end of the first-stage fusion, there are B fused images (I1,1 to I1,B), which are the input images for the second level of the hierarchy.
Fusion using Bilateral Filtering
 Bilateral filtering is used only from the second hierarchical level onward, after the redundancy removal performed in the first stage through vector quantization.
A bilateral filter
 A bilateral filter is an edge-preserving, noise-reducing smoothing filter.
 The intensity value at each pixel in an image is replaced by a weighted average of intensity values from nearby pixels.
 The weight is based on Gaussian functions of both the spatial distance and the intensity difference between the pixels.
 This preserves sharp edges: neighbours that lie across an edge differ strongly in intensity, so they receive almost no weight (see the sketch below).
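A compact numpy sketch of such a filter (generic textbook form; the radius and the spatial/range standard deviations sigma_s and sigma_r are illustrative parameters):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Each pixel becomes a weighted average of its neighbours; the weight is
    a spatial Gaussian times a range (intensity-difference) Gaussian, so
    neighbours across a strong edge contribute little and edges survive."""
    img = img.astype(np.float64)
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # neighbour at offset (dy, dx); np.roll wraps at the borders,
            # which is acceptable for a sketch
            nb = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            w = (np.exp(-(dx ** 2 + dy ** 2) / (2 * sigma_s ** 2))    # spatial term
                 * np.exp(-((nb - img) ** 2) / (2 * sigma_r ** 2)))   # range term
            out += w * nb
            norm += w
    return out / norm
```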
Fusion using Bilateral Filtering
 Compute the bilateral-filtered image Īk of each input image Ik.
 Calculate the weight at each pixel (x, y) for each image from the detail it contains: Wk(x, y) = |Ik(x, y) − Īk(x, y)|.
Fusion using Bilateral Filtering
 The fused image IF of the hyperspectral cube subset is given by the normalized weighted sum
IF(x, y) = Σk Wk(x, y)·Ik(x, y) / Σk Wk(x, y)
(a sketch follows).
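A sketch of this fusion stage, assuming the per-pixel weights are the detail values |Ik − Īk| described above (function names are hypothetical; bilateral_filter is the sketch from the previous slide):

```python
import numpy as np

def fuse_bilateral(images, eps=1e-12, **bf_kwargs):
    """Fuse a list of equally sized images: weight each pixel of each image
    by its detail |I - bilateral(I)|, then take the normalized weighted sum."""
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.full(images[0].shape, eps)      # eps avoids division by zero
    for I in images:
        W = np.abs(I - bilateral_filter(I, **bf_kwargs))  # per-pixel weight
        num += W * I
        den += W
    return num / den
```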
The 1st and the 81st images of the urban image cube (Palo Alto) from the Hyperion dataset
Sharpening
 Combine the high spatial and the high spectral resolutions in order to obtain a complete and accurate description of the observed scene.
 The following method will be described:
 Unmixing-based constrained nonnegative
matrix factorization (UCNMF)
Unmixing-based constrained
nonnegative matrix factorization
Nonnegative matrix factorization
(NMF) for hyperspectral unmixing
 The hyperspectral data is a 3D array; flattening each band into a row vector gives the matrix V ∈ R^{L×K}, which stores the original hyperspectral data (L bands, K pixels).
 V = WH + N, where N is the residual (noise) term
 W ∈ R^{L×S} is the spectral signature matrix (S endmembers)
 H ∈ R^{S×K} is the abundance matrix
Nonnegative matrix factorization (NMF) for hyperspectral unmixing – cont’
 To unmix the hyperspectral data, NMF can be conducted by minimizing the squared Euclidean distance between V and WH:
min Euc(W, H) = ‖V − WH‖² = Σij (Vij − (WH)ij)²
s.t. W ≥ 0, H ≥ 0
A minimal sketch of this factorization is given below.
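This sketch uses the classic Lee-Seung multiplicative updates rather than the projected-gradient solver that appears later in the slides; the number of endmembers S and the iteration count are assumptions:

```python
import numpy as np

def nmf_unmix(V, S, n_iter=200, eps=1e-9):
    """Factor V (L x K) into W (L x S) spectral signatures and H (S x K)
    abundances, minimizing ||V - WH||^2 while keeping W, H nonnegative."""
    L, K = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((L, S))
    H = rng.random((S, K))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # multiplicative update keeps H >= 0
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # multiplicative update keeps W >= 0
    return W, H
```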
Unmixing-based constrained
nonnegative matrix factorization
(UCNMF) for image fusion
 After creating the abundance matrix, the
weighted fusion method is adopted.
min F(W, H) = ‖V − WH‖²
s.t. W ≥ 0, H ≥ 0
 Therefore, we have the fused data Vf (see the snippet below):
Vf = W(αH + (1 − α)P)
where P ∈ R^{S×K} carries the high-resolution panchromatic information and α ∈ [0, 1] weights the spectral content (H) against the spatial content (P).
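In code, the fusion step itself is a single matrix product (shapes as defined above; the function name and default α are illustrative):

```python
import numpy as np

def fuse_ucnmf(W, H, P, alpha=0.7):
    """Weighted fusion: mix the abundances H with the pan-derived matrix P
    (both S x K), then reproject through the signatures W (L x S)."""
    return W @ (alpha * H + (1 - alpha) * P)  # fused data Vf, shape (L, K)
```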
Preserve the spectral information
of the original hyperspectral image
 The fused image does not retain the same spectral quality as the original hyperspectral image, causing spectral distortion.
 Constraint function, penalizing the spectral difference between the fused data Vf and the original data V:
S(Vf) = Σi=1..L ‖Vfi − Vi‖²
 Which is equivalent to:
S(Vf) = tr((Vf − V)ᵀ(Vf − V))
Final Fusion model
min J(W, H) = F(W, H) + βS(Vf )
s.t. W ≥ 0, H ≥ 0
Vf = W(αH + (1 − α)P)
 This is a constrained optimization problem, solved here with a projected-gradient method (Lin-PG), outlined below.
Algorithm (Outline: Lin-PG for UCNMF)
1. Given 0 < δ < 1, 0 < σ < 1, 0 < ε < 1, set γ0 = 1. Initialize the matrices W ≥ 0, H ≥ 0 and calculate the gradient ∇J(W1, H1).
2. For k = 1, 2, . . .
(a) Assign γk ← γk−1.
(b) If γk satisfies condition (1), repeatedly increase it by γk ← γk/δ until either γk no longer satisfies (1) or W and H stay the same before and after the change of γk.
Otherwise, repeatedly decrease γk by γk ← γk·δ until γk satisfies (1).
(c) Update W by (2) and H by (3).
(d) Calculate the projected gradient ∇P J(Wk, Hk).
3. Repeat step 2 until the stopping condition given in (4) is satisfied.
4. Obtain the fused image Vf = W(αH + (1 − α)P). A simplified single-step sketch is given below.
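To make the structure concrete, a much-simplified single projected-gradient step. It optimizes only the F(W, H) = ‖V − WH‖² term with a fixed step size γ; the βS(Vf) penalty and Lin's adaptive step-size search (conditions (1)-(4)) are deliberately omitted:

```python
import numpy as np

def pg_step(V, W, H, P, alpha, gamma=1e-3):
    """One projected-gradient step for min ||V - WH||^2 s.t. W, H >= 0:
    move against the gradient, then project onto the nonnegative orthant."""
    R = W @ H - V                             # residual of the factorization
    grad_W = R @ H.T                          # gradient of 0.5*||V - WH||^2 in W
    grad_H = W.T @ R                          # gradient of 0.5*||V - WH||^2 in H
    W = np.maximum(W - gamma * grad_W, 0.0)   # projection enforces W >= 0
    H = np.maximum(H - gamma * grad_H, 0.0)   # projection enforces H >= 0
    Vf = W @ (alpha * H + (1 - alpha) * P)    # current fused estimate
    return W, H, Vf
```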
 The proposed UCNMF method has the advantage that it can improve the spatial resolution of the hyperspectral image without losing much of its spectral information.
Detection and classification
 Fusing data from hyperspectral imaging
(HSI) sensors with data from other sensors
can enhance overall detection and
classification performance.
 Fusing HSI data with foliage-penetration synthetic aperture radar (FOPEN SAR) data – feature-level fusion
 Fusing HSI data with high-resolution imaging (HRI) data – data- and feature-level fusion
HSI and FOPEN SAR Data Fusion
 FOPEN SAR and HSI sensors' detection capabilities complement each other.
 FOPEN SAR typically operates at 20 to 700 MHz. It penetrates foliage and detects targets under the tree canopy, but has significant clutter returns from trees.
 HSI is capable of subpixel detection and material identification, but cannot see through the canopy.
HSI and FOPEN SAR Data Fusion
 Both SAR and HSI systems may suffer
substantial false-alarm and missed
detection rates because of their respective
background clutter.
 Spectral dimensionality reduction is applied to the HSI data cube in order to extract the spectral features.
HSI and FOPEN SAR Data Fusion
 PCA is used to decorrelate the data and maximize the information content in a reduced number of features.
 A matched-filtering algorithm with thresholding was then applied to the HSI data to detect all pixels of fabric nets (a generic sketch is given below).
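For concreteness, a generic spectral matched filter; the article does not give its exact filter or threshold, so the background statistics, target signature, and threshold below are illustrative:

```python
import numpy as np

def matched_filter_scores(pixels, target):
    """Score each pixel spectrum against a target signature using background
    statistics: score = (x - mu)' C^-1 (s - mu) / ((s - mu)' C^-1 (s - mu)).
    pixels: (n_pixels, n_bands), target: (n_bands,)."""
    mu = pixels.mean(axis=0)
    C = np.cov(pixels, rowvar=False)
    Cinv = np.linalg.pinv(C)         # pseudo-inverse guards against a singular C
    s = target - mu
    w = Cinv @ s / (s @ Cinv @ s)    # normalized so the target itself scores 1
    return (pixels - mu) @ w

# e.g. flag fabric-net pixels (illustrative threshold):
# detections = matched_filter_scores(hsi_pixels, net_signature) > 0.5
```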
HSI fabric-net detection with a matched-filtering algorithm (left) and terrain classification map (right)
 The map shows background classes for roads, grass, trees, and shadow regions; these classes result from an unsupervised data-clustering operation that uses the first five principal components.
Combined FOPEN SAR-HSI Analysis
 The SAR data are processed with pixel grouping and detection thresholding.
 The combined analysis retained only SAR detections from open areas or from around fabric nets indicated in the HSI data.
 SAR detections that corresponded to HSI identifications of trees, areas far from open terrain, or nets were considered false alarms.
SAR detection confirmed using HSI material identification
 There are several strong SAR detections on the left side of the open area.
 Three pixels match well with military gray-tan paint, indicating the possible presence of a vehicle.
 This match confirms the SAR detection.
HSI and HRI Data Fusion
 Sharpening the HSI data
 Conducting a combined spatial-spectral analysis
 Background classification and anomaly detection are first obtained from the HSI data.
 Applying the results to the sharpened HSI
data provides enhanced background
classification and target detection.
HSI and HRI Data Fusion – cont’
 The HRI data provide target and background boundaries with spatial edge detection.
 These edges, combined with results from the sharpened HSI data, spatially enhance the definition of targets and backgrounds.
 Finally, spectral matched filtering for target detection is applied to the sharpened HSI data.
References
 Shah, P., Jayalakshmi, M., Merchant, S. N., and Desai, U. B., "Hierarchical fusion using vector quantization for visualization of hyperspectral images," Proceedings of the 14th International Conference on Information Fusion (FUSION 2011), pp. 1-8, 5-8 July 2011.
 Zhang, Z., et al., "Hyperspectral and panchromatic image fusion using unmixing-based constrained nonnegative matrix factorization," Optik - Int. J. Light Electron Opt. (2012), http://dx.doi.org/10.1016/j.ijleo.2012.04.022
 Hsu, S. M., and Burke, H. K., "Multisensor Fusion with Hyperspectral Imaging Data: Detection and Classification," Lincoln Laboratory Journal, vol. 14, no. 1, 2003.