### Single Image Super Resolution

```
SINGLE IMAGE SUPER RESOLUTION
Seminar presented by: Tomer Faktor
Advanced Topics in Computer Vision (048921)
12/01/2012
OUTLINE
• What is Image Super Resolution (SR)?
• Prior Art
• Today – Single Image SR:
  - Using patch recurrence [Glasner et al., ICCV09]
  - Using sparse representations [Yang et al., CVPR08], [Zeyde et al., LNCS10]
FROM HIGH TO LOW RES. AND BACK
• Forward model: z_l = SH y_h
  - H: blur (LPF), S: down-sampling
• SR is the corresponding inverse problem: recover y_h from z_l
WHAT IS IMAGE SR?
• Inverse problem – underdetermined
• Image SR – how to make it determined:

  Type          | How
  --------------|----------------------------------------------------------
  Multi-Image   | Set of low res. images
  Example-Based | External database of high-low res. pairs of image patches
  Single-Image  | Image model/prior
GOALS OF IMAGE SR
• Always:
  - More pixels – according to the scale factor
  - Low res. edges and details – maintained
• Bonus:
  - Blurred edges – sharper
  - New high res. details missing in the low res. image
MULTI-IMAGE SR
• "Classical" approach [Irani91, Capel04, Farsiu04]
• Fusion of a set of low res. images into one high res. image
EXAMPLE-BASED SR
[Freeman01, Kim08]
• Inputs: low res. image, blur kernel & scale factor
• Uses an external database of low and high res. image patches
• The example-based SR (image hallucination) algorithm outputs the high res. image
SINGLE IMAGE SR
• Inputs: low res. image, blur kernel & scale factor
• Uses an image model/prior
• The single-image SR (scale-up) algorithm outputs the high res. image
PRIOR ART
• No blur, only down-sampling → interpolation:
  - LSI schemes – NN, bilinear, bicubic, etc.
  - Spatially adaptive and non-linear filters [Zhang08, Mallat10]
• Blur + down-sampling: interpolation followed by deblurring
PRIOR ART
• LS problem: ŷ = argmin_y ‖SH y − z_l‖²
• Add a regularization:
  - Tikhonov regularization, robust statistics, TV, etc.
  - Sparsity of global transform coefficients
  - Parametric learned edge model [Fattal07, Sun08]
SUPER RESOLUTION FROM A SINGLE IMAGE
D. Glasner, S. Bagon and M. Irani
ICCV 2009
BASIC ASSUMPTION
Patches in a single image tend to redundantly recur many times, both within and across scales.
STATISTICS OF PATCH RECURRENCE
[Figure: patch recurrence statistics – the impact of SR is mostly expressed here]
SUGGESTED APPROACH
• Patch recurrence → a single image is enough for SR
• A unified framework adapts ideas from classical and example-based SR:
  - SR linear constraints – in-scale patch redundancy instead of multiple images
  - Correspondence between low-high res. patches – cross-scale redundancy instead of an external database
EXPLOITING IN-SCALE PATCH REDUNDANCY
• In the low res. image – find K nearest neighbors (NN) for each patch
• Compute their subpixel alignments
• Assign a different weight to each linear SR constraint, according to patch similarity
EXPLOITING CROSS-SCALE REDUNDANCY
• Build an image cascade around the input I_0 = L: known images I_{-1}, …, I_{-k} of decreasing res. (by factor α) and unknown images I_1, …, I_k = H of increasing res. (by factor α)
• For a patch in L, find its NN in a coarser scale I_{-j}; the NN's higher-res. "parent" patch is copied to the corresponding location in the finer scale I_j
ALGORITHMIC SCHEME
[Figure: overall algorithmic scheme]
IMPORTANT IMPLEMENTATION DETAILS
• Coarse-to-fine – gradual increase in resolution; improves numerical stability and results
• Back-projection [Irani91] – ensures consistency of the recovered high res. image with the low res. input
• Color images – convert RGB to YIQ:
  - For Y – the suggested approach
  - For I, Q – bicubic interpolation
RESULTS – PURELY REPETITIVE STRUCTURE
[Figure: input low res. image (×2) – bicubic interpolation vs. the suggested approach]
RESULTS – PURELY REPETITIVE STRUCTURE
[Figure: input low res. image – bicubic interp. vs. only in-scale redundancy vs. in- and cross-scale redundancies; zoom on a high res. detail]
RESULTS – TEXT IMAGE
[Figure: input low res. image (×3) – bicubic interp., suggested approach, ground truth]
• Small digits recur only in-scale and cannot be recovered
RESULTS – NATURAL IMAGE
[Figure: input low res. image (×4) – bicubic interp., edge model [Fattal07], example-based [Kim08], suggested approach]
PAPER EVALUATION
+ Reasonable assumption – validated empirically
+ Very novel – a new SR framework combining two widely used approaches in the SR field
+ Technically sound solution
+ Well written, very nice paper!
+ Many visual results (nice webpage)
PAPER EVALUATION
− Not fully self-contained – almost no details on:
  - Subpixel alignment
  - Weighted classical SR constraints
  - Back-projection
− No numerical evaluation (PSNR/SSIM) of the results
− No code available
− No details on running time
ON SINGLE IMAGE SCALE-UP USING SPARSE-REPRESENTATIONS
R. Zeyde, M. Elad and M. Protter
Curves & Surfaces, 2010
SPARSE-LAND PRIOR
• Widely used signal model with numerous applications
• Efficient algorithms for pursuit and dictionary learning
• Model: signal y ≈ A q, where A is an n×m dictionary (m > n) and q is a sparse vector with few non-zeros
BASIC ASSUMPTIONS
[Yang08] & [Zeyde10]
• Sparse-land prior for image patches
• Patches in a high-low res. pair have the same sparse representation over a high-low res. dictionary pair – the same procedure for all patches k:
  p_h^k = A_h q^k
  p_l^k = A_l q^k
• Joint training of the dictionary pair!
JOINT REPRESENTATION – JUSTIFICATION
• Each low res. patch is generated from the high res. one by the same LSI operator (matrix):
  p_l^k = L p_h^k, ∀k
• A pair of high-low res. dictionaries with the correct correspondence (A_l = L A_h) leads to a joint representation of the patches:
  p_h^k = A_h q^k + v_h^k  ⇒  p_l^k = (L A_h) q^k + L v_h^k = A_l q^k + v_l^k
RESEARCH QUESTIONS
[Yang08] & [Zeyde10]
• Pre-processing?
• How to train the dictionary pair?
• What is the training set?
• How to utilize the dictionary pair for SR?
• Post-processing?
PRE-PROCESSING – LOW RES.
1. Low res. image → bicubic interpolation to the high res. grid
2. Feature extraction by HPFs, e.g. derivative filters such as [1 0 −1] and [0.5 0 −0.5]
PRE-PROCESSING – LOW RES.
3. Dimensionality reduction via PCA – projection onto a reduced PCA basis yields the low res. patch features {p_l^k}_k of dimension n_l
PRE-PROCESSING – HIGH RES.
• y_l = bicubic interpolation of the low res. image z_l = SH y_h to the high res. grid
• High res. patches {p_h^k}_k are extracted from the difference y_h − y_l
TRAINING PHASE
Input: patch pairs {(p_h^k, p_l^k)}_k
1. Train the low res. dictionary A_l and obtain the sparse representations {q^k}_k
2. Train the high res. dictionary A_h
Output: the dictionary pair (A_l, A_h)
TRAINING THE LOW RES. DICTIONARY
• K-SVD dictionary learning on the low res. features:
  (A_l, {q^k}_k) = argmin_{A_l, {q^k}} Σ_k ‖p_l^k − A_l q^k‖²  s.t. ‖q^k‖₀ ≤ T₀ ∀k
TRAINING THE HIGH RES. DICTIONARY
• Collect the matrices P_h = [… p_h^k …] and Q = [… q^k …]
• A_h = argmin_{A_h} ‖P_h − A_h Q‖²_F = P_h Q†  (least squares, solved via the pseudo-inverse)
WHICH TRAINING SET?
• External set of true high res. images (off-line training):
  - Generate the low res. image by applying SH
  - Collect the pairs {(p_h^k, p_l^k)}_k
• The image itself (bootstrapping):
  - Input low res. image = "high res."
  - Proceed as before…
RECONSTRUCTION PHASE
1. Low res. image y_l → pre-processing → features {p_l^k}_k
2. OMP over A_l → sparse representations {q^k}_k
3. Image reconstruction using A_h → SR image
IMAGE RECONSTRUCTION
• p̂_h^k = A_h q^k, ∀k
• ŷ_h = y_l + (Σ_k R_k^T R_k)^{−1} Σ_k R_k^T p̂_h^k
  (R_k extracts the k-th patch, so overlapping patch estimates are averaged)
• No need to perform back-projection!
RESULTS – TEXT IMAGE
[Figure: input low res. image (×3) – bicubic interp. (PSNR = 14.68 dB), suggested approach (PSNR = 16.95 dB), ground truth]
• Dictionary pair learned off-line from another text image!
NUMERICAL RESULTS – NATURAL IMAGES
Scale factor 3; dictionary pair learned off-line from a set of training images!

Image      | Bicubic Interp. | Yang et al.   | Suggested Alg.
           | PSNR  / SSIM    | PSNR  / SSIM  | PSNR  / SSIM
-----------|-----------------|---------------|---------------
Barbara    | 26.24 / 0.75    | 26.39 / 0.76  | 26.77 / 0.78
Coastguard | 26.55 / 0.61    | 27.02 / 0.64  | 27.12 / 0.66
Face       | 32.82 / 0.80    | 33.11 / 0.80  | 33.52 / 0.82
Foreman    | 31.18 / 0.91    | 32.04 / 0.91  | 33.19 / 0.93
Lenna      | 31.68 / 0.86    | 32.64 / 0.86  | 33.00 / 0.88
Man        | 27.00 / 0.75    | 27.76 / 0.77  | 27.91 / 0.79
Monarch    | 29.43 / 0.92    | 30.71 / 0.93  | 31.12 / 0.94
Pepper     | 32.39 / 0.87    | 33.33 / 0.87  | 34.05 / 0.89
PPT3       | 23.71 / 0.87    | 24.98 / 0.89  | 25.22 / 0.91
Zebra      | 26.63 / 0.79    | 27.95 / 0.83  | 28.52 / 0.84
Average    | 28.76 / 0.81    | 29.59 / 0.83  | 30.04 / 0.85
VISUAL RESULTS – NATURAL IMAGES
[Figure: ground truth, bicubic interp. (artifact marked), Yang et al., suggested approach]
COMPARING THE TWO APPROACHES

Single Image SR              | Glasner et al.          | Zeyde et al.
-----------------------------|-------------------------|-----------------------------
Basic assumptions            | Patch recurrence        | Sparse-land prior, joint representation for high-low res.
Multi-patch constraints      | Subpixel alignment      | Overlaps
High-low res. correspondence | Across-scale redundancy | Learned dictionary pair
Image pre-processing         | Coarse-to-fine          | Bicubic interp., HPFs, PCA
Image post-processing        | Back-projection         | None needed
VISUAL COMPARISON
[Figure: input low res. image (×3) – bicubic interp., Glasner et al., suggested approach off-line and on-line; this example appears in Zeyde et al.]
[Figure: input low res. image (×3) – bicubic interp., Glasner et al., suggested approach off-line and on-line; this example doesn't appear in Zeyde et al.]
PAPER EVALUATION
+ Reasonable assumptions – justified analytically
+ Novel – a similar model to Yang et al., but a new algorithmic framework that improves runtime and image SR quality
+ Technically sound solution
+ Well written and self-contained
+ Code available
+ Performance evaluation – both visual and numerical
PAPER EVALUATION
− Experimental validation not complete:
  - Comparison to other approaches – main focus on bicubic interp. and Yang et al.
  - No comparison between on-line and off-line learning of the dictionary pair on the same image
  - Only "good" results are shown; running the code reveals weaknesses with respect to Glasner et al.
FUTURE DIRECTIONS
• Extending the first approach to video: space-time SR from a single video [Shahar et al., CVPR11]
• Merging patch redundancy and sparse representations in the spirit of non-local K-SVD [Mairal et al., ICCV09]
• Coarse-to-fine also for the second approach – to improve numerical stability and SR quality
```
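The back-projection step from Glasner et al.'s implementation details can be sketched in a toy form. Everything here is illustrative, not the authors' code: `downsample` and `upsample` are simple box-mean and nearest-neighbor stand-ins for the slides' blur-plus-decimation operator SH, and `back_project` just pushes the low res. residual back into the high res. estimate.

```python
import numpy as np

def downsample(img, s=2):
    """Toy forward operator SH: box-blur then decimate by factor s."""
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(img, s=2):
    """Toy upsampling operator: nearest-neighbor replication by factor s."""
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

def back_project(y_hat, z_l, s=2, n_iters=20, step=1.0):
    """Iteratively enforce consistency of the high res. estimate with
    the low res. input, in the spirit of [Irani91]."""
    for _ in range(n_iters):
        err = z_l - downsample(y_hat, s)          # residual in the low res. domain
        y_hat = y_hat + step * upsample(err, s)   # push the residual back to high res.
    return y_hat
```

With these particular toy operators the iteration converges immediately, since `downsample(upsample(e)) == e`; real blur kernels need several iterations.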
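The reconstruction phase of Zeyde et al. computes the sparse representations {q^k} with OMP. Below is a minimal textbook OMP written from the standard algorithm description (greedy atom selection plus least-squares refit on the support), not the authors' implementation.

```python
import numpy as np

def omp(A, p, n_nonzeros):
    """Orthogonal Matching Pursuit: approximate p ≈ A q with at most
    n_nonzeros non-zero entries in q."""
    m = A.shape[1]
    support, residual = [], p.copy()
    q = np.zeros(m)
    for _ in range(n_nonzeros):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        # re-fit the coefficients on the current support (orthogonalization step)
        coeffs, *_ = np.linalg.lstsq(A[:, support], p, rcond=None)
        residual = p - A[:, support] @ coeffs
    q[support] = coeffs
    return q
```

In practice one would batch this over all patch features {p_l^k} with the trained A_l.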
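The high res. dictionary training in Zeyde et al. is closed-form: A_h = argmin ‖P_h − A_h Q‖²_F = P_h Q†. A one-line sketch of that least-squares solve (the matrix names mirror the slides; this is not the released code):

```python
import numpy as np

def train_high_res_dict(P_h, Q):
    """Least-squares fit of the high res. dictionary:
    minimizes ||P_h - A_h Q||_F^2, solved via the pseudo-inverse."""
    return P_h @ np.linalg.pinv(Q)
```

When Q has full row rank, this recovers the dictionary that generated P_h exactly.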
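The reconstruction formula ŷ_h = y_l + (Σ_k R_k^T R_k)^{−1} Σ_k R_k^T p̂_h^k amounts to placing patch estimates back on the pixel grid and averaging where they overlap. A toy sketch of that operation (helper names are mine, not from the paper):

```python
import numpy as np

def extract_patches(img, p, stride):
    """All p×p patches at the given stride, with top-left coordinates
    (the R_k extraction operators)."""
    H, W = img.shape
    return [((i, j), img[i:i + p, j:j + p].copy())
            for i in range(0, H - p + 1, stride)
            for j in range(0, W - p + 1, stride)]

def average_overlaps(patches, shape, p):
    """Accumulate patches (sum_k R_k^T p_k) and divide by the per-pixel
    overlap count (sum_k R_k^T R_k is diagonal), i.e. average overlaps."""
    num = np.zeros(shape)
    cnt = np.zeros(shape)
    for (i, j), patch in patches:
        num[i:i + p, j:j + p] += patch
        cnt[i:i + p, j:j + p] += 1
    return num / cnt
```

Applied to the estimated patches p̂_h^k and then added to y_l, this reproduces the slide's reconstruction step.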