### Lecture 3

**CS 4487/9587: Algorithms for Image Analysis**
**Image Processing Basics**

Lena Gorelick, substituting for Yuri Boykov
The University of Western Ontario

Acknowledgements: slides from Steven Seitz, Aleosha Efros, David Forsyth, and Gonzalez & Woods
#### Image Processing

- Domain transformation (pixel location): resize, rotate, etc.
  - $g(x,y) = f(t_x(x,y),\, t_y(x,y))$
  - examples: vertical flip $g(x,y) = f(x,\, N-y)$; downscaling $g(x,y) = f(2x,\, 2y)$
- Range transformation (pixel intensity): point processing
  - gamma correction, window-center correction (e.g. window = 800, center = 1160), histogram equalization
  - $g(x,y) = t_{x,y}(f(x,y))$
  - example: $g(x,y) = f(x,y)/2$
- Neighborhood processing (filtering)
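Point processing maps each pixel's intensity independently of its location. A minimal NumPy sketch (the helper name `point_process`, the toy image, and the gamma value are illustration choices, not from the slides):

```python
import numpy as np

def point_process(f, t):
    """Apply an intensity mapping t to every pixel, ignoring pixel location."""
    return t(f)

# Toy 2x2 image with intensities in [0, 1] (made-up values)
f = np.array([[0.25, 1.0],
              [0.0,  0.64]])

halved = point_process(f, lambda v: v / 2)     # g(x,y) = f(x,y) / 2
gamma = point_process(f, lambda v: v ** 0.5)   # gamma correction with gamma = 0.5
```

Because the mapping sees only the intensity value, two pixels with equal intensity always map to equal outputs, no matter where they sit in the image.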
#### Neighborhood Processing (or Filtering)

What is wrong with this image? (Noisy photograph; courtesy of Carlo Tomasi and Neel Joshi.)

How can we remove the noise?

- Look spatially around each pixel, at its neighborhood.
- Can we use point processing?

Readings: Forsyth & Ponce, chapters 8.1-8.2
#### Neighborhood Processing (or Filtering)

Let's reshuffle all pixels within the image. The original has spatial information; the reshuffled version looks noisy, even though it contains exactly the same intensities.

Point processing would have the same effect on both images:

- it gets the intensity of a single pixel as input
- it is unaware of spatial information

Neighborhood processing, in contrast, takes spatial information into account, and images contain spatial information.
#### Neighborhood Processing (Filtering): Linear Image Transforms

Assume a 1D function $f[i]$, written as a column vector $f$. Any linear transform of the signal can be written as a matrix product:

$$g = M f, \qquad g[i] = \sum_j M[i,j]\, f[j]$$

(The slide's example $M$ is a 0/1 matrix with a single 1 per row, so each output sample $g[i]$ copies one sample of $f$: a reordering of the signal.)
#### Neighborhood Processing (Filtering): Linear Image Transforms

Another example: take $f = (2, 1, 0, \dots, 0)^\top$ and $M = \tfrac{1}{2}$ times a banded 0/1 matrix with two 1s per row. Then

$$g = M f, \qquad g[i] = \sum_j M[i,j]\, f[j]$$

averages pairs of neighboring samples of $f$: a simple smoothing transform.
#### Neighborhood Processing (Filtering): Linear Shift-Invariant Filters

A commonly used pattern for the matrix $M$: the same weights repeat in every row, shifted one column to the right per row (columns $i-1$, $i$, $i+1$ for output $i$). Such a transform is represented by a kernel (or mask) $h = [\,a \;\; b \;\; c\,]$.

For a kernel of size $2k+1$:

$$g = M f, \qquad g[i] = \sum_{u=-k}^{k} h[u]\, f[i+u]$$
#### Neighborhood Processing (Filtering): 2D Filtering

A 2D function $f[i,j]$ can be filtered by a 2D kernel $h[u,v]$ to get a new image:

$$g[i,j] = \sum_{u=-k}^{k} \sum_{v=-k}^{k} h[u,v]\, f[i+u,\, j+v]$$

This is called a cross-correlation operation, or a sliding dot product: $g = h \otimes f$.
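The double sum above can be sketched directly as a naive NumPy loop (the function name `cross_correlate`, the zero-padding choice at the borders, and the toy constant image are illustration choices, not from the slides):

```python
import numpy as np

def cross_correlate(f, h):
    """g[i,j] = sum over u,v of h[u,v] * f[i+u, j+v], for a (2k+1)x(2k+1)
    kernel h; pixels outside the image are treated as zero."""
    k = h.shape[0] // 2
    fp = np.pad(f, k)                     # zero padding around the borders
    g = np.zeros_like(f, dtype=float)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            window = fp[i:i + 2*k + 1, j:j + 2*k + 1]
            g[i, j] = np.sum(h * window)  # sliding dot product
    return g

# A constant image under a 3x3 mean kernel stays constant in the interior;
# at the corners the zero padding pulls the average down
f = 90.0 * np.ones((5, 5))
g = cross_correlate(f, np.ones((3, 3)) / 9.0)
```

The interior output is 90, while the corner sees only four real pixels, so it averages to 4 * 90 / 9 = 40.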
#### Neighborhood Processing (Filtering): 2D Filtering

Cross-correlation in which the filter is flipped horizontally and vertically is called convolution, $g = h * f$:

$$g[i,j] = \sum_{u=-k}^{k}\sum_{v=-k}^{k} h[-u,-v]\, f[i+u,\, j+v] = \sum_{u=-k}^{k}\sum_{v=-k}^{k} h[u,v]\, f[i-u,\, j-v]$$
#### Neighborhood Processing (Filtering): Convolution vs. Cross-Correlation

If the kernel is symmetric, $h(u,v) = h(-u,-v)$, then convolution = cross-correlation.
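The equivalence is easy to check numerically: flipping a symmetric kernel changes nothing, while flipping an antisymmetric derivative kernel negates it. A sketch (the helper names and the tiny test image are my own):

```python
import numpy as np

def correlate(f, h):
    """Cross-correlation with a (2k+1)x(2k+1) kernel h, zero padding."""
    k = h.shape[0] // 2
    fp = np.pad(f, k)
    g = np.zeros_like(f, dtype=float)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            g[i, j] = np.sum(h * fp[i:i + 2*k + 1, j:j + 2*k + 1])
    return g

def convolve(f, h):
    """Convolution = cross-correlation with the kernel flipped in both axes."""
    return correlate(f, np.flip(h))

f = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
h_sym = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0  # symmetric
h_dx = np.array([[0, 0, 0], [1, 0, -1], [0, 0, 0]]) / 2.0   # antisymmetric
```

For `h_sym` the two operations agree everywhere; for `h_dx` they differ only in sign.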
#### Neighborhood Processing: 2D Filtering and Noise

Types of noise:

- salt-and-pepper noise
- impulse noise
- Gaussian noise

Due to:

- transmission errors
- specks on the lens
- causes that can be specific to a sensor
#### Neighborhood Processing: Practical Noise Reduction

How can we remove noise? Replace each pixel with the average of a $k \times k$ window around it.

(Slide figure: a small example image with intensities around 90-130 on a zero background; the off-pattern value 104 marks the noisy pixel to be averaged away.)
#### Neighborhood Processing (Filtering): Mean Filtering

(Slide figure: input $f[x,y]$, a 5x5 block of 90s containing a single 0, plus one isolated 90 elsewhere, on a zero background; the output $g[x,y]$ is left blank.)
#### Neighborhood Processing (Filtering): Mean Filtering

(Slide figure: the same input $f[x,y]$ with its 3x3 mean-filtered output $g[x,y]$: the block's center averages to 80-90, its borders fall off through 60, 40, 30, 20, 10, and the isolated 90 is spread into a small patch of 10s-30s.)
#### Neighborhood Processing (Filtering): Effect of Mean Filters

Shown for Gaussian noise and salt-and-pepper noise with 3x3, 5x5, and 7x7 windows. Side effect: blur, increasing with window size.
Neighborhood Processing (filtering)
Mean kernel

Ontario
What’s the kernel for a 3x3 mean filter?
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0 90 90 90 90 90 0
0
0
0
0 90 90 90 90 90 0
0
0
0
0 90 90 90 90 90 0
0
0
0
0 90 0 90 90 90 0
0
0
0
0 90 90 90 90 90 0
0
0
0
0
0
0
0
0
0
0
0
0
0 90 0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
f [ x, y ]
h[ u , v ]
The University of
Neighborhood Processing (filtering)
Mean kernel

Ontario
What’s the kernel for a 3x3 mean filter?
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0 90 90 90 90 90 0
0
0
0
0 90 90 90 90 90 0
0
1/9 1/9 1/9
0
0
0 90 90 90 90 90 0
0
1/9 1/9 1/9
0
0
0 90 0 90 90 90 0
0
1/9 1/9 1/9
0
0
0 90 90 90 90 90 0
0
h[ u , v ]
0
0
0
0
0
0
0
0
0
0
0
0 90 0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
f [ x, y ]
The University of
Neighborhood Processing (filtering)
Mean kernel

Ontario
What’s the kernel for a 3x3 mean filter?
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0 90 90 90 90 90 0
0
0
0
0 90 90 90 90 90 0
0
0
0
0 90 90 90 90 90 0
0
0
0
0 90 0 90 90 90 0
0
1 1 1
0
0
0 90 90 90 90 90 0
0
h[ u , v ]
0
0
0
0
0
0
0
0
0
0
0
0 90 0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
f [ x, y ]
1 1 1
1/9
1 1 1
Equal weight to all pixels
within the neighborhood
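The mean filter can be sketched as a direct window average, reproducing the numbers in the slide's worked example (the function name `mean_filter` and the 10x10 layout of the toy image are my own; the 90s-block-with-a-hole pattern follows the slide):

```python
import numpy as np

def mean_filter(f, k=3):
    """Replace each pixel by the average of a k x k window (zero padding)."""
    r = k // 2
    fp = np.pad(f, r)
    g = np.zeros_like(f, dtype=float)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            g[i, j] = fp[i:i + k, j:j + k].mean()
    return g

# The slide's example: a block of 90s with a single 0 inside it
f = np.zeros((10, 10))
f[2:7, 2:7] = 90.0
f[4, 4] = 0.0
g = mean_filter(f, 3)
```

At the block's corner only 4 of the 9 window pixels are 90, giving 40; next to the hole 8 of 9 are 90, giving 80, as in the slide's output grid.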
#### Neighborhood Processing (Filtering): Gaussian Filtering

A Gaussian kernel gives less weight to pixels further from the center of the window:

$$h[u,v] = \frac{1}{16}\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix}$$

This is a discrete approximation of a Gaussian function.
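One useful property of this particular kernel, not spelled out on the slide: it factors into the outer product of the 1D binomial filter $[1\;2\;1]/4$ with itself, which is why Gaussian smoothing can be applied as two cheap 1D passes. A sketch:

```python
import numpy as np

# The 3x3 discrete Gaussian approximation from the slide
h = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]]) / 16.0

# Separability: h is the outer product of a 1D binomial filter with itself
h1 = np.array([1, 2, 1]) / 4.0
separable = np.outer(h1, h1)
```

The weights also sum to 1, so filtering a constant region leaves it unchanged.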
#### Neighborhood Processing (Filtering): Mean vs. Gaussian Filtering

(Slide figures: an input image filtered with a 20x20 mean filter and with a 20x20 Gaussian filter, σ = 5; the Gaussian result is smoother, without the mean filter's blocky artifacts.)
#### Neighborhood Processing (Filtering): Gaussian Filtering

A Gaussian is a low-pass filter:

- smooth color variation (low frequency) is preserved
- sharp edges (high frequency) are removed
Neighborhood Processing (filtering)
Median filters

Ontario
A Median Filter operates over a window by
selecting the median intensity in the window.
Image credit: Wikipedia – page
on Median Filter
#### Neighborhood Processing (Filtering): Median Filters

Is a median filter a kind of convolution? No: the median filter is an example of non-linear filtering. In general

$$\mathrm{Median}(f_1(x) + f_2(x)) \neq \mathrm{Median}(f_1(x)) + \mathrm{Median}(f_2(x))$$

(The slide's worked counterexample is garbled in this transcript; one valid instance: for $f_1 = (0,1,1)$ and $f_2 = (1,0,1)$, $\mathrm{Median}(f_1 + f_2) = \mathrm{Median}(1,1,2) = 1$, while $\mathrm{Median}(f_1) + \mathrm{Median}(f_2) = 1 + 1 = 2$.)
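The non-linearity is a one-liner to verify (the specific vectors are my reconstruction, chosen to break linearity, not the slide's original numbers):

```python
import numpy as np

f1 = np.array([0, 1, 1])
f2 = np.array([1, 0, 1])

lhs = np.median(f1 + f2)             # Median(1, 1, 2) = 1
rhs = np.median(f1) + np.median(f2)  # 1 + 1 = 2
# lhs != rhs, so the median cannot be written as a convolution
```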
(Slide figures: filtering results on salt-and-pepper noise and on Gaussian noise.)
#### Neighborhood Processing (Filtering): Median Filters

What advantage does a median filter have over a mean filter?

- better at removing salt-and-pepper noise
- but slow
#### Neighborhood Processing (Filtering): Derivatives and Convolution

- Derivatives: used for edge/corner detection; computed with finite-difference filters.
- Laplacian of Gaussian (LoG) filter: used for edge/blob detection and image enhancement; approximated using a difference of Gaussians.
#### First Derivative

Recall: sharp changes in the gray level of the input image correspond to peaks or valleys of the first derivative of the input signal.

http://www.cse.psu.edu/~rcollins/CSE486/lecture11_6pp.pdf
#### First Derivative and Convolution

$$\frac{\partial f}{\partial x} = \lim_{h \to 0} \frac{f(x+h,\, y) - f(x,\, y)}{h}$$

How can we approximate this for a discrete function, i.e. what should the kernel $\partial_x[u,v]$ contain? Is this operation shift-invariant? Is it linear?
#### First Derivative and Convolution: Finite Difference

$$\Delta_h[f] = \frac{f(x+h) - f(x)}{h} \;\longrightarrow\; f'(x) \quad \text{as } h \to 0$$

More generally, $\frac{f(x+a) - f(x-b)}{h}$ with $h = a + b$: the forward difference samples at $x$ and $x+h$, the backward difference at $x-h$ and $x$, and the central difference at $x-h/2$ and $x+h/2$. (Slide credit: Wikipedia)
#### First Derivative and Convolution

Finite difference: the order of the approximation error can be derived using Taylor's theorem. (Slide credit: Wikipedia)
#### First Derivative and Convolution

With pixel size $\Delta x = h$, use the finite central difference:

$$\frac{\partial f}{\partial x} \approx \frac{f(x + \Delta x,\, y) - f(x - \Delta x,\, y)}{2\,\Delta x}$$

We need a kernel $\partial_x[u,v]$ such that $\frac{1}{2\Delta x}\,\partial_x \otimes f \approx \frac{\partial f}{\partial x}$:

$$\partial_x[u,v] = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & -1 \\ 0 & 0 & 0 \end{bmatrix}$$
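The central difference can be sketched with array slicing instead of an explicit kernel loop (the function name `central_diff_x` and the linear-ramp test image are my own illustration choices):

```python
import numpy as np

def central_diff_x(f, dx=1.0):
    """df/dx via the central difference (f(x+dx) - f(x-dx)) / (2*dx),
    along axis 1 (columns = x); border columns are left at 0."""
    g = np.zeros_like(f, dtype=float)
    g[:, 1:-1] = (f[:, 2:] - f[:, :-2]) / (2 * dx)
    return g

# On a linear ramp f(x, y) = 3x the central difference is exact: df/dx = 3
x = np.arange(6, dtype=float)
f = np.tile(3 * x, (4, 1))
g = central_diff_x(f)
```

Exactness on linear ramps is what the Taylor-expansion error analysis predicts: the leading error term of the central difference is second order.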
#### Neighborhood Processing (Filtering): Finite Differences

Finite differences respond to edges. In the image of $\partial_x \otimes f$: dark = negative, white = positive, gray = 0.
#### Neighborhood Processing (Filtering): Finite Differences and Noise

(Slide figures: $\partial_x \otimes f$ under increasing zero-mean additive Gaussian noise.)
#### Neighborhood Processing (Filtering): Finite Differences and Noise

Finite-difference filters respond strongly to noise:

- noisy pixels look very different from their neighbours
- the larger the noise, the stronger the response

How can we eliminate the response to noise?

- most pixels in images look similar to their neighbours (even at an edge)
- smooth the image first (mean/Gaussian filtering)
#### Neighborhood Processing (Filtering): Smoothing and Differentiation

Smoothing before differentiation = two convolutions: $\partial_x \otimes (H \otimes f)$.

Convolution is associative, so the two can be combined into a single filter:

$$\partial_x \otimes (H \otimes f) = (\partial_x \otimes H) \otimes f$$

(Slide figures: the combined filters $\partial_x \otimes H$ and $\partial_y \otimes H$.)
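Associativity is easy to confirm numerically in 1D with `np.convolve` (the smoothing and derivative taps follow the binomial and central-difference kernels used in this lecture; the signal values are made up):

```python
import numpy as np

H = np.array([1, 2, 1]) / 4.0    # 1D binomial smoothing filter
dx = np.array([1, 0, -1]) / 2.0  # 1D central-difference filter
f = np.array([0., 0., 1., 4., 9., 16., 25., 25., 25.])  # made-up signal

# Smooth first, then differentiate...
a = np.convolve(dx, np.convolve(H, f))
# ...or precompute the single combined filter (dx conv H) and apply it once
b = np.convolve(np.convolve(dx, H), f)
```

The two results agree, which is why a derivative-of-Gaussian filter can be precomputed once and applied in a single pass.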
#### Neighborhood Processing (Filtering): Smoothing and Differentiation

(Slide figures: $(\partial_x \otimes H) \otimes f$ at smoothing scales of 1, 3, and 7 pixels.) The scale of the smoothing filter affects derivative estimates, and also the semantics of the edges recovered.
#### Neighborhood Processing (Filtering): Sobel Kernels

Yet another frequently used approximation:

$$\frac{\partial f}{\partial x} \approx \frac{1}{8\,\Delta x}\,\partial_x \otimes f, \qquad \partial_x[u,v] = \begin{bmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{bmatrix}$$

$$\frac{\partial f}{\partial y} \approx \frac{1}{8\,\Delta y}\,\partial_y \otimes f, \qquad \partial_y[u,v] = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}$$
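A sketch of the Sobel kernels applied to a vertical step edge (the `correlate` helper and the toy step image are my own; with this kernel orientation and cross-correlation, a dark-to-bright step gives a negative x-response):

```python
import numpy as np

sobel_x = np.array([[1, 0, -1],
                    [2, 0, -2],
                    [1, 0, -1]]) / 8.0
sobel_y = sobel_x.T  # the y kernel is the transpose of the x kernel

def correlate(f, h):
    """Cross-correlation with zero padding, as defined earlier."""
    k = h.shape[0] // 2
    fp = np.pad(f, k)
    g = np.zeros_like(f, dtype=float)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            g[i, j] = np.sum(h * fp[i:i + 2*k + 1, j:j + 2*k + 1])
    return g

# Vertical step edge: the gradient should have an x component only
f = np.zeros((5, 6))
f[:, 3:] = 1.0
gx = correlate(f, sobel_x)
gy = correlate(f, sobel_y)
mag = np.sqrt(gx**2 + gy**2)
```

Away from the image borders, `gy` vanishes and the gradient magnitude peaks along the step, illustrating that the gradient is orthogonal to the edge.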
#### Neighborhood Processing (Filtering): Image Gradient

Recall: for a function of two variables $f(x,y)$, the gradient at a point $(x,y)$ is

$$\nabla f = \begin{pmatrix} \partial f / \partial x \\[2pt] \partial f / \partial y \end{pmatrix}, \qquad \|\nabla f\| = \sqrt{\left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2} \approx \sqrt{(\partial_x \otimes f)^2 + (\partial_y \otimes f)^2}$$

- the gradient points in the direction of steepest ascent
- it is orthogonal to object boundaries in the image

(Slide figure: gradient magnitude of a small image; textured areas also respond.)
#### Neighborhood Processing (Filtering): Edge Detection

A typical application of image gradients is image edge detection: find points with large image gradients. The Canny edge detector additionally suppresses non-maximum gradient responses along the gradient direction, thinning the detected edges.
Second derivatives and convolution

Ontario
Peaks or valleys of the first-derivative of the input
signal, correspond to “zero-crossings” of the
second-derivative of the input signal.
http://www.cse.psu.edu/~rcollins/CSE486/lecture11_6pp.pdf
The University of
Neighborhood Processing (filtering)
Second derivatives and convolution
Ontario
http://www.cse.psu.edu/~rcollins/CSE486/lecture11_6pp.pdf
#### Neighborhood Processing (Filtering): Second Derivatives and Convolution

Second-derivative zero-crossings give better-localized edges, but they are more sensitive to noise.
#### Neighborhood Processing (Filtering): Second Image Derivatives

Laplace operator:

$$\nabla^2 f = \nabla \cdot (\nabla f) = \left[\frac{\partial}{\partial x} \;\; \frac{\partial}{\partial y}\right] \begin{pmatrix} \partial f/\partial x \\[2pt] \partial f/\partial y \end{pmatrix} = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$$

the "divergence of the gradient": a rotationally invariant second derivative for 2D functions.

Finite-difference kernels (in the slide's sign convention), for the second-order derivative in x, in y, and their sum:

$$\begin{bmatrix} 0 & 0 & 0 \\ -1 & 2 & -1 \\ 0 & 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & -1 & 0 \\ 0 & 2 & 0 \\ 0 & -1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}$$
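The kernel sum above, and the claim that a second derivative ignores constant and linear variation, can be sketched directly (the helper `lap_at` and the ramp image are my own illustration choices):

```python
import numpy as np

# Second-order finite-difference kernels in x and y (slide's sign convention)
lap_x = np.array([[0, 0, 0], [-1, 2, -1], [0, 0, 0]])
lap_y = np.array([[0, -1, 0], [0, 2, 0], [0, -1, 0]])
lap = lap_x + lap_y  # the 3x3 Laplacian kernel

def lap_at(f, i, j):
    """Laplacian response at interior pixel (i, j)."""
    return np.sum(lap * f[i-1:i+2, j-1:j+2])

# Zero response on a uniform region and on a linear ramp f(i,j) = i + j:
# the Laplacian reacts only to curvature in the intensity surface
f_flat = np.full((3, 3), 7.0)
f_ramp = np.add.outer(np.arange(5.0), np.arange(5.0))
```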
#### Neighborhood Processing (Filtering): Second Image Derivatives

Laplacian zero-crossings, $\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} = 0$, are used for edge detection.

(Slide figure: image $f$, its intensity profile, the gradient magnitude $\|\nabla f\|$, and the Laplacian, i.e. second derivative, of the image.)

http://homepages.inf.ed.ac.uk/rbf/HIPR2/zeros.htm
Neighborhood Processing (filtering)
Laplacian Filtering
Ontario
-Zero on uniform regions
-Positive on one side of an edge
-Negative on the other side
-Zero at some point in between
on the edge itself
 band-pass filter (Suppresses both high and low frequencies)
http://www.cse.psu.edu/~rcollins/CSE486/lecture11_6pp.pdf
#### Neighborhood Processing (Filtering): Laplacian of a Gaussian (LoG)

Smooth before differentiating (remember the associative property of convolution): $\nabla^2 G = \mathrm{LoG}$, where

$$\mathrm{LoG}(x,y) = -\frac{1}{\pi\sigma^4}\left[1 - \frac{x^2 + y^2}{2\sigma^2}\right] e^{-\frac{x^2 + y^2}{2\sigma^2}}$$
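A discrete LoG kernel can be built by sampling the formula above on a grid (the function name `log_kernel` and the choices σ = 1, k = 3 are my own; larger σ needs a larger k):

```python
import numpy as np

def log_kernel(sigma, k):
    """Sample LoG(x, y) on a (2k+1) x (2k+1) grid centered at the origin."""
    y, x = np.mgrid[-k:k+1, -k:k+1]
    r2 = x**2 + y**2
    return (-1.0 / (np.pi * sigma**4)
            * (1 - r2 / (2 * sigma**2))
            * np.exp(-r2 / (2 * sigma**2)))

h = log_kernel(sigma=1.0, k=3)
```

The kernel has the expected "Mexican hat" shape: a negative center (in this sign convention), a positive surround ring beyond $r^2 = 2\sigma^2$, and full rotational symmetry.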
#### Neighborhood Processing (Filtering): Laplacian of a Gaussian (LoG)

(Slide figures: LoG responses on example signals and images.) The LoG suppresses both high and low frequencies.
#### Neighborhood Processing (Filtering): Laplacian of a Gaussian (LoG)

The LoG can be approximated by a difference of two Gaussians (DoG).
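A sketch of the DoG construction (the helper `gaussian2d`, the grid size, and the sigma ratio of 1.6, a conventional choice, are my own; sign and scale conventions for matching the LoG vary):

```python
import numpy as np

def gaussian2d(sigma, k):
    """Unit-sum Gaussian kernel sampled on a (2k+1) x (2k+1) grid."""
    y, x = np.mgrid[-k:k+1, -k:k+1]
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return g / g.sum()

# Difference of a narrow and a wide Gaussian (sigma ratio ~1.6)
sigma = 1.0
dog = gaussian2d(sigma, 6) - gaussian2d(1.6 * sigma, 6)
```

Like the LoG, the result has a center of one sign and a surround of the other, and its weights sum to (nearly) zero, so it kills uniform regions: the band-pass behavior, seen as a narrow low-pass minus a wide low-pass.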
#### Neighborhood Processing (Filtering): DoG vs. LoG

- A Gaussian is separable (product decomposition), so the DoG is more efficient to compute.
- The DoG also explains the band-pass behavior: it is the difference of two low-pass (Gaussian) filters.
#### Neighborhood Processing (Filtering): LoG for Blob Detection

Cross-correlation with a filter can be viewed as comparing a little "picture" of what you want to find against all local regions in the image.

The scale of the blob (its size, or radius in pixels) is determined by the sigma parameter of the LoG filter.
#### Filters and Templates

Recall: filtering = cross-correlation = sliding dot product. At each location $(x,y)$ of the input image, the filter $h$ is compared with the image window $f_w(x,y)$:

$$\theta = \arccos \frac{h \cdot f_w}{\|h\|\,\|f_w\|}, \qquad g(x,y) = \cos(\theta)$$

- measures the angle between two vectors: the filter and the image window
- measures visual similarity
#### Neighborhood Processing (Filtering): Template Matching

Since we divide by the norms, this is normalized cross-correlation:

$$\theta = \arccos \frac{h \cdot f_w}{\|h\|\,\|f_w\|}$$

Bring the template $h(x,y)$ and the image window $f(x,y)$ to a uniform scale: subtract the mean and divide by the standard deviation.

http://scien.stanford.edu/pages/labsite/2008/psych221/projects/08/MariaJabon/index.htm
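The score for a single window can be sketched as follows (the function name `ncc` and the toy template are my own; a full template matcher would slide this over every window, which is exactly why the method is slow):

```python
import numpy as np

def ncc(window, template):
    """Normalized cross-correlation score of one image window against a
    template: subtract the means, then take the cosine of the angle
    between the two mean-free vectors."""
    w = window - window.mean()
    t = template - template.mean()
    denom = np.linalg.norm(w) * np.linalg.norm(t)
    return float(np.sum(w * t) / denom)

template = np.array([[0., 1.],
                     [2., 3.]])
# A window that equals the template up to brightness/contrast scores 1
same = ncc(10 + 5 * template, template)
# An inverted window scores -1
opp = ncc(-template, template)
```

By the Cauchy-Schwarz inequality the score always satisfies $|g| \le 1$, and mean removal makes it invariant to additive brightness changes.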
#### Neighborhood Processing (Filtering): Template Matching

Normalized cross-correlation scores satisfy $|g| \le 1$, since $g = \cos(\theta)$. The main drawback: it is extremely time-consuming due to the sliding window. (Slide credits: Matlab manual; OpenCV manual.)