### Toward 0-Norm Reconstruction, and a Nullspace Technique for Compressive Sampling

Toward 0-norm Reconstruction, and a Nullspace Technique for Compressive Sampling
Christine Law and Gary Glover
Dept. of EE and Dept. of Radiology, Stanford University
#### Outline

- 0-norm Magnetic Resonance Imaging (MRI) reconstruction:
  - homotopic method
  - convex iteration
- Signal separation example
- 1-norm deconvolution in fMRI
- Improvement in the cardinality constraint problem with a nullspace technique
#### Shannon vs. Sparse Sampling

- Nyquist/Shannon: sampling rate ≥ 2 × maximum frequency.
- Sparse Sampling Theorem (Candès/Donoho, 2004): suppose $x \in \mathbb{R}^n$ is $k$-sparse and we are given $m$ Fourier coefficients with frequencies selected uniformly at random. If

  $$m \geq k \log_2(1 + n/k),$$

  then

  $$x^\star = \arg\min_x \|x\|_1 \ \text{s.t.}\ \Phi x = y$$

  reconstructs $x$ exactly with overwhelming probability.
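For intuition, the bound is easy to evaluate for illustrative sizes (the numbers below are mine, not the slides'); a minimal sketch assuming numpy:

```python
import numpy as np

# Evaluate m >= k log2(1 + n/k) for an illustrative k-sparse signal of length n.
n, k = 256, 8
m = int(np.ceil(k * np.log2(1 + n / k)))
print(m)   # 41 random Fourier samples suffice, versus n = 256 Nyquist samples
```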

#### Example: Poisoned Wine (Anna Gilbert)

A rat dies one week after drinking poisoned wine. With $n = 7$ bottles ($x_1, \dots, x_7$), $m = 3$ rats ($y_1, y_2, y_3$), and measurement matrix

$$\Phi = \begin{bmatrix} 1 & 0 & 0 & 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 & 1 \end{bmatrix},$$

a single poisoned bottle ($k = 1$) can be identified from the three test outcomes; the slides also illustrate the $k = 2$ case.

#### Reconstruction by Optimization

Compressed Sensing theory (Donoho, Candès, 2004): under certain conditions (here, $y$ are the measurements/rats and $x$ are the unknowns/wine),

$$x^\star = \arg\min_x \|x\|_1 \ \text{s.t.}\ \Phi x = y$$

solves

$$x^\star = \arg\min_x \|x\|_0 \ \text{s.t.}\ \Phi x = y.$$

- Candès et al., IEEE Trans. Information Theory, 2006, 52(2):489.
- Donoho, IEEE Trans. Information Theory, 2006, 52(4):1289.
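The 1-norm problem becomes a linear program once $x$ is split into nonnegative parts. A sketch assuming numpy and scipy's generic LP solver (not the authors' code), with illustrative sizes:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 40, 20, 3
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n))
y = Phi @ x_true

# min 1'(p + q)  s.t.  Phi (p - q) = y,  p, q >= 0;  then x = p - q, |x| = p + q.
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([Phi, -Phi]), b_eq=y,
              bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print(np.max(np.abs(x_hat - x_true)))   # small when the 1-norm recovers x exactly
```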
#### 0-norm Reconstruction

Try to solve the 0-norm problem directly:

$$u^\star = \arg\min_u \|u\|_0 \ \text{s.t.}\ \Phi u = y.$$

For the $p$-norm with $0 < p < 1$, Chartrand (2006) demonstrated that fewer samples $y$ are required than with the 1-norm formulation (Chartrand, IEEE Signal Processing Letters, 2007, 14(10):707-710).

Trzasko et al. replace the 0-norm by a homotopic approximation:

$$\min_u \sum_i \phi(|u_i|, \sigma) \ \text{s.t.}\ \Phi u = y,$$

where $\phi$ is tanh, Laplace, log, etc., chosen such that

$$\lim_{\sigma \to 0} \sum_i \phi(|u_i|, \sigma) = \|u\|_0.$$

Trzasko et al., IEEE SP 14th Workshop on Statistical Signal Processing, 2007, 176-180.
#### Homotopic Function $\phi$ in 1D

Laplace function: $\phi(x, \sigma) = 1 - e^{-|x|/\sigma}$, plotted for $\sigma = 1$ and $\sigma = 1/1000$.

Start as a 1-norm problem, then reduce $\sigma$ slowly to approach the 0-norm function.
#### Homotopic Method

$$\min_u E = \sum_i \phi\big(|\Psi u|_i, \sigma\big) + \frac{\mu}{2}\,\|\Phi u - y\|_2^2$$

Find the minimum of $E$ by zeroing the Lagrangian gradient:

$$\nabla_u E = \Psi^T \Lambda(u, \sigma)\, \Psi u + \mu\, \Phi^H (\Phi u - y) = 0,$$

where $\Lambda$ is diagonal with entries

$$\Lambda_{ii} = \frac{\phi'(|\Psi u|_i, \sigma)}{|\Psi u|_i}, \qquad \phi' = \frac{d\phi}{d\,|\Psi u|}.$$

This yields the fixed-point iteration

$$u_{t+1} = \big(\Psi^T \Lambda(u_t, \sigma)\, \Psi + \mu\, \Phi^H \Phi\big)^{-1} \mu\, \Phi^H y.$$

Iterate with the Conjugate Gradient method until $u_{t+1} \approx u_t$; the map is a contraction.
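A toy 1-D version of this iteration can be sketched as follows, assuming numpy, $\Psi = I$, the Laplace potential, and invented sizes; the slides' MRI implementation uses CG and a TV operator rather than a dense solve:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 60, 25, 4
u_true = np.zeros(n)
u_true[rng.choice(n, k, replace=False)] = 3 * rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ u_true

# Laplace potential phi(t) = 1 - exp(-t/sigma), so phi'(t)/t = exp(-t/sigma)/(sigma t).
mu, eps = 1e3, 1e-8
u = Phi.T @ y                              # initial guess; large sigma ~ 1-norm regime
for sigma in np.geomspace(1.0, 1e-3, 30):  # slowly shrink sigma toward the 0-norm
    a = np.abs(u)
    lam = np.exp(-a / sigma) / (sigma * (a + eps))
    u = np.linalg.solve(np.diag(lam) + mu * Phi.T @ Phi, mu * Phi.T @ y)
print(np.round(u[np.abs(u) > 1e-3], 3))    # the surviving (sparse) coefficients
```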
#### Demonstration

- When $\sigma$ is big (first iteration), we are solving the 1-norm problem; reduce $\sigma$ to approach the 0-norm solution.
- $u$: image to solve; $y$: undersampled k-space measurements; $\Psi$: Total Variation (TV) operator.
- For the phantom, $\|x\|_0 \approx 61\%$ of the pixels but $\|\Psi x\|_0 \approx 2\%$: the image is sparse under TV.
#### Example 1

Error metric:

$$20 \log_{10} \frac{\|\,\text{original phantom} - \text{reconstruction}\,\|_F}{\|\,\text{original phantom}\,\|_F}$$

Using 4% of the Fourier data (zero-filled reconstruction shown for reference):

| Method | Error | Time |
| --- | --- | --- |
| Homotopic | -66.2 dB | 85 s |
| 1-norm | -11.4 dB | 542 s |
#### Example 2: Angiography

360×360 image, 27.5% of the samples:

| Method | Error | Time |
| --- | --- | --- |
| Homotopic | -26.5 dB | 101 s |
| 1-norm | -24.7 dB | 1151 s |
#### Convex Iteration

Target problem:

$$u^\star = \arg\min_u \|u\|_0 \ \text{s.t.}\ \Phi u = y$$

Surrogate:

$$\min_u E = \langle |\Psi u|, \omega \rangle + \frac{\mu}{2}\, \|\Phi u - y\|_2^2, \qquad \omega_i \in \{0, 1\}.$$

- Chrétien, "An alternating l1 approach to the compressed sensing problem," arXiv.org.
- Dattorro, *Convex Optimization & Euclidean Distance Geometry*, Meboo.
#### Convex Iteration (continued)

$$\min_u E = \langle |\Psi u|, \omega \rangle + \frac{\mu}{2}\, \|\Phi u - y\|_2^2$$

Find the minimum of $E$ by zeroing the Lagrangian gradient:

$$\nabla_u E = \Psi^T \Omega\, \mathrm{sgn}(\Psi u) + \mu\, \Phi^H (\Phi u - y) = 0.$$

With $\Lambda := \Omega\, \mathrm{diag}\!\big(\mathrm{sgn}(\Psi u) / \Psi u\big)$ this becomes

$$\big(\Psi^T \Lambda \Psi + \mu\, \Phi^H \Phi\big)\, u = \mu\, \Phi^H y,$$

giving the iteration

$$u_{t+1} = \big(\Psi^T \Lambda(u_t)\, \Psi + \mu\, \Phi^H \Phi\big)^{-1} \mu\, \Phi^H y.$$

Iterate with the Conjugate Gradient method until $u_{t+1} \approx u_t$; the map is a contraction.
#### Convex Iteration Demo

Using 4% of the Fourier data (zero-filled reconstruction shown for reference): error -104 dB, 96 seconds.
#### Signal Separation by Convex Iteration

Model the signal as a sum of two parts,

$$u = u_\Delta + u_D = \Psi^{-1} x_\Delta + D^{-1} x_D,$$

where

- $\Psi$ is the Total Variation operator,
- $D$ is the Discrete Cosine Transform matrix,
- $y$ is the measurement: $y = P(u_\Delta + u_D)$ with $P := \mathrm{real}(F^T M F)$,
- $F$ is the Gram-Schmidt orthogonalization of the Fourier matrix, and $\mathrm{rank}\, P = \mathrm{rank}\, M$.

We want to find $u_\Delta$ and $u_D$. The 1-norm formulation is

$$\begin{aligned} \underset{x_\Delta,\, x_D}{\text{minimize}} \quad & \|x_\Delta\|_1 + \|x_D\|_1 \\ \text{s.t.} \quad & y = P(\Psi^{-1} x_\Delta + D^{-1} x_D), \end{aligned}$$

and as convex iteration,

$$\begin{aligned} \underset{x_\Delta,\, x_D}{\text{minimize}} \quad & \langle |x_\Delta|, \omega_\Delta \rangle + \langle |x_D|, \omega_D \rangle \\ \text{s.t.} \quad & y = P(\Psi^{-1} x_\Delta + D^{-1} x_D). \end{aligned}$$
#### Signal Construction

[Figure: a length-300 test signal $u = u_\Delta + u_D$, built from a piecewise-constant part $u_\Delta = \Psi^{-1} x_\Delta$ with Cardinality(steps) = 7 and a cosine part $u_D = D^{-1} x_D$ with Cardinality(cosine) = 4.]
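A test signal with these cardinalities is easy to construct; a sketch assuming numpy/scipy, with edge locations and amplitudes invented for illustration:

```python
import numpy as np
from scipy.fft import dct, idct

n = 300
rng = np.random.default_rng(3)

u_steps = np.zeros(n)                           # piecewise-constant part
for edge in (30, 60, 110, 150, 190, 230, 270):
    u_steps[edge:] += rng.uniform(-20, 20)      # 7 jumps -> cardinality 7 under TV

x_dct = np.zeros(n)
x_dct[[2, 5, 9, 14]] = rng.uniform(-40, 40, 4)  # 4 cosines -> DCT-cardinality 4
u_cos = idct(x_dct, norm='ortho')

u = u_steps + u_cos                             # the mixture handed to the separator
print(np.count_nonzero(np.diff(u_steps)),
      np.count_nonzero(np.round(dct(u_cos, norm='ortho'), 8)))   # -> 7 4
```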
#### Minimum Sampling Rate

[Figure: phase-transition plot of $m/k$ versus $k/n$, where $m$ = measurements, $k$ = cardinality, $n$ = record length. Donoho, Tanner, 2005; Baron, Wakin et al., 2005.]
#### Signal Separation by Convex Iteration: Results

With Cardinality(steps) = 7 and Cardinality(cosine) = 4, sampling $MFu$ with $m = 28$ measurements ($m/n = 0.1$ subsampling, $m/k = 2.5$ sample rate, $k/n = 0.04$ sparsity):

| Method | Reconstruction error |
| --- | --- |
| Convex iteration | -241.9 dB |
| 1-norm | -23.2 dB |
| TV alone | -14.5 dB |
| DCT alone | -6.6 dB |

Convex iteration recovers both the piecewise and the cosine parts almost exactly; TV alone or DCT alone cannot separate the mixture.
#### Functional Magnetic Resonance Imaging (fMRI)

How is fMRI conducted? A stimulus timing sequence is presented to the subject while the scanner records (Huettel, Song, Gregory, *Functional Magnetic Resonance Imaging*, Sinauer).

What does fMRI measure? Neural activity (synaptic and metabolic signalling) drives a vascular response: changes in blood flow, oxygenation, and volume, together with vascular tone (reactivity) and autoregulation, alter the balance of oxygenated and deoxygenated haemoglobin in arterioles and venules. That balance, measured in the $B_0$ field, produces the BOLD signal and is the origin of the fMRI signal.

#### Haemodynamic Response Function (HRF)

Convolving the stimulus timing with the canonical HRF gives the predicted data:

stimulus timing $*$ HRF $=$ predicted data.

Which part of the brain is activated? Compare the prediction against the actual measured signal intensity over time (http://www.fmrib.ox.ac.uk/).
#### Variability of HRF

The actual HRF differs from the canonical one, so it must be calibrated: given the stimulus timing and the measurement, deconvolve the actual HRF $h$.
#### Deconvolve HRF $h$

$$\begin{aligned} \underset{h}{\text{minimize}} \quad & \|Wh\|_1 + \lambda\, \|y - Dh\|_2^2 \\ \text{subject to} \quad & \nabla^2 h \ \text{small (smoothness)} \\ & h(1) = h(n) = 0 \\ & E^T h \succeq 0 \end{aligned}$$

- $W$: Coiflet (discrete wavelet transform);
- $E$: monotone cone;
- $D$: convolution matrix built from the stimulus timing, so that $Dh$ is the predicted response and $y$ is the measurement; the first column $D(:,1)$ is the stimulus timing itself.

[Figure: canonical vs. deconvolved HRF, the stimulus timing, the columns of $D$, and the fit $Dh$ against the measurement $y$.]
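The convolution matrix $D$ and a deconvolution in the same spirit can be sketched as follows, assuming numpy; for brevity this uses smoothness-regularized least squares in place of the slides' 1-norm wavelet program, and an invented HRF shape:

```python
import numpy as np

T, L = 200, 30                       # scan length, HRF length (samples)
rng = np.random.default_rng(6)
s = np.zeros(T)
s[::40] = 1                          # evenly spaced impulses (stimulus timing)

# Build D so that D h = s * h: one shifted copy of the identity per impulse.
D = np.zeros((T, L))
for i in np.flatnonzero(s):
    j = min(L, T - i)
    D[i:i + j, :j] += np.eye(j)

h_true = np.sin(np.linspace(0, np.pi, L)) * np.exp(-np.linspace(0, 3, L))
y = D @ h_true + 0.05 * rng.standard_normal(T)   # noisy measurement

G = np.diff(np.eye(L), 2, axis=0)    # second-difference (smoothness) operator
lam = 1.0
h = np.linalg.solve(D.T @ D + lam * G.T @ G, D.T @ y)
err = 20 * np.log10(np.linalg.norm(h_true - h) / np.linalg.norm(h_true))
print(err)                           # relative error in dB; more negative is better
```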
#### Deconvolution Results

Error metric: $20 \log_{10} \big( \|h_{\text{canonical}} - h\| \,/\, \|h_{\text{canonical}}\| \big)$.

High-noise simulation (SNR = -10 dB):

| Stimulus design | Error |
| --- | --- |
| Impulse (evenly spaced) | -6.7 dB |
| Impulse (jittered) | -4.4 dB |
| Block | -3.5 dB |

Low-noise simulation (SNR = 6 dB):

| Stimulus design | Error |
| --- | --- |
| Impulse (evenly spaced) | -10.6 dB |
| Impulse (jittered) | -6.2 dB |
| Block | -11.8 dB |

[Figure: deconvolved HRFs against the canonical HRF for each design.]
#### In Vivo Deconvolution Results

[Figure: HRF calibration and HRF deconvolution from in vivo ROI time courses (ROI value vs. frame / time in seconds).]
#### Cardinality Constraint Problem

Let $A \in \mathbb{R}^{3 \times 5}$ be a given matrix of random entries (values shown on the slide), and construct

$$b = A(:,2)\,\mathrm{rand}(1) + A(:,5)\,\mathrm{rand}(1),$$

so $b$ lies in the range of columns 2 and 5. Find $x$ with desired cardinality, e.g. $k = 2$: we want $x = (0,\, x_2,\, 0,\, 0,\, x_5)^T$.

Standard solvers fail to produce it: `x = A\b` returns a basic solution of the form $(0,\, x_2,\, 0,\, x_4,\, x_5)^T$, and

$$x = A^\dagger b = \arg\min_x \|x\|_2 \ \text{s.t.}\ Ax = b$$

returns a dense $(x_1,\, x_2,\, x_3,\, x_4,\, x_5)^T$. What we want is

$$x^\star = \arg\min_x \|x\|_0 \ \text{s.t.}\ Ax = b, \qquad x^\star = (0,\, x_2,\, 0,\, 0,\, x_5)^T.$$
#### From the Range Perspective

With $m = 3$ and $n = 5$, check every pair of columns for $k = 2$: there are $\binom{5}{2} = 10$ possible supports. In general there are

$$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$$

candidate supports, so exhaustive search does not scale.
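The range perspective is a brute-force support search; a sketch assuming numpy, with matrix entries drawn at random rather than taken from the slide:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
A = rng.random((3, 5))
b = A[:, 1] * rng.random() + A[:, 4] * rng.random()  # b = A(:,2)*rand(1) + A(:,5)*rand(1)

# Check every pair of columns (C(5,2) = 10 candidates) for an exact fit.
for support in combinations(range(5), 2):
    coef, *_ = np.linalg.lstsq(A[:, list(support)], b, rcond=None)
    if np.allclose(A[:, list(support)] @ coef, b):
        break
print(support)   # -> (1, 4): the 0-based indices of columns 2 and 5
```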
#### Nullspace Perspective

Let $Z \in \mathbb{R}^{5 \times 2}$ be a basis for the nullspace of $A$ (rows $Z_1^T, \dots, Z_5^T$), so $AZ = 0$.

- Particular solution: $x_p = A \backslash b$.
- General solution: $x = Z\xi + x_p$, since $Ax = A(Z\xi + x_p) = A x_p = b$.

Zeroing coordinate $i$ means solving $x_i = Z_i^T \xi + x_{p,i} = 0$, a line in the $\xi$-plane. For example, with

$$Z = \begin{bmatrix} 0.57 & 0.34 \\ -0.03 & -0.26 \\ -0.16 & 0.02 \\ -0.68 & -0.63 \\ 0.22 & 0.47 \end{bmatrix}, \qquad x_p = \begin{bmatrix} 0.34 \\ 1.11 \\ -0.30 \\ 0 \\ 0 \end{bmatrix},$$

the five lines $x_i = Z_i^T \xi + x_{p,i} = 0$, $i = 1, \dots, 5$, are

$$\begin{aligned} 0.57\,\xi_1 + 0.34\,\xi_2 + 0.34 &= 0 \\ -0.03\,\xi_1 - 0.26\,\xi_2 + 1.11 &= 0 \\ -0.16\,\xi_1 + 0.02\,\xi_2 - 0.30 &= 0 \\ -0.68\,\xi_1 - 0.63\,\xi_2 &= 0 \\ 0.22\,\xi_1 + 0.47\,\xi_2 &= 0 \end{aligned}$$

[Figure: the five lines plotted in the $\xi$-plane.]
#### How to Find the Intersection of Lines?

The slide sketches an objective built from the rows of $[\,Z \ \ x_p\,]$, a sum of normalized "wedges" (one per line), minimized at the $\xi$ where the desired $n - k$ lines meet. We want $x = (0,\, x_2,\, 0,\, 0,\, x_5)^T$, but at the first intersection found, $\xi = (0, 0)$, the solution is

$$x = Z\xi + x_p = (0.34,\ 1.11,\ -0.30,\ 0,\ 0)^T,$$

which has cardinality 3, not the desired 2.
#### Randomizing the Nullspace Basis

Starting from $x = x_p = (0.34,\ 1.11,\ -0.30,\ 0,\ 0)^T$ at $\xi = (0, 0)$, re-mix the nullspace basis:

`Znew = Z * randn(2);`

With the new basis, three of the lines intersect at $\xi \approx (-1.16,\ 4.51)$, giving the desired $k = 2$ solution

$$x = (0,\ 0.68,\ 0,\ 0,\ 0.51)^T.$$
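For this small example the whole nullspace technique fits in a few lines; a sketch assuming numpy/scipy, searching line intersections over triples of coordinates instead of the slide's wedge objective:

```python
import numpy as np
from itertools import combinations
from scipy.linalg import null_space

rng = np.random.default_rng(5)
A = rng.random((3, 5))
b = A[:, 1] * rng.random() + A[:, 4] * rng.random()

Z = null_space(A)                            # 5 x 2 nullspace basis, A Z = 0
xp, *_ = np.linalg.lstsq(A, b, rcond=None)   # a particular solution
# Each coordinate x_i = Z_i' xi + xp_i is a line in the xi-plane; a k = 2
# solution sits where n - k = 3 of the five lines meet at a single point.
for trio in combinations(range(5), 3):
    rows = list(trio)
    xi, *_ = np.linalg.lstsq(Z[rows], -xp[rows], rcond=None)
    if np.allclose(Z[rows] @ xi, -xp[rows], atol=1e-9):
        x = Z @ xi + xp
        break
print(np.round(x, 6))   # zero on `trio`, nonzero on the two remaining coordinates
```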
#### Summary

- There are ways to find 0-norm solutions other than the 1-norm (homotopic method, convex iteration): they need fewer measurements and run faster.
- In the cardinality constraint problem, convex iteration and the nullspace technique succeed more often than the 1-norm.