compressive-nonsensing-may2014

Report
compressive nonsensing
Richard Baraniuk
Rice University
Chapter 1
The Problem
challenge 1
data too expensive
Case in Point: MR Imaging
• Measurements very expensive
• $1-3 million per machine
• 30 minutes per scan
Case in Point: IR Imaging
challenge 2
too much data
Case in Point: DARPA ARGUS-IS
• 1.8 Gpixel image sensor
– video rate output: 444 Gbits/s
– comm data rate: 274 Mbits/s
– a factor of 1600x, way out of reach of existing compression technology
• Reconnaissance
without conscience
– too much data to transmit to a ground station
– too much data to make effective real-time decisions
Chapter 2
The Promise
innovation 1
sparse signal models
Sparsity
[Figure: a wideband signal’s time-domain samples and its large Gabor (time-frequency) coefficients; an image’s pixels and its large wavelet coefficients (blue = 0)]
Sparsity
[Figure: image pixels and their large wavelet coefficients (blue = 0); a sparse signal, a nonlinear signal model with only K nonzero entries]
innovation 2
dimensionality reduction for sparse signals
Dimensionality Reduction
• When data is sparse/compressible, we can directly acquire a compressed representation with no/little information loss through linear dimensionality reduction
[Figure: measurements y = Φx, where x is a sparse signal with K nonzero entries and y has only M ≪ N entries]
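The bullet above can be sketched numerically; the sizes N, M, K below are illustrative assumptions, not values from the talk:

```python
import numpy as np

# Illustrative sizes (assumptions): N-dim signal, K nonzeros, M < N measurements
rng = np.random.default_rng(0)
N, M, K = 1000, 100, 10

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)  # K-sparse signal

Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # random measurement matrix
y = Phi @ x    # the compressed representation: M numbers instead of N

print(y.shape)
```

The M-vector y is all a compressive sensor would need to store or transmit.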
Stable Embedding
• An information-preserving projection Φ preserves the geometry of the set of sparse signals, which is a union of K-dim subspaces
• The stable embedding (SE) property ensures that distances between sparse signals are approximately preserved: (1 − δ)||x1 − x2||² ≤ ||Φx1 − Φx2||² ≤ (1 + δ)||x1 − x2||²
Random Embedding is Stable
• Measurements y = Φx are random linear combinations of the entries of the K-sparse signal x
• No information loss for sparse vectors, with high probability (whp)
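A quick empirical check of this stability claim, under assumed sizes: differences of sparse signals keep nearly the same norm after a random projection.

```python
import numpy as np

# Assumed sizes; the point is that the norm ratios concentrate near 1.
rng = np.random.default_rng(1)
N, M, K = 512, 128, 5

def sparse_vec():
    v = np.zeros(N)
    v[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    return v

Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# ratio ||Phi(x1 - x2)|| / ||x1 - x2|| over many pairs of sparse signals
ratios = []
for _ in range(200):
    d = sparse_vec() - sparse_vec()
    ratios.append(np.linalg.norm(Phi @ d) / np.linalg.norm(d))

print(min(ratios), max(ratios))   # both close to 1
```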
innovation 3
sparsity-based signal recovery
Signal Recovery
• Goal: Recover the signal x from the measurements y = Φx
• Problem: The random projection Φ is not full rank (an ill-posed inverse problem)
• Solution: Exploit the sparse/compressible geometry of the acquired signal
• Recovery via a (convex) sparsity penalty or greedy algorithms [Donoho; Candes, Romberg, Tao, 2004]
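One concrete greedy sketch is orthogonal matching pursuit (OMP); this minimal implementation and its problem sizes are illustrative assumptions, not the specific algorithms cited above:

```python
import numpy as np

# Illustrative sizes (assumptions): OMP recovers a K-sparse x from M < N
# random measurements by greedily selecting one support element per pass.
rng = np.random.default_rng(2)
N, M, K = 256, 120, 6

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

def omp(Phi, y, K):
    residual, support = y.copy(), []
    for _ in range(K):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, K)
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))  # tiny: exact recovery whp
```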
“Single-Pixel” CS Camera
[Figure: the scene is imaged through a random pattern on a DMD (digital micromirror device) array onto a single photon detector, followed by image reconstruction or processing]
w/ Kevin Kelly
• Flip mirror array M times to acquire M measurements
• Sparsity-based recovery
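The acquisition loop above can be mimicked numerically; the toy scene, the 0/1 mirror-pattern model, and all sizes below are invented for illustration:

```python
import numpy as np

# Toy scene and sizes invented for illustration; each mirror flip yields one
# photodetector reading (the inner product of the scene with one pattern).
rng = np.random.default_rng(3)
n = 16
scene = np.zeros((n, n))
scene[4:8, 4:8] = 1.0              # a bright square on a dark background

M = 150                            # M mirror flips -> M measurements
patterns = rng.integers(0, 2, size=(M, n * n)).astype(float)  # 0/1 mirror states
y = patterns @ scene.ravel()       # one single-pixel reading per pattern

print(y.shape)
```

Sparsity-based recovery would then reconstruct the n×n scene from these M readings.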
Random Demodulator
• Problem: In contrast to Moore’s Law, ADC performance doubles only every 6-8 years
• CS enables sampling near a signal’s (low) “information rate” rather than its (high) Nyquist rate
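A minimal sketch of the random-demodulator idea, with invented parameters: multiply by a fast pseudorandom ±1 chipping sequence, then integrate-and-dump at a sub-Nyquist rate.

```python
import numpy as np

# Invented parameters: one window of N Nyquist-rate samples, downsampled 16x.
rng = np.random.default_rng(4)
N = 1024
t = np.arange(N) / N
x = np.cos(2 * np.pi * 97 * t)      # K=1 frequency-sparse signal

chips = rng.choice([-1.0, 1.0], N)  # fast pseudorandom demodulation sequence
R = 16                              # integrate-and-dump downsampling factor
y = (x * chips).reshape(-1, R).sum(axis=1)   # sub-Nyquist samples

print(y.shape)                      # 16x fewer samples than Nyquist
```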
A2I
sampling
rate
number of
tones /
window
Nyquist
bandwidth
Example: Frequency Hopper
• Sparse in time-frequency
[Figure: spectrogram from Nyquist-rate sampling vs. “sparsogram” from 20x sub-Nyquist sampling]
challenge 1
data too expensive
means fewer expensive measurements needed for the same resolution scan
challenge 2
too much data
means we compress on the fly as we acquire data
2004—2014
9797 citations
6640 citations
dsp.rice.edu/cs archive: >1500 papers
nuit-blanche.blogspot.com: >1 posting/sec
Chapter 3
The Hype
CS is Growing Up
Gerhard Richter, 4096 Colours
muralsoflajolla.com/roy-mcmakin-mural
“L1 is the new L2”
- Stan Osher
Exponential Growth?
Chapter 4
The Fallout
“L1 is the new L2”
- Stan Osher
CS for “Face Recognition”
From: M. V.
Subject: Interesting application for compressed sensing
Date: June 10, 2011 at 11:37:31 PM EDT
To: [email protected], [email protected]
Drs. Candes and Romberg,
You may have already been approached about this, but I feel I should say
something in case you haven't. I'm writing to you because I recently read an
article in Wired Magazine about compressed sensing.
I'm excited about the applications CS could have in many fields, but today I was
reminded of a specific application where CS could conceivably settle an area
of dispute between mainstream historians and Roswell UFO theorists. As
outlined in the linked video below, Dr. Rudiak has analyzed photos from 1947 in
which a General Ramey appears holding a typewritten letter from which Rudiak
believes he has been able to discern a number of words which he believes
substantiate the extraterrestrial hypothesis for the Roswell Incident. For
your perusal, I've located a "hi-res" copy of the cropped image of the letter in
Ramey's hand.
I hope to hear back from you. Is this an application where compressed
sensing could be useful? Any chance you would consider trying it?
Thank you for your time,
M. V.
Chapter 5
Back to Reality
Back to Reality
• “There's no such thing as a free lunch”
• “Something for Nothing” theorems
• Dimensionality reduction is no exception
• Result: Compressive Nonsensing
Nonsense 1
Robustness
Measurement Noise
• Stable recovery
with additive
measurement noise
• Noise is added to
• Stability:
noise only mildly amplified
in recovered signal
Signal Noise
• Often seek recovery with additive signal noise
• Noise is added to the signal: y = Φ(x + n)
• Noise folding: signal noise is amplified in the recovered signal by 3dB for every doubling of the subsampling ratio N/M
• Same effect seen in classical “bandpass subsampling” [Davenport, Laska, Treichler, B 2011]
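The folding effect is easy to reproduce empirically; in this sketch (sizes assumed), least-squares on a known support stands in for a full CS decoder:

```python
import numpy as np

# Assumed sizes; least-squares on the known support stands in for recovery.
rng = np.random.default_rng(5)
N, M, K, sigma = 1024, 128, 4, 0.01

supp = rng.choice(N, K, replace=False)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

amps = []
for _ in range(200):
    noise = sigma * rng.standard_normal(N)   # white noise on the *signal*
    y = Phi @ noise                          # all N noise coords fold into M
    est, *_ = np.linalg.lstsq(Phi[:, supp], y, rcond=None)
    amps.append(np.mean(est ** 2))

ratio = np.mean(amps) / sigma ** 2
print(ratio)   # roughly 1 + (N - K)/M, i.e. about N/M = 8 here (~9 dB)
```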
Noise Folding in CS
[Figure: CS recovered signal SNR falls with slope = -3 dB per doubling of N/M]
“Tail Folding”
• Can model compressible (approximately sparse) signals as “signal” + “tail”
• The tail “folds” into the signal as the subsampling ratio N/M increases [Davies, Guo, 2011; Davenport, Laska, Treichler, B 2011]
[Figure: sorted coefficient magnitudes partitioned into “signal” and “tail”]
All Is Not Lost – Dynamic Range
• In wideband ADC applications
• As the amount of subsampling grows, one can employ an ADC with a lower sampling rate and hence a higher-resolution quantizer
Dynamic Range
• CS can significantly boost the ENOB (effective number of bits) of an ADC system for sparse signals
[Figure: stated number of bits vs. log sampling frequency, comparing a CS ADC with sparsity against a conventional ADC]
Dynamic Range
• As the amount of subsampling grows, one can employ an ADC with a lower sampling rate and hence a higher-resolution quantizer
• Thus the dynamic range of a CS ADC can significantly exceed that of a Nyquist ADC
• With current ADC trends, the dynamic range gain is theoretically 7.9dB for each doubling of N/M
Dynamic Range
[Figure: measured dynamic range grows with slope = +5 dB per doubling of N/M (close to the theoretical 7.9)]
Tradeoff
• SNR: 3dB loss for each doubling of N/M
• Dynamic Range: up to 7.9dB gain for each doubling of N/M
Adaptivity
• Say we know the locations of the non-zero entries in x
• Then we can boost the SNR by sensing with only the corresponding columns of Φ
• Motivates adaptive sensing strategies that bypass the noise-folding tradeoff [Haupt, Castro, Nowak, B 2009; Candes, Davenport 2011]
Nonsense 2
Quantization
CS and Quantization
• The vast majority of work in CS assumes the measurements are real-valued
• In practice, measurements must be quantized (a nonlinear operation)
• Should measure CS performance in terms of the number of measurement bits rather than the number of (real-valued) measurements
• Limited progress to date:
– a large number of bits per measurement
– 1 bit per measurement
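A minimal uniform quantizer makes the bit accounting concrete; the clipping range and bit depths below are assumptions:

```python
import numpy as np

# Hypothetical mid-rise uniform quantizer; range and bit depths are assumptions.
def quantize(y, bits, y_range):
    levels = 2 ** bits
    step = 2.0 * y_range / levels
    q = np.clip(np.round(y / step), -levels // 2, levels // 2 - 1)
    return q * step

rng = np.random.default_rng(6)
y = rng.standard_normal(64)    # stand-in CS measurements

errs = []
for B in (2, 4, 8, 12):
    err = np.linalg.norm(y - quantize(y, B, 4.0)) / np.linalg.norm(y)
    errs.append(err)
    print(B, "bits ->", round(err, 5))
```

At a fixed total bit budget, each extra bit per measurement costs measurements, which is exactly the tradeoff the figure explores.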
CS and Quantization
[Figure: recovery performance vs. total bit budget for N=2000, K=20, M = (total bits)/(bits per measurement), at 1, 2, 4, 6, 8, 10, and 12 bits per measurement]
Nonsense 3
Weak Models
Weak Models
• Sparsity models in CS emphasize discrete bases and frames
– DFT, wavelets, …
• But in real data acquisition problems, the world is continuous, not discrete
The Grid Problem
• Consider a “frequency sparse” signal
– suggests the DFT sparsity basis
• Easy CS problem: K=1 tone at an on-grid frequency
• Hard CS problem: K=1 tone at an off-grid frequency
– slow decay due to sinc interpolation of off-grid sinusoids (asymptotically, the signal is not even in L1)
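The grid problem in miniature (length N assumed): an on-grid tone is exactly 1-sparse in the DFT, while a tone halfway between bins leaks energy into every bin:

```python
import numpy as np

# Assumed length N; frequencies chosen on and halfway between DFT bins.
N = 256
t = np.arange(N)
on_grid  = np.exp(2j * np.pi * 10.0 * t / N)   # exactly on DFT bin 10
off_grid = np.exp(2j * np.pi * 10.5 * t / N)   # halfway between bins

def peak_energy_fraction(x):
    X = np.abs(np.fft.fft(x)) ** 2
    return X.max() / X.sum()

p_on = peak_energy_fraction(on_grid)     # 1.0: truly 1-sparse
p_off = peak_energy_fraction(off_grid)   # well below 1: sinc leakage
print(p_on, p_off)
```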
Going Off the Grid
• Spectral CS
[Duarte, B, 2010]
– discrete formulation
• CS Off the Grid
[Tang, Bhaskar, Shah, Recht, 2012]
– continuous formulation
[Figure: Spectral CS recovery in the best, average, and worst cases, spanning about 20dB]
Nonsense 4
focus on recovery
Misguided Focus on Recovery
• Recall the data deluge problem in sensing
– ex: large-scale imaging, HSI, video, ultrawideband ADC, …
– the data’s ambient dimension N is too large
• When N ~ billions, signal recovery becomes problematic, if not impossible
• Solution: Perform signal exploitation directly on the compressive measurements
Compressive Signal Processing
• Many applications involve signal inference, not reconstruction:
detection < classification < estimation < reconstruction
• Good news: CS supports efficient learning, inference, and processing directly on compressive measurements
• Random projections ~ sufficient statistics for signals with concise geometrical structure
Classification
• Simple object classification problem
– AWGN: nearest neighbor classifier
• Common issue:
– L unknown articulation parameters
• Common solution: matched filter
– find nearest neighbor under all articulations
CS-based Classification
• Target images form a low-dimensional manifold as the target articulates
– random projections preserve the information in these manifolds if M is sufficiently large
• CS-based classifier: smashed filter
– find the nearest neighbor under all articulations under random projection [Davenport, B, et al 2006]
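A toy smashed-filter sketch, with invented random templates standing in for the articulated target images; classification happens entirely in the measurement domain:

```python
import numpy as np

# Invented templates play the role of target images at different articulations.
rng = np.random.default_rng(7)
N, M = 400, 40

templates = rng.standard_normal((5, N))   # 5 candidate classes
true_class = 3
x = templates[true_class] + 0.05 * rng.standard_normal(N)  # noisy observation

Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x                                # compressive measurements only

# nearest neighbor among the *projected* templates -- never reconstruct x
dists = np.linalg.norm(Phi @ templates.T - y[:, None], axis=0)
pred = int(np.argmin(dists))
print(pred)                                # matches true_class whp
```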
Smashed Filter
• Random shift and rotation (L=3 dim. manifold)
• White Gaussian noise added to measurements
• Goals: identify the most likely shift/rotation parameters and the most likely class
[Figure: average shift estimate error and classification rate (%) vs. number of measurements M, at several noise levels]
Frequency Tracking
• Compressive Phase-Locked Loop (PLL)
– key idea: the phase detector in a PLL computes an inner product between the signal and the oscillator output
– the RIP ensures we can compute this inner product between the corresponding low-rate CS measurements
[Figure: CS-PLL tracking a signal’s frequency with 20x undersampling]
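The key observation in miniature (signals and sizes assumed): inner products are approximately preserved when both signals pass through the same random projection:

```python
import numpy as np

# Assumed tone and sizes; both signals are projected by the same random Phi.
rng = np.random.default_rng(8)
N, M = 2000, 200
t = np.arange(N) / N
s = np.cos(2 * np.pi * 50 * t)          # incoming signal
o = np.cos(2 * np.pi * 50 * t + 0.3)    # local oscillator output

Phi = rng.standard_normal((M, N)) / np.sqrt(M)
full = s @ o                  # Nyquist-rate phase-detector inner product
comp = (Phi @ s) @ (Phi @ o)  # same quantity from low-rate measurements
print(full, comp)             # close in relative terms
```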
Nonsense 5
Weak Guarantees
Performance Guarantees
• CS performance guarantees
– RIP, incoherence, phase transition
• To date, rigorous results only for random matrices
– practically not useful
– often pessimistic
• Need rigorous guarantees for non-random, structured sampling matrices with fast algorithms
– analogous to the progress in coding theory from Shannon’s original random codes to modern codes
Chapter 6
All Is Not Lost!
Sparsity
Convex
optimization
Dimensionality
reduction
12-Step Program
To End Compressive Nonsensing
1. Don’t give in to the hype surrounding CS
2. Resist the urge to blindly apply L1 minimization
3. Face up to robustness issues
4. Deal with measurement quantization
5. Develop more realistic signal models
6. Develop practical sensing matrices beyond random
7. Develop more efficient recovery algorithms
8. Develop rigorous performance guarantees for practical CS
systems
9. Exploit signals directly in the compressive domain
10. Don’t give in to the hype surrounding CS
11. Resist the urge to blindly apply L1 minimization
12. Don’t give in to the hype surrounding CS
