### CS151 Lecture 10

Complexity Theory
Lecture 10
May 2, 2013

Worst-case vs. Average-case
• Theorem: If E contains functions that require size 2^Ω(n) circuits, then E contains 2^Ω(n)-unapproximable functions.
• Proof:
  – main tool: error-correcting code
Error-correcting codes
• Error-Correcting Code (ECC): C: Σ^k → Σ^n
• message m ∈ Σ^k
• codeword C(m); received word R = C(m) with some positions corrupted
• if not too many errors, can decode: D(R) = m
• parameters of interest:
  – rate: k/n
  – distance: d = min_{m ≠ m'} Δ(C(m), C(m'))
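The rate/distance trade-off can be made concrete with a toy code. A minimal sketch (illustrative, not from the lecture), using a 3-fold repetition code and computing d over all distinct message pairs by brute force:

```python
# Illustrative sketch: rate k/n and distance d = min_{m != m'} Delta(C(m), C(m'))
# of a toy 3-fold repetition code, computed by brute force.
from itertools import product

def hamming(a, b):
    """Delta(a, b): number of positions where a and b differ."""
    return sum(x != y for x, y in zip(a, b))

def repetition_encode(m, reps=3):
    """Toy ECC C: {0,1}^k -> {0,1}^n with n = reps * k."""
    return tuple(s for s in m for _ in range(reps))

k, reps = 2, 3
n = k * reps
messages = list(product([0, 1], repeat=k))
codewords = {m: repetition_encode(m, reps) for m in messages}

rate = k / n                                             # 1/3
distance = min(hamming(codewords[a], codewords[b])
               for a in messages for b in messages if a != b)
print(rate, distance)                                    # 0.333..., 3
```

With distance 3 the decoder D can correct any single corrupted position, matching the "if not too many errors, can decode" condition.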
Example: Reed-Muller
• Parameters: t (dimension), h (degree)
• alphabet Σ = F_q: field with q elements
• message m ∈ Σ^k
• encode m as a multivariate polynomial of total degree at most h:
  p_m(x) = Σ_{i=0…k-1} m_i·M_i
  where {M_i} are all monomials of degree ≤ h
Example: Reed-Muller
• M_i is a monomial of total degree ≤ h
  – e.g. x_1^2·x_2·x_4^3
  – need # monomials (h+t choose t) ≥ k
• codeword C(m) = (p_m(x))_{x ∈ (F_q)^t}
• rate = k/q^t
• Claim: distance d = (1 - h/q)·q^t
  – proof: Schwartz-Zippel: a nonzero polynomial of total degree h is zero on at most an h/q fraction of points
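The distance claim can be checked exhaustively at tiny size. A hedged sketch (toy parameters q = 5, t = 2, h = 1; not from the lecture): two codewords differ exactly where the *difference* polynomial is nonzero, so the distance equals the minimum number of nonzeros of a nonzero degree-≤h polynomial.

```python
# Illustrative sketch: verify the Reed-Muller distance claim
# d = (1 - h/q) * q^t by brute force for q = 5, t = 2, h = 1.
from itertools import product

q, t, h = 5, 2, 1
points = list(product(range(q), repeat=t))          # all of (F_q)^t

def evaluate(coeffs, pt):
    """Evaluate a*x1 + b*x2 + c at pt over F_q (total degree <= 1)."""
    a, b, c = coeffs
    return (a * pt[0] + b * pt[1] + c) % q

# minimum number of nonzeros over all nonzero degree-<=1 polynomials
min_nonzeros = min(
    sum(evaluate(cs, p) != 0 for p in points)
    for cs in product(range(q), repeat=3) if cs != (0, 0, 0)
)
print(min_nonzeros, (1 - h / q) * q**t)             # both 20, matching the claim
```

The minimum is attained by polynomials whose zero set is a full line (q points), exactly the h/q = 1/5 fraction allowed by Schwartz-Zippel.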
Codes and hardness
• Reed-Solomon (RS) and Reed-Muller (RM) codes are efficiently encodable
• efficient unique decoding?
  – yes (classic result)
• efficient list-decoding?
  – yes (RS list-decoding on the problem set)
Codes and Hardness
• Use for worst-case-to-average-case conversion:
  – m: truth table of f: {0,1}^(log k) → {0,1} (worst-case hard)
    m: 0 1 1 0 0 0 1 0
  – Enc(m): truth table of f': {0,1}^(log n) → {0,1} (average-case hard)
    Enc(m): 0 1 1 0 0 0 1 0 0 0 0 1 0
Codes and Hardness
• if n = poly(k), then f ∈ E implies f' ∈ E
• Want to be able to prove:
  if f' is s'-approximable, then f is computable by a size s = poly(s') circuit
Codes and Hardness
f: {0,1}^(log k) → {0,1}
f': {0,1}^(log n) → {0,1}
m:      0 1 1 0 0 0 1 0
Enc(m): 0 1 1 0 0 0 1 0 0 0 0 1 0
R:      0 0 1 0 1 0 1 0 0 0 1 0 0
[figure: a decoding procedure D, given an index i ∈ {0,1}^(log k) and access to a small circuit C approximating f', yields a small circuit that computes f exactly, outputting f(i)]
Encoding
• use a (variant of) Reed-Muller code
  – q (field size), t (dimension), h (degree)
• encoding procedure:
  – message m ∈ {0,1}^k
  – subset S ⊆ F_q of size h; so, need h^t ≥ k
  – efficient 1-1 function Emb: [k] → S^t
  – find coefficients of a degree-h polynomial p_m: F_q^t → F_q for which p_m(Emb(i)) = m_i for all i (linear algebra)
Encoding
• encoding procedure (continued):
  – final codeword: (Had(p_m(x)))_{x ∈ F_q^t}
    • evaluate p_m at all points, and encode each evaluation with the Hadamard code Had
    • Had = Reed-Muller with field size 2, dimension log q, degree 1
    • distance ½, by Schwartz-Zippel
Encoding
m: 0 1 1 0 0 0 1 0
[figure: Emb: [k] → S^t places the message positions on S^t ⊆ F_q^t; p_m is the degree-h polynomial with p_m(Emb(i)) = m_i; evaluating p_m at all x ∈ F_q^t gives symbols 5 2 7 1 2 9 0 3 6 8 3 …; each symbol is then encoded with Had, e.g. … 0 1 0 0 1 0 1 0 …]
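The interpolate-and-evaluate step can be sketched at toy size (k = 4, h = 2, t = 2, q = 5). This is a hedged sketch, not the lecture's exact variant: the tensor-Lagrange interpolation and all names are my own, and individual degree < h is used per variable so that the interpolation is a square system.

```python
# Hedged sketch of the encoding step: place message bits on S^t via Emb,
# interpolate a low-degree p_m with p_m(Emb(i)) = m_i, evaluate everywhere.
from itertools import product

q = 5                      # field size
S = [0, 1]                 # subset of F_q, |S| = h = 2; h^t = 4 >= k
t = 2
m = [0, 1, 1, 0]           # message, k = 4 bits

emb = list(product(S, repeat=t))          # Emb: [k] -> S^t (1-1)

def lagrange(s, x):
    """Lagrange basis on S = {0,1}: l_0(x) = 1 - x, l_1(x) = x."""
    return (1 - x) % q if s == 0 else x % q

def p_m(pt):
    """Tensor-Lagrange interpolant with p_m(Emb(i)) = m_i (degree < h per variable)."""
    total = 0
    for i, e in enumerate(emb):
        term = m[i]
        for coord in range(t):
            term = term * lagrange(e[coord], pt[coord]) % q
        total = (total + term) % q
    return total

# check the interpolation constraints, then produce the evaluation table
assert all(p_m(e) == m[i] for i, e in enumerate(emb))
codeword = [p_m(x) for x in product(range(q), repeat=t)]   # length q^t = 25
print(codeword[:5])
```

The real encoding would additionally pass each of the q^t field elements through the Hadamard code.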
Decoding
Enc(m): 0 1 1 0 0 0 1 0 0 0 0 1
R:      0 0 1 0 1 0 1 0 0 0 1 0
• small circuit C computing R, agreement ½ + ε
• Decoding step 1:
  – produce circuit C' from C
    • given x ∈ F_q^t, outputs a "guess" for p_m(x)
    • C' computes {z : Had(z) has agreement ½ + ε/2 with the x-th block}, outputs a random z in this set
Decoding
• Decoding step 1 (continued):
  – for at least an ε/2 fraction of blocks, agreement within the block is at least ½ + ε/2
  – Johnson Bound: when this happens, the list size is S = O(1/ε²), so the probability C' is correct is 1/S
  – altogether:
    • Pr_x[C'(x) = p_m(x)] ≥ Ω(ε³)
    • C' makes q queries to C
    • C' runs in time poly(q)
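Step 1's inner computation can be simulated by brute force for one tiny block (illustrative sizes and names, not the lecture's): list every z whose Hadamard encoding agrees with the corrupted block on more than a ½ + ε/2 fraction of positions.

```python
# Hedged sketch: brute-force the list {z : Had(z) has agreement 1/2 + eps/2
# with the x-th block} for a single corrupted Hadamard block.
from itertools import product
import random

l = 4                                        # z in {0,1}^l, block length 2^l

def had(z):
    """Hadamard encoding of z: inner product mod 2 with every y in {0,1}^l."""
    return [sum(zi * yi for zi, yi in zip(z, y)) % 2
            for y in product([0, 1], repeat=l)]

random.seed(0)
z_true = (1, 0, 1, 1)
block = had(z_true)[:]
for pos in random.sample(range(2**l), 3):    # corrupt a few positions
    block[pos] ^= 1

eps = 0.5
threshold = (0.5 + eps / 2) * 2**l
candidates = [z for z in product([0, 1], repeat=l)
              if sum(a == b for a, b in zip(had(z), block)) > threshold]
# C' would output a random element of this (Johnson-bound-small) list
print(candidates)
```

Here only z_true survives: any other z agrees with Had(z_true) on exactly half the positions, and 3 corruptions cannot lift that to the threshold.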
Decoding
p_m: 5 2 7 1 2 9 0 3 6 8 3
R':  5 9 7 1 6 9 0 3 6 8 1
• small circuit C' computing R', agreement ε' = Ω(ε³)
• Decoding step 2:
  – produce circuit C'' from C'
    • given x ∈ Emb({1, 2, …, k}), outputs p_m(x)
    • idea: restrict p_m to a random curve; apply efficient R-S list-decoding; fix "good" random choices
Restricting to a curve
• points x = β_1, β_2, β_3, …, β_r ∈ F_q^t specify a degree-r curve L: F_q → F_q^t
  – w_1, w_2, …, w_r are distinct elements of F_q
  – for each i, L_i: F_q → F_q is the degree-r polynomial for which L_i(w_j) = (β_j)_i for all j
  – write p_m(L(z)) to mean p_m(L_1(z), L_2(z), …, L_t(z)); this is a degree-r·h·t univariate polynomial
  – note p_m(L(w_1)) = p_m(x)
[figure: the curve L through x = β_1, β_2, β_3, …, β_r in F_q^t]
Restricting to a curve
• Example:
  – p_m(x_1, x_2) = x_1^2·x_2^2 + x_2
  – w_1 = 1, w_2 = 0
  – β_1 = (2,1), β_2 = (1,0) in F_q^t
  – L_1(z) = 2z + 1·(1-z) = z + 1
  – L_2(z) = 1·z + 0·(1-z) = z
  – p_m(L(z)) = (z+1)^2·z^2 + z = z^4 + 2z^3 + z^2 + z
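The worked example above can be checked numerically (over the integers, which suffices for a polynomial identity):

```python
# Check: restricting p_m(x1, x2) = x1^2 x2^2 + x2 to the curve
# L(z) = (z + 1, z) should give the univariate z^4 + 2z^3 + z^2 + z.
def p_m(x1, x2):
    return x1**2 * x2**2 + x2

def L(z):
    # L1 interpolates L1(1) = 2, L1(0) = 1; L2 interpolates L2(1) = 1, L2(0) = 0
    return (z + 1, z)

restricted = lambda z: p_m(*L(z))
claimed = lambda z: z**4 + 2 * z**3 + z**2 + z

# agreement on more points than the degree forces polynomial equality
assert all(restricted(z) == claimed(z) for z in range(10))
print([restricted(z) for z in range(4)])     # [0, 5, 38, 147]
```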
Decoding
p_m: 5 2 7 1 2 9 0 3 6 8 3
R':  5 9 7 1 6 9 0 3 6 8 1
• small circuit C' computing R', agreement ε' = Ω(ε³)
• Decoding step 2 (continued):
  – pick random w_1, w_2, …, w_r and β_2, β_3, …, β_r to determine the curve L
  – points on L are (r-1)-wise independent
  – random variable: Agr = |{z : C'(L(z)) = p_m(L(z))}|
  – E[Agr] = ε'q and Pr[Agr < (ε'q)/2] < O(1/(ε'q))^((r-1)/2)
Decoding
p_m: 5 2 7 1 2 9 0 3 6 8 3
R':  5 9 7 1 6 9 0 3 6 8 1
• small circuit C' computing R', agreement ε' = Ω(ε³)
• Decoding step 2 (continued):
  – agr = |{z : C'(L(z)) = p_m(L(z))}| is ≥ (ε'q)/2 with very high probability
  – compute, using Reed-Solomon list-decoding, the list:
    {q(z) : deg(q) ≤ r·h·t, Pr_z[C'(L(z)) = q(z)] ≥ ε'/2}
  – if agr ≥ (ε'q)/2 then p_m(L(·)) is in this set!
Decoding
• Decoding step 2 (continued):
  – assuming (ε'q)/2 > (2·r·h·t·q)^(1/2)
  – Reed-Solomon list-decoding step:
    • running time = poly(q)
    • list size S ≤ 4/ε'
  – probability the list fails to contain p_m(L(·)) is O(1/(ε'q))^((r-1)/2)
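For intuition, the Reed-Solomon list-decoding step can be simulated by brute force at toy size (q = 7, degree bound 1; a real list decoder, e.g. Sudan's, achieves poly(q) time without enumeration). Parameters and names here are illustrative.

```python
# Hedged sketch: brute-force RS list decoding - keep every low-degree q(z)
# agreeing with the received word on at least the threshold number of points.
from itertools import product

q = 7                                  # field size
d = 1                                  # degree bound (r*h*t in the lecture)

def poly_eval(coeffs, z):
    """Evaluate c0 + c1*z + ... over F_q (coeffs low-to-high)."""
    res = 0
    for c in reversed(coeffs):
        res = (res * z + c) % q
    return res

true_poly = (3, 2)                     # stands in for p_m(L(z)) = 3 + 2z
received = [poly_eval(true_poly, z) for z in range(q)]
received[1] = (received[1] + 1) % q    # corrupt two positions
received[4] = (received[4] + 3) % q

threshold = 5                          # plays the role of (eps' * q) / 2
decoded_list = [cs for cs in product(range(q), repeat=d + 1)
                if sum(poly_eval(cs, z) == received[z] for z in range(q)) >= threshold]
print(decoded_list)                    # the true polynomial survives in the list
```

Two distinct degree-≤1 polynomials agree on at most one point, so with only two corruptions no impostor can reach the threshold.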
Decoding
• Decoding step 2 (continued):
  – Tricky:
    • functions in the list are determined by the set L(·), independent of the parameterization of the curve
    • regard w_2, w_3, …, w_r as random points on the curve L
    • for q ≠ p_m(L(·)):
      Pr[q(w_i) = p_m(L(w_i))] ≤ (rht)/q
      Pr[∀i, q(w_i) = p_m(L(w_i))] ≤ [(rht)/q]^(r-1)
      Pr[∃q in list s.t. ∀i, q(w_i) = p_m(L(w_i))] ≤ (4/ε')·[(rht)/q]^(r-1)
Decoding
• Decoding step 2 (continued):
  – with probability ≥ 1 - O(1/(ε'q))^((r-1)/2) - (4/ε')·[(rht)/q]^(r-1):
    • the list contains q* = p_m(L(·))
    • q* is the unique q in the list for which q(w_i) = p_m(L(w_i)) (= p_m(β_i)) for i = 2, 3, …, r
  – circuit C'':
    • hardwire w_1, w_2, …, w_r and β_2, β_3, …, β_r so that for all x ∈ Emb({1, 2, …, k}) both events occur
    • hardwire p_m(β_i) for i = 2, …, r
    • on input x, find q*, output q*(w_1) (= p_m(x))
Decoding
• Putting it all together:
  – C approximating f' is used to construct C'
    • C' makes q queries to C
    • C' runs in time poly(q)
  – C' is used to construct C'' computing f exactly
    • C'' makes q queries to C'
    • C'' has r-1 elements of F_q^t and 2r-1 elements of F_q hardwired
    • C'' runs in time poly(q)
  – C'' has size poly(q, r, t, size of C)
Picking parameters
• k truth-table size of f, hard for circuits of size s
• q field size, h R-M degree, t R-M dimension
• r degree of curve used in decoding
• constraints:
  – h^t ≥ k (to accommodate a message of length k)
  – ε^6·q^2 = Ω(rhtq) (for R-S list-decoding)
  – k·[O(1/(ε'q))^((r-1)/2) + (4/ε')·[(rht)/q]^(r-1)] < 1 (so there is a "good" fixing of the random bits)
• Pick: h = s, t = (log k)/(log s)
• Pick: r = Θ(log k), q = Θ(rht·ε^(-6))
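A quick numeric sanity check of these picks, with illustrative values of my own (k = 2^40, s = 2^10, ε = 1/s as a borderline choice):

```python
# Hedged numeric check: with h = s and t = (log k)/(log s), h^t = k, and
# q = Theta(r*h*t*eps^-6) stays polynomial in s. Values are illustrative.
import math

k = 2**40                              # truth-table size of f
s = 2**10                              # circuit-size lower bound for f
h = s
t = math.log(k) / math.log(s)          # = 4 here
assert math.isclose(h**t, k)           # h^t >= k: the message fits

r = math.ceil(math.log2(k))            # r = Theta(log k)
eps = 1 / s                            # borderline instance of eps^-1 < s
q = r * h * t * eps**-6                # q = Theta(r*h*t*eps^-6)
print(t, r, q <= s**8)                 # q is poly(s)
```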
Picking parameters
• k truth-table size of f, hard for circuits of size s
• q field size, h R-M degree, t R-M dimension
• r degree of curve used in decoding
• h = s, t = (log k)/(log s), ε^(-1) < s
• r = Θ(log k), q = Θ(rht·ε^(-6))
• Claim: truth table of f' computable in time poly(k) (so f' ∈ E if f ∈ E)
  – poly(q^t) for R-M encoding
  – q ≤ poly(s), so q^t ≤ poly(s)^t = poly(h)^t = poly(k)
Picking parameters
• k truth-table size of f, hard for circuits of size s
• q field size, h R-M degree, t R-M dimension
• r degree of curve used in decoding
• h = s, t = (log k)/(log s), ε^(-1) < s
• r = Θ(log k), q = Θ(rht·ε^(-6))
• Claim: f' s'-approximable by C implies f computable exactly in size s by C'', for s' = s^Ω(1)
  – C has size s' and agreement ε = 1/s' with f'
  – C'' has size poly(q, r, t, size of C) = s
Putting it all together
• Theorem 1 (IW, STV): If E contains functions that require size 2^Ω(n) circuits, then E contains 2^Ω(n)-unapproximable functions. (proof on next slide)
• Theorem (NW): If E contains 2^Ω(n)-unapproximable functions, then BPP = P.
• Theorem (IW): E requires exponential-size circuits ⇒ BPP = P.
Putting it all together
• Proof of Theorem 1:
  – let f = {f_n} be hard for size s(n) = 2^(δn) circuits
  – define f' = {f_n'} to be the just-described encoding of (the truth tables of) f = {f_n}
  – two claims we just showed:
    • f' is in E since f is
    • if f' is s'(n) = 2^(δ'n)-approximable, then f is computable exactly by size s(n) = 2^(δn) circuits
Extractors
• PRGs: can remove randomness from algorithms
  – based on an unproven assumption
  – polynomial slow-down
  – not applicable in other settings
• Question: can we use "real" randomness?
  – physical source
  – imperfect: biased, correlated
Extractors
• "Hardware" side
  – what physical source?
• "Software" side
  – what is the minimum we need from the physical source?
Extractors
• imperfect sources:
  – "stuck bits": e.g. 111111…
  – "correlation": bits repeating one another
  – "more insidious correlation": perfect squares
• there are specific ways to get independent unbiased random bits from specific imperfect physical sources
Extractors
• want to assume we don't know the details of the physical source
• a general model capturing all of these?
  – yes: "min-entropy"
• a universal procedure for all imperfect sources?
  – yes: "extractors"
Min-entropy
• General model of a physical source with k < n bits of hidden randomness
[figure: a set of 2^k strings inside {0,1}^n; the source string is sampled uniformly from this set]
• Definition: random variable X on {0,1}^n has min-entropy min_x -log(Pr[X = x])
  – min-entropy k implies no string has weight more than 2^(-k)
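The definition in code, as a hedged sketch (toy distributions and names of my own):

```python
# Sketch: min-entropy H_min(X) = min_x -log2 Pr[X = x], computed for a source
# that is uniform on a set of 2^k strings in {0,1}^n, and for a biased source.
import math

n, k = 4, 2
support = ["0000", "0101", "1010", "1111"]       # 2^k strings out of 2^n
pr = {x: 1 / len(support) for x in support}      # sampled uniformly from the set

def min_entropy(dist):
    """min over x of -log2 Pr[X = x]: no string has weight more than 2^-H_min."""
    return min(-math.log2(p) for p in dist.values() if p > 0)

print(min_entropy(pr))          # 2.0 = k bits of hidden randomness

# a biased source has lower min-entropy than its support size suggests
biased = {"0000": 0.5, "0101": 0.25, "1010": 0.125, "1111": 0.125}
print(min_entropy(biased))      # 1.0, determined by the heaviest string
```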
Extractor
• Extractor: universal procedure for "purifying" an imperfect source
[figure: a source string sampled from a set of 2^k strings in {0,1}^n, together with a t-bit seed, is fed into E, which outputs m near-uniform bits]
  – E is efficiently computable
  – the truly random seed acts as a "catalyst"
Extractor
• "(k, ε)-extractor": for all X with min-entropy k:
  – output fools all circuits C:
    |Pr_z[C(z) = 1] - Pr_{y, x←X}[C(E(x, y)) = 1]| ≤ ε
  – distributions E(X, U_t) and U_m are "ε-close" (L1 distance ≤ 2ε)
• Notice the similarity to PRGs
  – output of a PRG fools all efficient tests
  – output of an extractor fools all tests
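A small sketch of the ε-close condition (toy distributions of my own): statistical distance is half the L1 distance, so "L1 distance ≤ 2ε" is the same as every event's probability shifting by at most ε.

```python
# Sketch: eps-closeness via L1 distance between two distributions,
# here a hypothetical extractor output vs. the uniform distribution U_m.
def l1_distance(p, q):
    keys = set(p) | set(q)
    return sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in keys)

uniform = {"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}
output = {"00": 0.30, "01": 0.25, "10": 0.20, "11": 0.25}   # illustrative values

eps = l1_distance(uniform, output) / 2
print(eps)   # 0.05: any test's acceptance probability moves by at most 0.05
```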
Extractors
• Using extractors:
  – use the output in place of randomness in any application
  – alters the probability of any outcome by at most ε
• Main motivating application:
  – use the output in place of randomness in an algorithm
  – but how do we get the truly random seed?
  – enumerate all seeds, take the majority answer
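The enumerate-and-vote idea, as a structural sketch. Both the extractor E and the randomized algorithm here are toy stand-ins of my own (not a real construction); only the control flow — run on E(source, y) for every t-bit seed y and take the majority — reflects the slide.

```python
# Structural sketch: replace the truly random seed by enumerating all 2^t
# seeds and taking the majority answer over the runs.
from collections import Counter
from itertools import product

t = 3                                            # seed length

def E(source, seed):
    """Toy stand-in extractor: XOR-fold the source with the repeated seed."""
    return tuple(b ^ seed[i % t] for i, b in enumerate(source))

def randomized_alg(x, coins):
    """Toy stand-in randomized decision procedure (illustrative only)."""
    return (x + sum(coins)) % 2 == 0

def derandomized(x, source):
    votes = Counter(randomized_alg(x, E(source, y))
                    for y in product([0, 1], repeat=t))
    return votes.most_common(1)[0][0]            # majority over all 2^t seeds

print(derandomized(0, (1, 0, 1, 1, 0, 0)))
```

If the extractor output is ε-close to uniform for the weak source at hand, the majority answer is correct whenever the original algorithm errs with probability bounded away from ½ - ε.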
Extractors
[figure: a source string from a set of 2^k strings in {0,1}^n, together with a t-bit seed, is fed into E, which outputs m near-uniform bits]
• Goals:
                   good:          best:
  – short seed     O(log n)       log n + O(1)
  – long output    m = k^Ω(1)     m = k + t - O(1)
  – many k's       k = n^Ω(1)     any k = k(n)
Extractors
• a random function for E achieves the best parameters!
  – but we need explicit constructions
  – many are known; often complex and technical
  – optimal extractors are still open
• Trevisan Extractor:
  – insight: use the NW generator with the source string in place of the hard function
  – this works (!!)
  – proof slightly different from NW's, and easier