### Hardness Amplification within NP against Deterministic Algorithms - Microsoft Research

Hardness Amplification within NP against Deterministic Algorithms

Parikshit Gopalan (U Washington & MSR-SVC)
Venkatesan Guruswami (U Washington & IAS)
Why Hardness Amplification?

• Goal: Show there are hard problems in NP.
• Lower bounds are out of reach.
• Cryptography and derandomization require average-case hardness.
• Revised Goal: Relate various kinds of hardness assumptions. Start from mild hardness, amplify.
Hardness Amplification

Generic Amplification Theorem: If there are problems in class A that are mildly hard for algorithms in Z, then there are problems in A that are very hard for Z.

A ∈ {NP, EXP, PSPACE}; Z ∈ {P/poly, BPP, P}.
PSPACE versus P/poly, BPP

Long line of work:
Theorem: If there are problems in PSPACE that are worst-case hard for P/poly (BPP), then there are problems that are ½ + ε hard for P/poly (BPP).
NP versus P/poly

O'Donnell:
Theorem: If there are problems in NP that are 1 - δ hard for P/poly, then there are problems that are ½ + ε hard.
Starts from an average-case assumption.
NP versus BPP

Trevisan'03:
Theorem: If there are problems in NP that are 1 - δ hard for BPP, then there are problems that are ¾ + ε hard.
NP versus BPP

Trevisan'05:
Theorem: If there are problems in NP that are 1 - δ hard for BPP, then there are problems that are ½ + ε hard.
Buresh-Oppenheim-Kabanets-Santhanam: alternate proof via monotone codes.
Optimal up to ε.
Our results

Amplification against P.
Theorem 1: If there is a problem in NP that is 1 - δ hard for P, then there is a problem which is ¾ + ε hard, for ε = 1/(log n)^100.
Theorem 2: If there is a problem in PSPACE that is 1 - δ hard for P, then there is a problem which is ¾ + ε hard, for ε = 1/n^100.
Compare: Trevisan takes 1 - δ hardness to 7/8 + ε for PSPACE; Goldreich-Wigderson show unconditional hardness for EXP against P.
Outline of This Talk:
1. Amplification via Decoding.
2. Deterministic Local Decoding.
3. Amplification within NP.
Amplification via Decoding

[Figure: the truth table of f (mildly hard) is Encoded into the truth table of g (wildly hard); an approximation to g is Decoded back to f.]
Amplification via Decoding.
Case Study: PSPACE versus BPP.

[Figure: in PSPACE, the truth table of f (mildly hard) is Encoded into the truth table of g (wildly hard).]
• f's table has size 2^n.
• g's table has size 2^{2n}.
• Encoding in space n^100.
Amplification via Decoding.
Case Study: PSPACE versus BPP.

[Figure: in BPP, an approximation to g is Decoded back to f.]
• Randomized local decoder.
• List-decoding beyond ¼ error.
Amplification via Decoding.
Case Study: NP versus BPP.

[Figure: in NP, the truth table of f (mildly hard) is Encoded into the truth table of g (wildly hard).]
• g is a monotone function M of f.
• M is computable in NTIME(n^100).
• M needs to be noise-sensitive.
Amplification via Decoding.
Case Study: NP versus BPP.

[Figure: in BPP, an approximation to g is Decoded into an approximation to f.]
• Randomized local decoder.
• Monotone codes.
• Can only approximate f.
Outline of This Talk:
1. Amplification via Decoding.
2. Deterministic Local Decoding.
3. Amplification within NP.
Deterministic Amplification.

Deterministic local decoding?
[Figure: in P, a corrupted approximation to g is Decoded back to f.]
Deterministic Amplification.

Deterministic local decoding?
[Figure: in P, a corrupted codeword of length 2^n · n^100 is Decoded back to f.]
• An adversary can force an error on any bit.
• Need a near-linear length (≈ 2^n) encoding.
• Monotone codes for NP.
Deterministic Local Decoding …
… up to the unique decoding radius.

• Deterministic local decoding up to 1 - δ from ¾ + ε agreement.
• Monotone code construction with similar parameters.
Main tool: ABNNR codes + GMD decoding [Guruswami-Indyk, Akavia-Venkatesan].
Open Problem: Go beyond unique decoding.
The ABNNR Construction.

Bipartite expander graph:
• 2^n vertices.
• Degree n^100.

[Figure: each left vertex holds one bit of the message; each right vertex concatenates its neighbors' bits into one large-alphabet symbol.]

• Starts from a binary code with small distance.
• Gives a code of large distance over a large alphabet.
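To make the construction concrete, here is a minimal Python sketch (the graph, degree, and sizes are illustrative toys, not the 2^n-vertex, degree-n^100 expander of the slides):

```python
def abnnr_encode(bits, neighbors):
    """ABNNR encoding: the symbol at each right vertex is the tuple of
    bits held by its left neighbors (one large-alphabet symbol each)."""
    return [tuple(bits[v] for v in nbrs) for nbrs in neighbors]

# Toy degree-3 bipartite graph on 4 + 4 vertices (not an actual expander).
neighbors = [[0, 1, 2], [1, 2, 3], [2, 3, 0], [3, 0, 1]]
message = [1, 0, 1, 0]
print(abnnr_encode(message, neighbors))
# [(1, 0, 1), (0, 1, 0), (1, 0, 1), (0, 1, 0)]
```

Any two distinct balanced messages now disagree on many right symbols, which is the distance amplification the slide describes.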
Concatenated ABNNR Codes.

[Figure: each large-alphabet symbol on the right is re-encoded by an inner binary code.]

Inner code of distance ½:
• Gives a binary code of distance ½.
• [GI]: ¼ error, not local.
• [T]: 1/8 error, local.
Decoding ABNNR Codes.

[Figure: each received inner codeword is decoded to the nearest large-alphabet symbol.]

Decode inner codes:
• Works if error < ¼.
• Fails if error > ¼.
Decoding ABNNR Codes.

[Figure: each left bit is recovered by a majority vote over its right neighbors' decoded symbols.]

Majority vote on the LHS.
[Trevisan]: Corrects a 1/8 fraction of errors.
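The majority-vote step can be sketched as follows, assuming the inner codes have already been decoded to symbols (some possibly wrong); the toy graph and symbol values are hypothetical:

```python
from collections import Counter

def majority_decode_left(n, neighbors, symbols):
    """Recover each left bit by a majority vote over every position where
    it appears in its right neighbors' decoded symbols."""
    votes = [[] for _ in range(n)]
    for sym, nbrs in zip(symbols, neighbors):
        for pos, v in enumerate(nbrs):
            votes[v].append(sym[pos])
    return [Counter(vs).most_common(1)[0][0] for vs in votes]

neighbors = [[0, 1, 2], [1, 2, 3], [2, 3, 0], [3, 0, 1]]
# Decoded symbols for message [1, 0, 1, 0], with the first symbol wrong.
symbols = [(0, 0, 0), (0, 1, 0), (1, 0, 1), (0, 1, 0)]
print(majority_decode_left(4, neighbors, symbols))  # [1, 0, 1, 0]
```

Note the vote still recovers the message even though one right symbol was decoded incorrectly; on an expander, a small fraction of bad symbols is outvoted at most left vertices.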
GMD decoding [Forney'67]

Attach a confidence c ∈ [0,1] to each decoded inner codeword.
If inner decoding succeeds, the error rate lies in [0, ¼]:
• If 0 error, confidence is 1.
• If ¼ error, confidence is 0.
c = 1 - 4·(error rate).
The inner decoder could return a wrong answer with high confidence… but this requires an error rate close to ½.
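The confidence rule above is a one-liner (a direct transcription of the slide's formula, clamped at 0 for error rates past the unique-decoding radius):

```python
def forney_confidence(inner_error_fraction):
    """Confidence of a successfully decoded inner codeword: 1 at zero
    error, falling linearly to 0 at the unique-decoding radius 1/4."""
    return max(0.0, 1.0 - 4.0 * inner_error_fraction)

print(forney_confidence(0.0))    # 1.0
print(forney_confidence(0.125))  # 0.5
print(forney_confidence(0.25))   # 0.0
```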
GMD Decoding for ABNNR Codes.

[Figure: each decoded inner symbol carries a confidence c_i.]

GMD decoding: pick a threshold, erase low-confidence symbols, decode. Non-local.
Our approach: weighted majority.
Thm: Corrects a ¼ fraction of errors locally.
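A minimal sketch of the weighted-majority idea, assuming each inner decoding returns a symbol together with its confidence c_i; the graph and values are toy inputs, not the talk's parameters:

```python
def weighted_majority_left(n, neighbors, symbols, confidences):
    """Local GMD: each left bit is the sign of a vote over its right
    neighbors' decoded symbols, weighted by their confidences c_i."""
    score = [0.0] * n  # positive score -> bit 1, otherwise bit 0
    for sym, nbrs, c in zip(symbols, neighbors, confidences):
        for pos, v in enumerate(nbrs):
            score[v] += c if sym[pos] == 1 else -c
    return [1 if s > 0 else 0 for s in score]

neighbors = [[0, 1, 2], [1, 2, 3], [2, 3, 0], [3, 0, 1]]
# Message [1, 0, 1, 0]; the first symbol is garbage but has confidence 0,
# so it is effectively erased without any global threshold step.
symbols = [(0, 0, 0), (0, 1, 0), (1, 0, 1), (0, 1, 0)]
confidences = [0.0, 1.0, 1.0, 1.0]
print(weighted_majority_left(4, neighbors, symbols, confidences))
# [1, 0, 1, 0]
```

Replacing erase-then-decode with a per-vertex weighted vote is what makes the decoder local: each left bit looks only at its own neighborhood.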
GMD Decoding for ABNNR Codes.

Thm: GMD decoding corrects a ¼ fraction of errors.

Proof Sketch:
1. Globally, good nodes outweigh bad nodes.
2. Locally, this holds for most neighborhoods of vertices on the LHS.
The proof is similar to the Expander Mixing Lemma.
Outline of This Talk:
1. Amplification via Decoding.
2. Deterministic Local Decoding.
3. Amplification within NP.
   • Finding an inner monotone code [BOKS].
   • Implementing GMD decoding.
The BOKS construction.

[Figure: a balanced k-bit string x is mapped to a k^r-bit string T(x).]

• T(x): Sample an r-tuple from x, apply the Tribes function.
• If x, y are balanced and δ(x,y) > ε, then δ(T(x), T(y)) ≈ ½.
• If x, y are very close, so are T(x), T(y).
• Decoding: brute force.
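A sketch of the T(x) map under simplifying assumptions: the r-tuples here are hand-picked rather than sampled, and the Tribes block width is an illustrative toy value.

```python
def tribes(bits, width):
    """Tribes: OR of ANDs over consecutive disjoint blocks of `width` bits.
    Monotone: flipping any input 0 -> 1 can only flip the output 0 -> 1."""
    return int(any(all(bits[i:i + width]) for i in range(0, len(bits), width)))

def boks_encode(x, tuples, width):
    """One output bit per chosen tuple of coordinates of x; the whole map
    stays monotone because Tribes is monotone."""
    return [tribes([x[i] for i in t], width) for t in tuples]

x = [1, 0, 1, 1, 0, 0]                 # a balanced message
tuples = [(0, 2, 3, 4), (1, 2, 4, 5)]  # hand-picked 4-tuples (r = 4)
print(boks_encode(x, tuples, width=2))  # [1, 0]
```

Monotonicity is what keeps the encoded problem inside NP, and the noise sensitivity of Tribes gives the distance property on balanced inputs.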
GMD Decoding for Monotone Codes.

[Figure: the monotone inner code concatenated with ABNNR; each inner decoding carries a confidence c_i.]

• Apply GMD decoding.
• Analysis becomes harder.
• Inner decoder returns the closest balanced message.
Thm: Decoder corrects a ¼ fraction of error approximately.
GMD Decoding for Monotone Codes.

• Inner decoder finds the closest balanced message.
• Even with 0 error, the decoder need not return the original message.
• Good nodes have few errors.
Thm: Decoder corrects a ¼ fraction of error approximately.
Beyond Unique Decoding…

Deterministic local list-decoder:
[Figure: a corrupted codeword.]
Set L of machines such that: