### slides

```text
A Faster Algorithm for Linear Programming and the Maximum Flow Problem I
Yin Tat Lee (MIT, Simons)
Joint work with Aaron Sidford
THE PROBLEM

Linear Programming
Consider the linear program (LP)
  min c^T x  subject to  Ax ≥ b
where A is an m × n matrix.
• m is the number of constraints.
• n is the number of variables.
(Figure: a polygon with n = 2, m = 6, and a smooth convex set with n = 2, m = ∞.)
Previous Results
• All of them are iterative methods.
• Start with some initial point x.
• While x is not optimal:
  – Improve x
• Time = (# of iterations) × (cost per iteration)
• This talk focuses on the # of iterations.
We call a step efficient if it
• runs in polynomial time, and
• doesn't use an LP solver.
Previous Results (Selected)
m = # of constraints, n = # of variables
(O(log(ε^{-1})) factors are omitted)

Year | Author                | # of iter  | Cost per iter        | Efficient steps
1947 | Dantzig               | 2^n        | Pivot                | Yes
1979 | Khachiyan             | n^2        | Update the ellipsoid | Yes
1984 | Karmarkar             | m          | Solve linear systems | Yes
1986 | Renegar               | √m         | Solve linear systems | Yes
1989 | Vaidya                | (mn)^{1/4} | Matrix inverse       | Yes
1994 | Nesterov, Nemirovskii | √n         | Compute volume       | No
2013 | Lee, Sidford          | √n         | Solve linear systems | Yes

Remark: In 2013, Mądry showed how to obtain m^{3/7} iterations for certain LPs!
Outline
(The previous-results table is shown again here as an outline slide.)
LP AND CENTER

A general framework
We can solve a linear program by maintaining a center:
• Somehow, get a "center" first.
• Put the cost constraint there and move it.
• Say we can move an h portion closer each time.
• After O(h^{-1}) steps, we are done.
Why center?
What if we don't try to maintain a center?
• It is just like the simplex method:
  "It is good now." "Still good." … "Oh, it touches the boundary. What do we do?"
• Maintaining a center avoids bad decisions by using global information!
A general framework
Formally, we have (say OPT = 0):
• t = 2^2014. Find the center of {c^T x ≤ t, Ax ≥ b}.
• While t is large:
  – t ← (1 − h)t for some fixed h > 0
  – Update the center of {c^T x ≤ t, Ax ≥ b}
This is called an interior point method.
The initial point is easy:
  min 2^2014 · θ + c^T x  over  Ax + θ·1 ≥ b, θ ≥ 0
A general way to define a center
Let p be a smooth convex function on Ω such that
• p(x) → +∞ as x → ∂Ω.
For example, the standard log barrier:
  p(x) = −Σ_i ln(a_i^T x − b_i) = −Σ_i ln(s_i(x))
Center = argmin_x p(x), where p is the barrier function.
QUALITY OF A CENTER

Rounding
• Assume the center is induced by some barrier function p.
• Look at the ellipsoid E induced by p at the center x.
• Call p a λ-rounding if E ⊂ Ω ⊂ λE for some λ.
Self-concordant barrier
• p is a λ-self-concordant barrier function for Ω if
  – p is smooth, and
  – p gives a λ-rounding.
(Figure: a barrier that is not smooth enough, and one with bad rounding.)

Rounding Algorithm
For a general barrier function p:
• Repeat:
  – Tighten the cost constraint.
  – Maintain the rounding ellipsoid induced by p.
Why λ iterations?
Think p(x) = −Σ_i ln(s_i(x)).
• Newton's method (using smoothness):
  Given x with p(x) − min_y p(y) < 0.5, we can find the center in O(1) steps.
Let x̄ be the old center. Using the smoothness, after moving the cost constraint we need
  p_new(x̄) − min_y p_new(y) ≤ 1/2,
which holds if we move the constraint only a 1/λ portion at a time.
So we can reduce the gap by a 1/λ portion per step, and it takes λ iterations.
Roughly speaking:
  Smoothness + λ-rounding gives an O(λ log(ε^{-1}))-iteration LP solver.
Quality of the analytic center is arbitrarily bad in n!
• Recall the standard log barrier function
  p(x) = −Σ_{i=1}^m ln(a_i^T x − b_i) = −Σ_{i=1}^m ln(s_i(x)).
• The center x = argmin_x p(x) is called the analytic center.
Is it tight?
• In practice, it takes ≤ 60 steps.
• Mizuno, Todd, and Ye showed it is "usually" correct on the first step.
• In 2014, Mut and Terlaky showed an example that really takes
  Ω(√m log(ε^{-1})) iterations, where ε^{-1} is exponential in n.
UNIVERSAL BARRIER FUNCTION

Universal Barrier Function
Theorem [NN94]: For any convex set Ω ⊂ R^n,
  p(x) = log vol((Ω − x)°)
is an O(n)-self concordant barrier function.
("Smaller" sets have larger polars; hence p(x) → ∞ as x → ∂Ω.)
Note that ∇²p(x) ~ the second moment matrix of (Ω − x)°.
Kannan–Lovász–Simonovits Lemma: For any convex set Ω, the second moment matrix
  M(Ω) = ∫_Ω (x − x̄)(x − x̄)^T dx
gives an O(n) rounding of Ω.

The cost of the Universal Barrier
• To get the second moment matrix, you need poly(n) samples.
• To get 1 sample, you need poly(n) iterations of a Markov chain.
• To do 1 iteration of the Markov chain, you need a separation oracle for (Ω − x)°.
• If Ω = {Ax ≥ b}, one needs to solve an LP.
Hence, one iteration requires solving poly(n) many LPs.

The problem:
Get an efficient O(n)-self concordant barrier function.
VOLUMETRIC BARRIER FUNCTION

Volumetric Barrier Function
In 1989, Vaidya showed
  p(x) = (1/2) log det(∇²φ(x)) + (n/m) φ(x),
where φ(x) = −Σ_i ln(s_i(x)). Why is it volumetric?
(Figure: the volumetric barrier's ellipsoid vs. the log barrier's.)
It is a (mn)^{1/2} barrier.
Why is the volumetric barrier good?
  p(x) = (1/2) log det(∇²φ(x)) + (n/m) φ(x)
Around x, we have
  p(y) ~ −Σ_i (σ_i(x) + n/m) ln(s_i(y)),
where the leverage scores are
  σ_i(x) = a_i^T (∇²φ(x))^{-1} a_i / s_i(x)².
(Worked example on the slide: a small matrix A whose repeated row receives a smaller score.)
In general, Σ_i σ_i = n and 0 ≤ σ_i ≤ 1, and if the i-th row is repeated, σ_i decreases
accordingly: for the [0,1] interval with the constraint at 0 repeated k times, the repeated
constraints share the score that a single copy would have.
OUR BARRIER FUNCTION

Repeated Volumetric Barrier Function
Start from p^{(1)}(x) = (1/2) log det(∇²φ(x)) + (n/m) φ(x).
How about iterating
  p^{(k+1)}(x) = (1/2) log det(∇²p^{(k)}(x)) + (n/m) p^{(k)}(x)?
Suppose p^{(k)}(x) = −Σ_i w_i^{(k)} log s_i(x). Around x, we have
  p^{(k+1)}(y) ~ −Σ_i (σ_i(W^{(k)}, x) + n/m) log s_i(y).
So, in the limit we have
  p^{(∞)}(x) = −Σ_i w_i^{(∞)}(x) log s_i(x),
where the weight satisfies the fixed point
  w_i^{(∞)} = σ_i(W^{(∞)}, x) + n/m.
What is that weight?

What is that weight?
• Let ρ_i = a_i^T (∇²p^{(∞)}(x))^{-1} a_i / s_i(x)².
• If ρ_i ≤ 1 for all i, the ellipsoid induced by ∇²p^{(∞)}(x) is inside Ω.
• The weight w^{(∞)} represents the John ellipsoid of {y : ‖S^{-1}A(y − x)‖_∞ ≤ 1}.
Our condition (John ellipsoid): ρ_i = 1 if w_i ≠ 0.

Repeated Volumetric Barrier Function
• Recall p^{(∞)}(x) ~ ln det(∇²p^{(∞)}(x)). We get
  p^{(∞)}(x) ~ −ln vol(John ellipsoid of Ω ∩ (2x − Ω)).
(Symmetrize Ω around x, then find the John ellipsoid.)
The barrier function is not perfect!
• The path is only piecewise smooth, because the John ellipsoid may not touch every
constraint.
• p^{(∞)}(x) = max_{Σ_i w_i = n, w ≥ 0} ln det(A^T S^{-1} W S^{-1} A)
Our Barrier Function
• Standard log barrier:
  p(x) = −Σ_i ln(s_i(x))
• Volumetric barrier:
  p(x) = (1/2) log det(∇²φ(x)) + (n/m) φ(x)
• John ellipsoid barrier:
  p^{∞}(x) = max_{Σ_i w_i = n, w ≥ 0} ln det(A^T S^{-1} W S^{-1} A)
• Regularized John ellipsoid barrier (1):
  max_{w ≥ 0} ln det(A^T S^{-1} W^{1 − 1/log(…)} S^{-1} A) plus correction terms
• Regularized John ellipsoid barrier (2):
  max_{w ≥ 0} [ ln det(A^T S^{-1} W S^{-1} A) − Σ_i w_i ln w_i − … ]
Lewis Weight
We call w the ℓ_q Lewis weight for A if
  w_i = σ_i(W^{1/2 − 1/q} A) for all i,
i.e., w attains max_{w ≥ 0} ln det(A^T W^{1 − 2/q} A) under the appropriate normalization.
Thanks to Cohen and Peng, we know:
• Let B be an Õ(max(n, n^{q/2})) row sample of A, sampled according to w; then
  ‖Bx‖_q ~ ‖Ax‖_q for all x.
• E = A^T W^{1 − 2/q} A is the maximizer of
  max { det(E) : {x : x^T E x ≤ 1} ⊆ {x : ‖Ax‖_q ≤ 1} },
  i.e., the maximum-volume ellipsoid inscribed in the set.
• For q = ∞, {x : x^T A^T W A x ≤ 1} is the John ellipsoid for {x : ‖Ax‖_∞ ≤ 1}.
Computing the ℓ_q Lewis Weight
• Cohen and Peng showed how to compute it when q < 4 (the fixed-point iteration converges).
• The repeated volumetric barrier: w^{(1)} = (n/m)·1 and
  w^{(k+1)} = σ(W^{(k)}) + (n/m)·1.
  After Õ(log m) iterations and renormalization, this gives the "ℓ_∞" Lewis weight:
  w_i ~ σ_i(W^{1/2} A).
• Cohen, L., Peng, and Sidford showed that in fact a similar algorithm finds a
  constant-approximate ℓ_q Lewis weight for q > 2 in Õ(1) iterations.
CONCLUSION

Our Barrier
m = # of constraints, n = # of variables
Given any polytope {Ax ≥ b}, let
  p(x) = max_{w ≥ 0} [ ln det(A^T S^{-1} W S^{-1} A) − Σ_i w_i ln w_i − … ].
Theorem: The barrier function p gives an O(√n log(ε^{-1}))-iteration algorithm for LPs
of the form
  min_{Ax ≥ b} c^T x.
Algorithm:
• While the gap is large:
  – Move the cost constraint.
  – Maintain the regularized John ellipsoid.

However…
• My goal is to design a general LP algorithm fast enough to beat the best max-flow
  algorithm!
• We obtained an algorithm computing min_{Ax ≥ b} c^T x in Õ(√n log(ε^{-1})) iterations.
```
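The slides define the analytic center as the minimizer of the log barrier −Σ ln(aᵢᵀx − bᵢ). As a concrete aside (not from the talk), here is a minimal plain-Python sketch that finds the analytic center of a toy 2-D box with damped Newton steps; the polytope and all constants are invented for illustration.

```python
# Sketch: analytic center of {x : a_i^T x >= b_i} via damped Newton.
# Hypothetical 2-D example (the box [-1,1]^2); not the talk's code.

def slacks(A, b, x):
    return [r[0]*x[0] + r[1]*x[1] - bi for r, bi in zip(A, b)]

def newton_center(A, b, x, iters=50):
    for _ in range(iters):
        s = slacks(A, b, x)
        # gradient g = -sum_i a_i / s_i ; Hessian H = sum_i a_i a_i^T / s_i^2
        g = [-sum(r[k] / si for r, si in zip(A, s)) for k in range(2)]
        H = [[sum(r[j]*r[k] / si**2 for r, si in zip(A, s))
              for k in range(2)] for j in range(2)]
        det = H[0][0]*H[1][1] - H[0][1]*H[1][0]
        # Newton step d = -H^{-1} g (2x2 inverse by hand)
        d = [-( H[1][1]*g[0] - H[0][1]*g[1]) / det,
             -(-H[1][0]*g[0] + H[0][0]*g[1]) / det]
        t = 1.0
        while min(slacks(A, b, [x[0]+t*d[0], x[1]+t*d[1]])) <= 0:
            t /= 2  # damp the step to stay strictly feasible
        x = [x[0] + t*d[0], x[1] + t*d[1]]
    return x

# Box [-1,1]^2: constraints x0 >= -1, -x0 >= -1, x1 >= -1, -x1 >= -1.
A = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
b = [-1.0]*4
center = newton_center(A, b, [0.3, -0.2])
print(center)  # converges to (0, 0), the analytic center of the box
```

By symmetry, the analytic center of the box is the origin, so the run gives a quick sanity check of the gradient and Hessian formulas.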
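The "general framework" slides describe the interior point outer loop: maintain the center of {cᵀx ≤ t, Ax ≥ b} while moving the cost constraint closer. A small sketch of that loop, again on an invented 2-D box LP; the shrink factor 0.7 and step counts are ad hoc demo constants, not the talk's parameters.

```python
# Sketch of the interior point outer loop: keep the center of
# {c^T x <= t, Ax >= b} while shrinking t toward the optimum.
# Toy 2-D LP (min x0 over the box [-1,1]^2); constants are ad hoc.

def centering(A, b, c, t, x, steps=20):
    """Newton recentering for p(x) = -ln(t - c^T x) - sum_i ln(a_i^T x - b_i)."""
    rows = [((-c[0], -c[1]), -t)] + list(zip(A, b))  # cost constraint as a row
    for _ in range(steps):
        g = [0.0, 0.0]; H = [[0.0, 0.0], [0.0, 0.0]]
        for r, bi in rows:
            s = r[0]*x[0] + r[1]*x[1] - bi
            for j in range(2):
                g[j] -= r[j] / s
                for k in range(2):
                    H[j][k] += r[j]*r[k] / s**2
        det = H[0][0]*H[1][1] - H[0][1]*H[1][0]
        d = [-( H[1][1]*g[0] - H[0][1]*g[1]) / det,
             -(-H[1][0]*g[0] + H[0][0]*g[1]) / det]
        step = 1.0
        while min(r[0]*(x[0]+step*d[0]) + r[1]*(x[1]+step*d[1]) - bi
                  for r, bi in rows) <= 0:
            step /= 2                    # damp to stay strictly feasible
        x = [x[0] + step*d[0], x[1] + step*d[1]]
    return x

A = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
b = [-1.0]*4
c = (1.0, 0.0)               # minimize x0; the optimum value is -1
x, t = [0.0, 0.0], 2.0       # strictly feasible start with c^T x < t
for _ in range(400):
    val = c[0]*x[0] + c[1]*x[1]
    gap = t - val
    if gap < 1e-9:
        break
    t = val + 0.7*gap        # move the cost constraint closer
    x = centering(A, b, c, t, x)
print(x[0])                  # approaches the optimum x0 = -1
```

Because the old center stays near the new one, a handful of Newton steps recenters after each shrink, which is exactly the "smoothness" argument on the "Why λ iterations?" slides.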
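The volumetric-barrier slides lean on two leverage-score facts: Σᵢ σᵢ = n, and a repeated constraint's score automatically drops. A tiny check of both on an invented box example (σᵢ = aᵢᵀ(AᵀS⁻²A)⁻¹aᵢ / sᵢ² at a point x):

```python
# Sketch: leverage scores sigma_i = a_i^T (A^T S^-2 A)^{-1} a_i / s_i^2,
# illustrating the slides' claims: sum_i sigma_i = n, and duplicating a
# constraint lowers its score. Toy 2-D box example.

def leverage_scores(A, b, x):
    s = [r[0]*x[0] + r[1]*x[1] - bi for r, bi in zip(A, b)]
    # M = A^T S^{-2} A (2x2)
    M = [[sum(r[j]*r[k] / si**2 for r, si in zip(A, s))
          for k in range(2)] for j in range(2)]
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    Minv = [[M[1][1]/det, -M[0][1]/det], [-M[1][0]/det, M[0][0]/det]]
    return [(r[0]*(Minv[0][0]*r[0] + Minv[0][1]*r[1])
           + r[1]*(Minv[1][0]*r[0] + Minv[1][1]*r[1])) / si**2
            for r, si in zip(A, s)]

box = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
b4 = [-1.0]*4
sig = leverage_scores(box, b4, [0.0, 0.0])
print(sig, sum(sig))    # each score is 0.5; the sum is n = 2

# Duplicate the first constraint: the copies now share the weight.
dup = box + [(1.0, 0.0)]
sig2 = leverage_scores(dup, b4 + [-1.0], [0.0, 0.0])
print(sig2, sum(sig2))  # the duplicated rows get 1/3 each; sum is still 2
```

The duplicated row's score falls from 1/2 to 1/3 while the total stays n, which is the discounting behavior the slides contrast with the plain log barrier.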
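For the Lewis-weight slides, the Cohen–Peng fixed-point view for q &lt; 4 can be sketched directly: iterate wᵢ ← (aᵢᵀ(AᵀW^{1−2/q}A)⁻¹aᵢ)^{q/2}; at the fixed point Σᵢ wᵢ = n. The matrix below and the choice q = 3 are invented for illustration.

```python
# Sketch: Cohen-Peng style fixed-point iteration for l_q Lewis weights
# (q < 4):  w_i <- (a_i^T (A^T W^{1-2/q} A)^{-1} a_i)^{q/2}.
# At the fixed point, sum_i w_i = n (here n = 2). Toy example with q = 3.

q = 3.0
A = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0), (1.0, 1.0)]
w = [1.0]*len(A)
for _ in range(200):
    e = 1.0 - 2.0/q
    # M = A^T W^{1-2/q} A (2x2)
    M = [[sum(wi**e * r[j]*r[k] for wi, r in zip(w, A))
          for k in range(2)] for j in range(2)]
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    Minv = [[M[1][1]/det, -M[0][1]/det], [-M[1][0]/det, M[0][0]/det]]
    w = [(r[0]*(Minv[0][0]*r[0] + Minv[0][1]*r[1])
        + r[1]*(Minv[1][0]*r[0] + Minv[1][1]*r[1])) ** (q/2.0)
         for r in A]
print(w, sum(w))  # the Lewis weights sum to n = 2
```

The iteration is a contraction for q &lt; 4 (per Cohen and Peng), which is why this naive loop converges; for larger q one needs the different scheme mentioned at the end of the Lewis-weight slides.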