### slides - SAS 2011

```Interpolation and Widening
Ken McMillan
Microsoft Research
Interpolation and Widening
• Widening/Narrowing and Craig Interpolation are two approaches to
computing inductive invariants of transition systems.
• Both are essentially methods of generalizing from proofs about bounded
executions to proofs about unbounded executions.
• In this talk, we'll consider the relationship between these two
approaches, from both theoretical and practical points of view.
• We consider only property-proving applications, since interpolation applies
only when there is a property to prove.
Intuitive comparison
[Figure: two iteration diagrams on a lattice ordered from stronger (⊥) to
weaker (lfp), comparing widening/narrowing iterations with interpolation
iterations; the interpolation side approaches an inductive invariant.]
Abstractions as proof systems
• We will view both widening/narrowing and interpolation as proof systems
– In particular, local proof systems
• A proof system (or abstraction) consists of:
– A logical language L (abstract domain)
– A set of sound deduction rules
• A choice of proof system constitutes a bias, or domain knowledge
– Rich proof system = weak bias
– Impoverished proof system = strong bias
By restricting the logical language and deduction rules, the analysis
designer expresses a space of possible proofs in which the analysis
tool should search.
Fundamental problems
• Relevance
– We must avoid a combinatorial explosion of deductions
– Thus, deduction must be restricted to facts relevant to the property
• Convergence
– Eventually, the proofs for bounded executions must generalize to a proof
about unbounded executions.
Different approaches
We will see that the two methods have many aspects in common, but
take different approaches to these fundamental problems.
• Widening/narrowing relies on a restricted proof system
– Relevance is enforced by strong bias
– Convergence is also enforced in this way, but proof of a property is not
guaranteed
• Interpolation uses a rich proof system
– Relevance is determined by Occam's razor
• relevant deductions occur in simple property proofs
– Convergence is not guaranteed, but
• approached heuristically again using Occam's razor
In the interpolation approach, we rely on well-developed theorem
proving approaches to search large spaces for simple proofs.
Proofs
• A proof is a series of deductions, from premises to conclusions
• Each deduction is an instance of an inference rule
• Usually, we represent a proof as a tree...
P1
Premises
P2
P3 P4
P5
P1 P2
C
Conclusion
C
Inference rules
• The inference rules depend on the theory we are reasoning in
Boolean logic
Linear arithmetic
Resolution rule:
Sum rule:
p ∨ C    ¬p ∨ D
───────────────
     C ∨ D

x1 ≤ y1    x2 ≤ y2
──────────────────
x1 + x2 ≤ y1 + y2
Invariants from unwindings
• A simple way to generalize from bounded to unbounded proofs:
– Consider just one program execution path, as straight-line program
– Construct a proof for this straight-line program
– See if this proof contains an inductive invariant proving the property
• Example program:
x = y = 0;
while(*)
x++; y++;
while(x != 0)
x--; y--;
assert (y == 0);
invariant:
{x == y}
Unwind the loops
{True}
x = y = 0;
{x = 0 ∧ y = 0}
x++; y++;
{x = y ∧ x ≤ 1}
x++; y++;
{x = y ∧ x ≤ 2}
[x!=0];
x--; y--;
{x = y ∧ x ≤ 1}
[x!=0];
x--; y--;
{x = 0 ⇒ y = 0}
[x == 0]
[y != 0]
{False}
Proof of the inline program contains invariants for both loops.
Assertions may diverge as we unwind
• A practical method must somehow
prevent this kind of divergence!
How can we find relevant proofs of program paths?
Interpolation Lemma
[Craig, 57]
• Let A and B be first-order formulas, using
– some non-logical symbols (predicates, functions, constants)
– the logical symbols ∧, ∨, ¬, ∃, ∀, (), ...
• If A ∧ B = false, there exists an interpolant A' for (A,B) such that:
– A ⇒ A'
– A' ∧ B = false
– A' uses only the common vocabulary of A and B
• Example: A = p ∧ q, B = ¬q ∧ r, A' = q
Interpolants as Floyd-Hoare proofs
{True}
x = y;      (x1 = y0)
{x = y}
y++;        (y1 = y0 + 1)
{y > x}
[x == y]    (x1 = y1)
{False}
1. Each formula implies the next
2. Each is over the common symbols of prefix and suffix
3. Begins with True, ends with False
Proving in-line programs
program → SSA sequence → Prover → proof → Interpolation → Hoare proof
Local proofs and interpolants
TRUE
x = y;      x1 = y0        deduce: x1 ≤ y0,  y0 ≤ x1
            interpolant: x1 ≤ y0
y++;        y1 = y0 + 1    deduce: y0 + 1 ≤ y1
            from x1 ≤ y0 and y0 + 1 ≤ y1, deduce: x1 + 1 ≤ y1
            interpolant: x1 + 1 ≤ y1
[y ≤ x]     y1 ≤ x1        from y1 ≤ x1 and x1 + 1 ≤ y1, deduce: 1 ≤ 0
FALSE
This is an example of a local proof...
Definition of local proof
Frames: {x1,y0} with premise x1 = y0, {x1,y0,y1} with premise y1 = y0 + 1,
{x1,y1} with premise y1 ≤ x1.
Deductions: x1 ≤ y0, y0 + 1 ≤ y1, x1 + 1 ≤ y1 — each is "in scope" of some frame.
Local proof: every deduction is written in the vocabulary of some frame.
vocabulary of frame = set of variables "in scope"
scope of a variable = range of frames it occurs in
Forward local proof
TRUE
{x1,y0}      x1 = y0        deduce: x1 ≤ y0
{x1,y0,y1}   y1 = y0 + 1    deduce: y0 + 1 ≤ y1, then x1 + 1 ≤ y1
             interpolant: x1 + 1 ≤ y1
{x1,y1}      y1 ≤ x1        deduce: 1 ≤ 0
FALSE
Forward local proof: each deduction can be assigned a frame such that all the
deduction arrows go forward. For a forward local proof, the (conjunction of)
assertions crossing a frame boundary is an interpolant.
Proofs and relevance
• By dropping unneeded inferences, we weaken the interpolant and
eliminate irrelevant predicates.
TRUE
{x1,y0}       x1 = y0 + 1
{x1,y0,z1}    z1 = x1 + 1     deduce: z1 = y0 + 2 (unneeded)
              x1 ≤ y0         deduce: 1 ≤ 0
FALSE
Dropping the unneeded deduction z1 = y0 + 2 eliminates the irrelevant
predicate z1 from the interpolant.
Interpolants are neither weakest pre nor strongest post.
Applying Occam's Razor
Simple proofs are more likely to generalize
• Define a (local) proof system
– Can contain whatever proof rules you want
• Define a cost metric for proofs
– For example, number of distinct predicates after dropping subscripts
• Exhaustive search for lowest cost proof
– May restrict to forward or reverse proofs
• Example proof rules:

φ    x = e          Γ unsat.
──────────          ────────
  φ[e/x]             FALSE

Allow simple arithmetic rewriting.
Even this trivial proof system allows useful flexibility.
Loop example
Premises: x0 = 0, y0 = 0, x1 = x0 + 1, y1 = y0 + 1, x2 = x1 + 1, y2 = y1 + 1, ...

Proof A (cost 2N):   TRUE, x0 = 0 ∧ y0 = 0, x1 = 1 ∧ y1 = 1, x2 = 2 ∧ y2 = 2, ...
Proof B (cost 2):    TRUE, x0 = y0, x1 = y1, x2 = y2, ...

Lowest cost proof is simpler, avoids divergence.
Interpolation
• Generalize from bounded proofs to unbounded proofs
• Weak bias
– Rich proof system (large space of proofs)
– Apply Occam's razor (simple proofs more likely to generalize)
• Occam's razor is applied to
– Avoid combinatorial explosion of deductions (relevance)
– Eventually generalize to inductive proofs (convergence)
• Apply theorem proving technology to search large space of possible
proofs for simple proofs
– DPLL, SMT solvers, etc.
Widening operators
• A widening operator is a function ∇ : L × L → L over a partially ordered L,
satisfying the following properties:
– Soundness: y ⊑ x ∇ y
– Expansion: x ⊑ x ∇ y
– Stability: for every ascending chain x0 ⊑ x1 ⊑ x2 ⊑ ⋯, the chain
  y0 = x0,  y(i+1) = y(i) ∇ x(i+1)
  eventually stabilizes.
Upward iteration sequence
• Suppose we have a monotone transformer τ : L → L representing (or
approximating) our concrete semantics.
• We apply the widening operator to successive iterations of τ to
obtain an upward iteration sequence:
  y0 = ⊥,  y(i+1) = y(i) ∇ τ(y(i))
  over-approximates the iterates of τ and is eventually stable!
• The widening properties guarantee
– over-approximation
– stabilization
• Narrowing is similar but contracting.
Widening as local deduction
• Since widening loses information, we can think of it as a deduction rule
• In fact, we may have several deduction rules at our disposal:

abstract post:  from φ deduce τ#(φ)        Sound if τ# is an over-approximation
join:           from φ deduce φ ⊔ τ#(φ)    Sound if τ# is an over-approximation
                                           and ⊔ is sound
widen:          from φ deduce φ ∇ τ#(φ)    Sound if τ# is an over-approximation
                                           and ∇ is sound

Note we don't need the expansion and stability properties
of ∇ to have a sound deduction rule.
Widening with octagons
{True}
x = y = 0;
{x=y, x,y ∈ [0,0]}
x++; y++;
{x=y, x,y ∈ [1,1]}    ⊔ gives {x=y, x,y ∈ [0,1]}
x++; y++;
{x=y, x,y ∈ [1,2]}    ∇ gives {x=y, x,y ∈ [0,∞]}
[x!=0];
x--; y--;
{x=y, x,y ∈ [0,∞]}
[x!=0];
x--; y--;
{x=y, x,y ∈ [0,∞]}    But note the irrelevant fact! Our proof rules are
                      too coarse to eliminate this fact.
[x == 0]
[y != 0]
{False}
Because we proved the property, we have computed an interpolant.
Over-widening (with intervals)
{True}
x = y = 0;
{x,y ∈ [0,0]}
x=1-x; y++;
{x,y ∈ [1,1]}
x=1-x; y++;
{x,y ∈ [0,∞]}    ∇ gives {x ∈ (−∞,∞), y ∈ [0,∞]}
[x==2];
{False}    — not provable after the widening
Note: if we had waited one step to widen, we would have a proof.
Safe widening
• Let us define a safe widening sequence as one that ends in a safe state.
Suppose we apply a sequence of rules and fail:
  ⊥ —⊔→ —⊔→ —∇→ —⊔→ —⊔→   (not safe)
We may postpone a widening to achieve a safety proof:
  ⊥ —⊔→ —⊔→ —⊔→ —∇→ —⊔→   (safe)
• This is a proof search problem!
– If we have k rules and n steps, there are k^n possible proofs
• Safe widening sequences may not stabilize
– May not contain a long enough sequence of ∇'s.
• Safe widening sequences may not exist
– That is, our proof system may be incomplete
Incompleteness
Widening/narrowing
• Incomplete proof system on purpose
• We restrict the proof system (strong bias) to enforce
– relevance focus
– convergence
• These properties are obtained at the risk of over-widening
Interpolation
• Incompleteness derives only from incompleteness of underlying logic
– For example, in Presburger arithmetic we have completeness
• Relevance focus and convergence rely on general heuristics
– Occam's razor (simple proofs tend to generalize)
– Rely on theorem proving techniques
– Choice of logic and axioms also represents a weak bias
Consequences of strong bias
• Widening requires domain knowledge, which entails a careful choice of
the logical language L.
– Octagons: easy
– Unions of octagons: harder
– Presburger arithmetic formulas: ???
• This entails incompleteness, as a restricted language implies loss of
information.
• This also means we can tailor the representation for efficiency.
– Octagons: use half-space representation, not convex hull of vertices
– Polyhedra: mixed representation
Weak bias can be used in cases where domain knowledge is lacking.
• Boolean logic (e.g., hardware verification)
– Language L is Boolean circuits over system state variables
– There is no obvious a priori widening for this language
– Interpolation techniques are the most effective known for this problem
• McMillan CAV 2003 (feasible interpolation using SAT solvers)
• Bradley VMCAI 2011 (interpolation by local proof)
– Note rapid convergence is very important here
• Infinite state cases requiring disjunctions
– Hard to formulate a widening a priori
– Weak bias can be used to avoid combinatorial explosion of disjuncts
• Example: IMPACT
• Scaling to large number of variables
– Weak bias can allow focus just on relevant variables
Simple example
for(i = 0; i < N; i++)
a[i] = i;
for(j = 0; j < N; j++)
assert a[j] == j;
invariant:
{∀x. 0 ≤ x ∧ x < i ⇒ a[x] = x}
Partial Axiomatization
• Axioms of the theory of arrays (with select and update)
∀A,I,V. select(update(A,I,V), I) = V
∀A,I,J,V. I ≠ J → select(update(A,I,V), J) = select(A,J)
• Axioms for arithmetic
∀X. X ≤ X
∀X,Y,Z. X ≤ Y ∧ Y ≤ Z → X ≤ Z
∀X,Y. Y ≤ X ∨ Y ≥ succ(X)    [integer axiom]
etc...
We use a (local) first-order superposition prover to generate
interpolants, with a simple metric for proof complexity.
Unwinding simple example
• Unwind the loops twice
i = 0;             i0 = 0
[i < N];           i0 < N
a[i] = i; i++;     a1 = update(a0,i0,i0), i1 = i0 + 1
[i < N];           i1 < N
a[i] = i; i++;     a2 = update(a1,i1,i1), i2 = i1 + 1
[i >= N]; j = 0;   i2 ≥ N ∧ j0 = 0
[j < N]; j++;      j0 < N ∧ j1 = j0 + 1
[j < N];           j1 < N
a[j] != j;         select(a2,j1) ≠ j1

Interpolants:
{i0 = 0}
{0 ≤ U ∧ U < i1 ⇒ select(a1,U) = U}    invariant
{0 ≤ U ∧ U < i2 ⇒ select(a2,U) = U}    invariant
{j0 ≤ U ∧ U < N ⇒ select(a2,U) = U}
{j1 ≤ U ∧ U < N ⇒ select(a2,U) = U}
Weak bias prevents constants diverging as 0, succ(0), succ(succ(0)), ...
With strong bias
• Something like the array segmentation functor of Cousot, Cousot, and Logozzo
i = 0;
[i < N];
a[i] = i; i++;
[i < N];
a[i] = i; i++;
[i >= N]; j = 0;
[j < N]; j++;
[j < N];
a[j] != j;
[Figure: abstract states in the array segmentation domain, starting from
{0,i} ⊤ … | i = 0, joined to {0} {i}? ⊤ … | i ∈ [0,1], then
{0} {i-1}? {i} ⊤ … | i ∈ [1,2], i ≤ N, and after widening
{0} {i}? ⊤ … | i ∈ [0,∞].]
Note: it so happened here that our first try at a widening was safe, but
this may not always be so.
Comparison
Widening/narrowing
• Language L, operators ⊔ and ∇ carefully chosen to throw away
information at just the right places
– This represents strong domain knowledge
• Carefully crafted representation yields high performance
Interpolation
• Axioms and proof bias are generic
– Little domain knowledge is represented
• Uses a generic theorem prover to generate local proofs
– No domain specific tuning
• Not as scalable as the strong bias approach
List deletion example
a = create_list();
while(a){
tmp = a->next;
free(a);
a = tmp;
}
• Invariant synthesized with 3 unwindings (after some simplification):
{rea(next,a,nil) ∧
∀x (rea(next,a,x) → x = nil ∨ alloc(x))}
• No need to craft a new specialized domain for linked lists.
• Weak bias can be used in cases where domain knowledge is lacking.
Are interpolants widenings?
• A safe widening sequence is an interpolant.
• An interpolant is not necessarily a widening sequence, however.
– Does not satisfy the expansion property
– Does not satisfy the eventual stability property as we increase the sequence
length.
• A consequence of giving up stabilization is that inductive invariants
(post-fixed points) are typically found in the middle of the sequence, not
at an eventual stabilization point.
– Early formulas tend to be too strong (influenced by initial condition)
– Late formulas tend to be too weak (influenced by final condition)
Typical interpolant sequence
{True}
x = y = 0;
{x = 0 ∧ y = 0}       Too strong
x++; y++;
{x = y ∧ x ≥ 1}       Weakened, but not expansive
x++; y++;
{x = y}
[x!=0];
x--; y--;
{x = y}               Does not stabilize at invariant
[x!=0];
x--; y--;
{x = 0 ⇒ y = 0}       Too weak
[x == 0]
[y != 0]
{False}
No matter how far we unwind, we may not get stabilization.
Conclusion
• Widening/narrowing and interpolation are methods of generalizing from
bounded to unbounded proofs
• Formally, widening/narrowing satisfies stronger conditions:
  widening/narrowing: soundness, expanding/contracting, stabilizing
  interpolation:      soundness
(stabilization is not obtained when proving properties, however)
Conclusion, cont.
• Heuristically, the difference is weak vs. strong bias:
  strong bias: restricted proof system, incompleteness, smaller search space,
               domain knowledge, efficient representations
  weak bias:   rich proof system, completeness, large search space,
               Occam's razor, generic representations
• Can we combine strong and weak heuristics?
– Fall back on weak heuristics when strong fails
– Use weak heuristics to handle combinatorial complexity
– Build known widenings into theory solvers in SMT?
```
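The sum rule and the forward local proof for the `x = y; y++; [y <= x]` path from the slides can be sketched in code. This is a toy encoding of linear inequalities as coefficient maps, an assumption of this sketch, not the representation any prover in the talk actually uses:

```python
# Sum rule from the slides: x1 <= y1, x2 <= y2  |-  x1 + x2 <= y1 + y2.
# An inequality sum(coeffs[v] * v) <= const is a (coeffs, const) pair.

def add_ineq(a, b):
    """Sum rule: adding two valid inequalities yields a valid inequality."""
    coeffs = dict(a[0])
    for v, c in b[0].items():
        coeffs[v] = coeffs.get(v, 0) + c
        if coeffs[v] == 0:
            del coeffs[v]
    return coeffs, a[1] + b[1]

def is_contradiction(ineq):
    """0 <= k with k < 0 refutes the premises."""
    coeffs, const = ineq
    return not coeffs and const < 0

# Premises of the unwound path in SSA form:
p1 = ({'x1': 1, 'y0': -1}, 0)   # x1 <= y0       (half of x1 = y0)
p2 = ({'y0': 1, 'y1': -1}, -1)  # y0 + 1 <= y1   (half of y1 = y0 + 1)
p3 = ({'y1': 1, 'x1': -1}, 0)   # y1 <= x1       (the guard)

# Forward local proof: combine p1 and p2 first; the intermediate fact
# x1 + 1 <= y1 is exactly the interpolant at the frame boundary.
interp = add_ineq(p1, p2)
final = add_ineq(interp, p3)
print(interp)                   # ({'x1': 1, 'y1': -1}, -1), i.e. x1 + 1 <= y1
print(is_contradiction(final))  # True: summing all three gives 0 <= -1
```

The deduction order matters for locality: combining p1 with p2 before p3 keeps every intermediate fact inside the vocabulary of some frame.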
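Craig's three interpolant conditions can be sanity-checked by brute force for the running example A = (x1 = y0 ∧ y1 = y0 + 1), B = (y1 ≤ x1), A' = (x1 + 1 ≤ y1). The finite range below is only an illustration; the actual claim is over all integers:

```python
# Brute-force check of the interpolant conditions on a small sample range.
R = range(-5, 6)

def A(x1, y0, y1):  return x1 == y0 and y1 == y0 + 1
def B(x1, y1):      return y1 <= x1
def A_itp(x1, y1):  return x1 + 1 <= y1   # candidate interpolant

# 1. A implies A'
assert all(A_itp(x1, y1) for x1 in R for y0 in R for y1 in R if A(x1, y0, y1))
# 2. A' and B are jointly unsatisfiable
assert not any(A_itp(x1, y1) and B(x1, y1) for x1 in R for y1 in R)
# 3. A' mentions only x1 and y1 -- the symbols common to A and B
#    (true by construction here).
print("interpolant conditions hold on the sample range")
```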
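The widening-operator definition can be made concrete with the standard interval widening, which sends unstable bounds to ±∞. A minimal sketch, assuming the simple counter loop `x = 0; while(*) x++;`:

```python
INF = float('inf')

def join(a, b):
    """Least upper bound of two intervals."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    """a widen b: keep a's bounds where they still hold, else go to infinity."""
    lo = a[0] if a[0] <= b[0] else -INF
    hi = a[1] if a[1] >= b[1] else INF
    return (lo, hi)

def leq(a, b):
    """Interval inclusion (the partial order on this domain)."""
    return b[0] <= a[0] and a[1] <= b[1]

# Transformer for `x = 0; while(*) x++;`: tau(X) = [0,0] join (X + 1).
def tau(x):
    return join((0, 0), (x[0] + 1, x[1] + 1))

# Upward iteration sequence: y0 = [0,0], y(i+1) = y(i) widen tau(y(i)).
x = (0, 0)
while True:
    nxt = widen(x, tau(x))
    if nxt == x:   # stabilized
        break
    x = nxt
print(x)  # (0, inf): x in [0, oo], a sound inductive over-approximation
```

Both widening properties are visible here: the result contains both arguments (soundness and expansion), and the sequence stabilizes after one widening step instead of climbing 0, 1, 2, ... forever.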
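The safe-widening-as-proof-search idea — postponing a widening until the stable result still proves safety — can be illustrated on the over-widening example `x = y = 0; loop { x = 1-x; y++ }; [x == 2]`. This brute-force search over widening start points is purely illustrative (only the interval for x is tracked):

```python
INF = float('inf')

def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    lo = a[0] if a[0] <= b[0] else -INF
    hi = a[1] if a[1] >= b[1] else INF
    return (lo, hi)

def post(x):
    """Abstract post of `x = 1 - x` on an interval."""
    return (1 - x[1], 1 - x[0])

def iterate(widen_from, max_steps=20):
    """Iterate, using join up to step `widen_from`, then widen."""
    x = (0, 0)
    for step in range(max_steps):
        combine = widen if step >= widen_from else join
        nxt = combine(x, join(x, post(x)))
        if nxt == x:   # stabilized
            return x
        x = nxt
    return x

# Widening immediately loses everything; waiting one step keeps x in [0,1],
# which excludes x == 2 and so proves the assertion.
for k in range(4):
    result = iterate(k)
    safe = not (result[0] <= 2 <= result[1])
    print(k, result, "safe" if safe else "unsafe")
```

This matches the slide's remark: the schedule that widens at step 0 fails, while every schedule that postpones the widening by one step yields a safety proof.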
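The select/update axioms and the synthesized array invariant can be sanity-checked by modeling arrays as immutable dicts. The loop below mirrors the simple array example; `N = 2` matches the two unwindings, and the helper names are illustrative:

```python
def update(a, i, v):
    """Functional array update: returns a new array."""
    b = dict(a)
    b[i] = v
    return b

def select(a, i):
    return a.get(i)

def invariant(a, i):
    """forall U. 0 <= U < i  =>  select(a, U) = U"""
    return all(select(a, u) == u for u in range(0, i))

N = 2
a, i = {}, 0
assert invariant(a, i)          # holds initially (vacuously)
while i < N:                    # first loop: a[i] = i; i++
    a = update(a, i, i)
    i += 1
    assert invariant(a, i)      # preserved by each iteration: inductive
j = 0
while j < N:                    # second loop: assert a[j] == j; j++
    assert select(a, j) == j    # follows from the invariant once i >= N
    j += 1
print("invariant holds through both loops")
```

Note how the two array axioms show up: `select(update(a, i, i), i) == i` establishes the new cell, and `i != u` preserves the old cells, which is exactly why the interpolant-derived invariant is inductive.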