Lecture 4

The structure of this lecture is similar to the previous one:

Today, we extend the approach from last lecture (both generating and proving formulas) to support data structures such as references and arrays.

We use weakest preconditions, although you could also use strongest postconditions or any other variants of the conversion from programs to formulas.

More on wp

wp of conjunction and disjunction

  • wp(Q1 ∧ Q2,s) = wp(Q1,s) ∧ wp(Q2,s) holds in all cases
  • wp(Q1 ∨ Q2,s) = wp(Q1,s) ∨ wp(Q2,s) does not hold when nondeterministic commands are used
  • wp(Q1,s) ∨ wp(Q2,s) ⇒ wp(Q1 ∨ Q2,s) holds in all cases

The proof of these statements is part of Homework 1.
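These properties can be observed on a tiny relational model. The sketch below (our own encoding, not from the notes) represents states as integers, a command as a set of (pre, post) pairs, and a postcondition as a set of states; the argument order of `wp` follows the notes, formula first:

```python
# Minimal relational model: a command c is a set of (pre, post) state pairs,
# a postcondition Q is a set of states.

def wp(Q, c, states):
    """wp(Q, c): states from which every c-successor satisfies Q."""
    return {s for s in states
            if all(t in Q for (u, t) in c if u == s)}

states = {0, 1, 2}
c = {(0, 1), (0, 2)}          # nondeterministic: from state 0, go to 1 or 2
Q1, Q2 = {1}, {2}

# Conjunction distributes over wp:
assert wp(Q1 & Q2, c, states) == wp(Q1, c, states) & wp(Q2, c, states)

# Disjunction: only one direction holds. From 0 every successor satisfies
# Q1 ∨ Q2, but neither Q1 alone nor Q2 alone.
assert wp(Q1, c, states) | wp(Q2, c, states) <= wp(Q1 | Q2, c, states)
assert wp(Q1 | Q2, c, states) != wp(Q1, c, states) | wp(Q2, c, states)
```

The nondeterministic command `c` is exactly the kind of counterexample that breaks the disjunction equality.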

wp of havoc

Can we derive wp(Q,havoc(x))?

  • By considering havoc(x) as (… [] x=-1 [] x=0 [] x=1 [] …):
wp(Q,havoc(x)) = wp(Q, … [] x=-1 [] x=0 [] x=1 [] …)
               = ∧_{k∈Z} wp(Q, x=k)
               = ∧_{k∈Z} Q[x:=k]
               = ∀x. Q
  • From relational definition

Recall that, by relational definition:

havoc(x) = {(s,t) | ∀y. y ≠ x → t(y) = s(y)}

This is the relation that links all pairs of states in which all variables but x remain unchanged. Intuitively, it makes sense that proving Q holds after the havoc(x) relation is the same as proving Q for all values of x.

wp(Q,havoc(x)) = {(x1,y1) | ∀(x2,y2). ((x1,y1),(x2,y2)) ∈ havoc(x) → (x2,y2) ∈ Q}
               = {(x1,y1) | ∀(x2,y2). y1 = y2 → (x2,y2) ∈ Q}
               = {(x1,y1) | ∀x2. (x2,y1) ∈ Q}
               = ∀x. Q

Note that instead of using states s1 and s2, pairs (x1,y1) and (x2,y2) are used: the x-component is the value of x, and the y-component stands for the values of all other (unchanged) variables.

  • By wp semantics of havoc and assume
wp(Q, havoc(x); assume(x=t)) = wp(wp(Q,assume(x=t)),havoc(x))
                             = wp(x=t => Q,havoc(x))
                             = ∀x. x=t => Q
                             = Q[x:=t]

Desugaring loops with loop invariants

while [inv I] (F) c1

can be desugared into:

assert I;
havoc(x1,...,xn);    (those variables that can be modified in the loop)
assume I;
assume(~F)

provided that I is inductive. We can check that I is inductive by proving:

{I ∧ F} c1 {I}

in isolation. Or proving

assume(I ∧ F); c1; assert I

at any point. If we combine these two, we get:

assert I;
havoc(x1,...,xn);
assume I;
 (assume(~F) []
  (assume(F); c1; assert I; assume(false)))

Benefit: if there is a variable x_{n+1} that is not changed in the loop, we do not need to state its properties in the loop invariant. This can make the loop invariant shorter.


Modeling data structures

Data structures are seen as mathematical functions. For instance, the following Java declaration:

int[] a = new int[100];

can be seen as:

a: {0,…,99} → Z

or more generally:

a: Z → Z

With the help of this mathematical function, we can easily define the read access:

a[i]

is defined as a(i)

and the write access:

a[i] = v

is defined as a[i→v], where a[i→v] is a new function, replacing the previous mapping i→a(i) by i→v

For instance the array:

[3, 2, 1]
is written in Java with

int[] a = new int[3];
a[0] = 3;
a[1] = 2;
a[2] = 1;

and is represented by the function a[0→3][1→2][2→1]. Therefore, the last value of the array is expressed as a[0→3][1→2][2→1](2) = 1.
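The function-update model can be executed directly. In the sketch below (our own encoding; the name `update` is not from the notes), an array is literally a function, and a[i→v] is a new function that differs from a only at i:

```python
# An array as a mathematical function, with functional update a[i -> v].

def update(f, i, v):
    """Return the function f[i -> v]: like f, except it maps i to v."""
    return lambda j: v if j == i else f(j)

zero = lambda j: 0                                    # initially all cells 0
a = update(update(update(zero, 0, 3), 1, 2), 2, 1)    # a[0->3][1->2][2->1]

print(a(2))   # reading a[2] -> 1
```

Note that the original function `zero` is unchanged; each write produces a new function, exactly as in the a[i→v] notation.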

Semantics of global arrays

If then else expressions

The value w = a[i→v](j) can be written as a conditional expression. w =

  • v if i=j
  • a(j) if i≠j

We introduce a new syntactical form for such expressions:

w = IF(i=j,v,a(j))

Desugaring if-then-else

IF expression can be written in propositional logic as:

(i=j ∧ w=v) ∨ (i≠j ∧ w=a(j))

or, using implications:

(i=j → w=v) ∧ (i≠j → w=a(j))
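That the conditional expression and both desugarings agree can be checked by brute force over a small domain. The harness below is our own; `a` stands for an arbitrary fixed array function:

```python
# Brute-force check that w = IF(i=j, v, a(j)) agrees with both the
# disjunctive and the implicational desugaring, over a small domain.
from itertools import product

def a(j):                      # an arbitrary fixed function for the array
    return (j + 1) % 3

for i, j, v, w in product(range(3), repeat=4):
    ite      = (w == (v if i == j else a(j)))
    dnf      = (i == j and w == v) or (i != j and w == a(j))
    implform = ((not i == j) or w == v) and ((not i != j) or w == a(j))
    assert ite == dnf == implform
```

The two desugarings are propositionally equivalent because (p ∧ q) ∨ (¬p ∧ r) and (p → q) ∧ (¬p → r) agree for every truth value of p.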

Avoiding exponential explosion using flattening

Desugaring if-then-else expressions introduces a disjunction of two conjunctions. If conditional expressions are nested, the number of conjunctions and disjunctions explodes exponentially.

To avoid this explosion of terms, one may introduce additional variables. For instance, the expression:

x < 3(y+5)

can be flattened as

y1 = y + 5
∧ y2 = 3y1
∧ x < y2

This gives a new grammar for atomic formulas:

A := R(v,…,v) | v = f(v,…,v) | v = c
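Flattening is easy to mechanize: name every subterm with a fresh variable and emit one atomic equation per operator. The sketch below is our own; the term representation and the names y1, y2 are illustrative choices:

```python
# A small flattener turning a nested arithmetic term into atomic formulas
# of the form  v = f(v, ..., v)  by naming each subterm.

counter = 0
def fresh():
    global counter
    counter += 1
    return f"y{counter}"

def flatten(term, out):
    """term is a variable/constant (str/int) or a tuple (op, left, right).
    Returns the variable naming the term; appends equations to `out`."""
    if not isinstance(term, tuple):
        return str(term)
    op, left, right = term
    vl, vr = flatten(left, out), flatten(right, out)
    v = fresh()
    out.append(f"{v} = {vl} {op} {vr}")
    return v

# Flatten the right-hand side of  x < 3*(y+5):
eqs = []
top = flatten(("*", 3, ("+", "y", 5)), eqs)
eqs.append(f"x < {top}")
print(eqs)   # ['y1 = y + 5', 'y2 = 3 * y1', 'x < y2']
```

Each nested operator contributes exactly one atomic formula, so the flattened formula grows linearly in the size of the original term.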

Array bounds checking

To check whether an index is in acceptable bounds, it suffices to place an assertion on the index before accessing the array:

assert(0 ≤ t1 ∧ t1 < K);
a = a[t1→t2];

The value of K (the array size) is known for global arrays (statically declared). The case of dynamically allocated arrays (like those in Java) will be dealt with in a later section.

Semantics of references

Objects as references, null as an object.

Program with class declaration

class Node {
   Node left, right;
}

How can we represent fields?

Possible mathematical model: fields as functions from objects to objects.

left : Node -> Node
right : Node -> Node

What is the meaning of assignment?

x.f = y

The assignment updates f to the function f[x→y]:

f[x→y](z) =
  • y, if z = x
  • f(z), if z ≠ x
left, right - uninterpreted functions (can have any value, depending on the program, unlike arithmetic functions such as +,-,* that have fixed interpretation).

References can point to unallocated (thus invalid) objects. We introduce a special value “null” to represent such references. This allows us to write formulas that can check if dereferencing is save:

x.left = y

can be interpreted as

  • assert(x ≠ null);
  • left = left[x→y]

We first check if the object x is not null, then extend the left function with a mapping from x to y.
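This translation can be modeled in the same way as arrays, with the field as a finite function. The sketch below is our own; `null` and the object names are placeholders:

```python
# Fields as functions: `left` maps objects to objects, and x.left = y
# becomes a guarded functional update.

null = "null"

def assign_left(left, x, y):
    assert x != null, "null dereference"    # models: assert(x != null)
    new = dict(left)                        # models: left = left[x -> y]
    new[x] = y
    return new

left = {"n1": null, "n2": null}
left = assign_left(left, "n1", "n2")
print(left["n1"])   # n2
```

Calling `assign_left(left, null, "n2")` would fail the assertion, mirroring the verification condition generated for an unsafe dereference.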

Modeling dynamic allocation of objects

In our model, we consider that all objects already exist when the program starts. Allocation is modeled with a set 'alloc', which is a subset of the set of all objects ('object'):

alloc ⊆ object

Allocating an object on the heap is thus modeled by adding it to the alloc set:

x = new Node()

is translated to

assume(x ∉ alloc);
alloc = alloc ∪ {x}
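A minimal executable model of this translation, assuming a fixed universe of objects (the names `objects`, `o1`, etc. are our own placeholders):

```python
# Dynamic allocation over a fixed universe of objects; `alloc` is the
# set of currently allocated ones.

objects = {"o1", "o2", "o3"}
alloc = {"o1"}

def new(alloc):
    """x = new Node(): models havoc(x); assume(x not in alloc); alloc = alloc + {x}."""
    x = next(o for o in objects if o not in alloc)
    return x, alloc | {x}

x, alloc = new(alloc)
print(x != "o1" and x in alloc)   # True
```

The `next(...)` picks an arbitrary unallocated object, which is the deterministic stand-in for havoc followed by the assume.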

Dynamically allocated arrays

When we allow dynamically allocated arrays, we introduce a new global function array which maps a pair (arrayID, index) to values.

x[i] = v;

is translated to

assert(0 ≤ i ∧ i < arraysize(x));
array = array[(x,i) → v]

Proving formulas with uninterpreted functions

Congruence closure algorithm

The congruence closure algorithm can be used to prove the validity of quantifier-free formulas with equality and uninterpreted function symbols, by computing the congruence closure of the equalities occurring in the formula.

Recall the defining properties of an equivalence relation:

  1. x = x (everything is equal to itself) (reflexivity)
  2. x = y → y = x (symmetry)
  3. x = y ∧ y = z → x = z (transitivity)

A congruence is an equivalence relation with the additional property:

  • (x1 = x2 ∧ y1 = y2) → f(x1, y1) = f(x2, y2)
For example, a ≡ b (mod n) is a congruence with respect to addition. Indeed:

a ≡ b (mod n) ∧ c ≡ d (mod n) → a + c ≡ b + d (mod n)

Equality is a congruence with respect to all function symbols.

Consider the following example:

x = left(y) ∧ y = z
⇒ left(left(z)) = left(x)

We start by negating the formula; proving the original formula valid amounts to showing that the negation is unsatisfiable:

(x = left(y) ∧ y = z)
 ∧ (left(left(z)) ≠ left(x))

We then create a graph with one node for each expression:

  • x
  • y
  • z
  • left(x)
  • left(y)
  • left(z)
  • left(left(z))

We know that y and z are equal, so we merge their nodes and label the new one (y, z). Since y and z are equivalent, so are left(y) and left(z), so we merge these nodes as well; together with x = left(y), this puts x, left(y), and left(z) into one class. We continue until no more nodes can be merged. At that point left(x) and left(left(z)) lie in the same class, contradicting the disequality left(left(z)) ≠ left(x); hence the negated formula is unsatisfiable and the original implication is valid.
