Chaotic Iteration in Abstract Interpretation: How to compute the fixpoint?

Recall from Abstract Interpretation Recipe that if the set of program points is $p_1,\ldots,p_n$, then we are solving a system of equations in the $n$ variables $g_1,\ldots,g_n$:

    g_1 = H_1(g_1,\ldots,g_n) \\
    \ldots \\
    g_i = H_i(g_1,\ldots,g_n) \\
    \ldots \\
    g_n = H_n(g_1,\ldots,g_n) \\


where

    H_i(g_1,\ldots,g_n) = g_i \sqcup \bigsqcup_{(p_j,p_i) \in E} sp^{\#}(g_j, r(p_j,p_i))

The approach given in Abstract Interpretation Recipe computes in iteration $k+1$ the values $g^{k+1}_i$ by applying all equations in parallel to the previous values:

    g^{k+1}_1 = H_1(g^k_1,\ldots,g^k_n) \\
    \ldots \\
    g^{k+1}_i = H_i(g^k_1,\ldots,g^k_n) \qquad \mbox{(parallel iteration)} \\
    \ldots \\
    g^{k+1}_n = H_n(g^k_1,\ldots,g^k_n)
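As a concrete sketch of parallel iteration (the 3-node CFG, the powerset lattice, and the `gen`-based transformer below are all illustrative assumptions, not part of the notes):

```python
# Hypothetical 3-node CFG with a loop (edge 3 -> 2); the lattice is the
# powerset of facts ordered by inclusion, join is union, and sp# is a
# stand-in transformer that adds the facts generated on an edge.
edges = {(1, 2), (2, 3), (3, 2)}                 # (p_j, p_i) pairs
gen = {(1, 2): {"a"}, (2, 3): {"b"}, (3, 2): {"c"}}

def sp_sharp(g_j, edge):
    return g_j | gen[edge]                       # monotone in g_j

def H(i, g):
    # H_i(g_1,...,g_n) = g_i joined with sp#(g_j, r(p_j,p_i)) over edges into p_i
    new = set(g[i])
    for (j, k) in edges:
        if k == i:
            new |= sp_sharp(g[j], (j, i))
    return new

def parallel_iteration(g):
    # apply ALL equations to the previous vector of values, in parallel
    while True:
        new_g = {i: H(i, g) for i in g}
        if new_g == g:                           # nothing changed: fixpoint
            return g
        g = new_g

print(parallel_iteration({1: set(), 2: set(), 3: set()}))
```

Because each $H_i$ is monotone and includes $g_i$, the values only grow, so on this finite lattice the loop terminates.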

What happens if we update the values one by one? Say in one iteration we update the $i$-th value, keeping the rest the same:

    g^{k+1}_i = H_i(g^k_1,\ldots,g^k_n) \qquad \mbox{(chaotic iteration)} \\
    g^{k+1}_j = g^k_j, \mbox{ for } j \neq i

Here we require that the new value $H_i(g^k_1,\ldots,g^k_n)$ differs from the old one $g^k_i$. An iteration where at each step we select some equation $i$ (arbitrarily) is called chaotic iteration. It is an abstract representation of different concrete iteration strategies.
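The same kind of equation system under chaotic iteration might be sketched as follows (the CFG and transformer are the same illustrative assumptions as before):

```python
# Hypothetical equation system; chaotic iteration updates one component
# per step. Because each g_i only grows (H_i includes g_i) and the
# lattice is finite, any selection order terminates.
edges = {(1, 2), (2, 3), (3, 2)}
gen = {(1, 2): {"a"}, (2, 3): {"b"}, (3, 2): {"c"}}

def H(i, g):
    new = set(g[i])
    for (j, k) in edges:
        if k == i:
            new |= g[j] | gen[(j, i)]            # sp#(g_j, r(p_j,p_i))
    return new

def chaotic_iteration(g):
    steps = 0
    while True:
        # pick any i whose new value differs from the old one
        changed = [i for i in g if H(i, g) != g[i]]
        if not changed:
            return g, steps                      # fixpoint reached
        i = changed[0]                           # arbitrary choice of equation
        g = {**g, i: H(i, g)}
        steps += 1

fixpoint, steps = chaotic_iteration({1: set(), 2: set(), 3: set()})
print(fixpoint, steps)
```

In this run the chaotic result equals what parallel iteration computes, previewing the convergence questions below.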


  1. What is the cost of one chaotic versus one parallel iteration step?
  2. Does chaotic iteration converge if parallel iteration converges?
  3. If it converges, does it converge to the same value?
  4. If it converges, how many steps does convergence take?
  5. What is a good way of choosing the index $i$ (the iteration strategy)? For example: apply the equations following some fixed permutation.

Let $I,L_1,L_2,\ldots,L_n,\ldots$ be the vectors of values $(g^k_1,\ldots,g^k_n)$ in parallel iteration, and

let $I,C_1,C_2,\ldots,C_n,\ldots$ be the vectors of values $(g^k_1,\ldots,g^k_n)$ in chaotic iteration

(starting from the same initial lattice value $I$).

Compare the values $I$, $L_1$, $C_1$, $L_n$, $C_n$ in the lattice:

  • in general
  • when selecting equations by fixed permutation

Worklist Algorithm and Iteration Strategies

Observation: in practice $H_i(g_1,\ldots,g_n)$ depends on only a small number of the $g_j$, namely those for the predecessors of node $p_i$

Consequence: after we update $g_i$, next time it suffices to look at the successors of $p_i$ (this saves repeatedly traversing the entire CFG)

This leads to a worklist algorithm:

  • initialize the lattice values and put all equations into the worklist
  • choose $i$ from the worklist, compute the new $g_i$, remove $i$ from the worklist
  • if $g_i$ has changed, update it and add to the worklist each $j$ such that $p_j$ is a successor of $p_i$

The algorithm terminates when the worklist is empty (no more changes)
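A minimal sketch of this worklist loop, on the same kind of made-up CFG (the `preds`/`succ` maps and `gen`-based transformer are illustrative assumptions):

```python
# Worklist algorithm on a hypothetical 3-node CFG with a loop.
preds = {2: [1, 3], 3: [2]}                      # predecessors of each node
succ = {1: [2], 2: [3], 3: [2]}                  # successors of each node
gen = {(1, 2): {"a"}, (2, 3): {"b"}, (3, 2): {"c"}}

def worklist_algorithm(nodes):
    g = {i: set() for i in nodes}                # initialize lattice values
    work = set(nodes)                            # all equations start pending
    while work:
        i = work.pop()                           # choose i, remove from worklist
        new = set(g[i])
        for j in preds.get(i, []):
            new |= g[j] | gen[(j, i)]            # sp#(g_j, r(p_j,p_i))
        if new != g[i]:                          # g_i changed:
            g[i] = new
            work |= set(succ.get(i, []))         # re-examine successors of p_i
    return g                                     # empty worklist => fixpoint

print(worklist_algorithm([1, 2, 3]))
```

`set.pop` removes an arbitrary element, so this leaves the selection strategy open; a real implementation would order the worklist (e.g. by reverse postorder).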

Useful iteration strategy: reverse postorder and strongly connected components

Reverse postorder: visit each node before its successors (up to back edges), so changes are followed forward through the graph
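Reverse postorder can be computed with a single depth-first search (a sketch; the example graph is made up):

```python
# A DFS records each node after all of its successors have been explored
# (postorder); reversing that list yields an order in which a node comes
# before its successors, except along back edges.
def reverse_postorder(succ, entry):
    seen, post = set(), []
    def dfs(u):
        seen.add(u)
        for v in succ.get(u, []):
            if v not in seen:
                dfs(v)
        post.append(u)                           # postorder position of u
    dfs(entry)
    return list(reversed(post))

succ = {1: [2], 2: [3, 4], 3: [2], 4: []}        # small CFG with loop 2 <-> 3
print(reverse_postorder(succ, 1))                # the entry node comes first
```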

Strongly connected component (SCC) of a directed graph: a maximal set of nodes with a path between every two nodes of the component.

  • compute until fixpoint within each SCC
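Computing the SCCs themselves can be sketched with Kosaraju's two-pass DFS (illustrative; Tarjan's single-pass algorithm is a common alternative):

```python
# Kosaraju's algorithm: DFS on G to get a finish order, then DFS on the
# transposed graph in reverse finish order; each second-pass tree is an SCC.
def sccs(succ, nodes):
    order, seen = [], set()
    def dfs(u):
        seen.add(u)
        for v in succ.get(u, []):
            if v not in seen:
                dfs(v)
        order.append(u)                          # record at finish time
    for u in nodes:
        if u not in seen:
            dfs(u)
    rsucc = {}                                   # transposed graph
    for u in nodes:
        for v in succ.get(u, []):
            rsucc.setdefault(v, []).append(u)
    comps, seen2 = [], set()
    def dfs_r(u, acc):
        seen2.add(u)
        acc.append(u)
        for v in rsucc.get(u, []):
            if v not in seen2:
                dfs_r(v, acc)
    for u in reversed(order):
        if u not in seen2:
            acc = []
            dfs_r(u, acc)
            comps.append(sorted(acc))
    return comps

succ = {1: [2], 2: [3], 3: [2, 4], 4: []}        # nodes 2 and 3 form a loop
print(sccs(succ, [1, 2, 3, 4]))
```

Here iterating to fixpoint within the component {2, 3} before moving on to node 4 is exactly the per-SCC strategy above.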

If we generate control-flow graph from our simple language, what do strongly connected components correspond to?


Further reading

  • Principles of Program Analysis, Chapter 6