LARA

sav08:chaotic_iteration_in_abstract_interpretation [2008/05/20 13:32] vkuncak
\begin{array}{ll}
    g^{k+1}_i = H_i(g^k_1,\ldots,g^k_n) & \mbox{\bf chaotic iteration} \\
    g^{k+1}_j = g^k_j, \mbox{ for } j \neq i
\end{array}
\]
Here we require that the new value $H_i(g^k_1,\ldots,g^k_n)$ differs from the old one $g^k_i$; otherwise we select a different $i$.  An iteration where at each step we select some equation $i$ (arbitrarily) is called //chaotic iteration//.  It is an abstract representation of different iteration strategies.
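The two schemes can be contrasted on a small example. Below is a minimal Python sketch, assuming a hypothetical three-equation monotone system over the finite chain $\{0,1,2,3\}$ with join given by `max`; since each $H_i$ is monotone, both schemes reach the same least fixpoint.

```python
# Hypothetical monotone system over the finite chain {0,1,2,3} (join = max):
#   g1 = 1,  g2 = g1 join g3,  g3 = min(g2 + 1, 3)
H = [
    lambda g: 1,
    lambda g: max(g[0], g[2]),
    lambda g: min(g[1] + 1, 3),
]

def parallel_iterate(H, init):
    g = list(init)
    while True:
        new = [H[i](g) for i in range(len(H))]  # update all components at once
        if new == g:
            return g
        g = new

def chaotic_iterate(H, init):
    g = list(init)
    changed = True
    while changed:              # a round-robin instance of chaotic iteration
        changed = False
        for i in range(len(H)):
            v = H[i](g)
            if v != g[i]:       # update a single component per step
                g[i] = v
                changed = True
    return g

print(parallel_iterate(H, [0, 0, 0]))  # [1, 3, 3]
print(chaotic_iterate(H, [0, 0, 0]))   # [1, 3, 3]
```

Note that chaotic iteration touches one component per step while parallel iteration recomputes all $n$, which is exactly the cost difference asked about below.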
    
Questions:
  * What is the cost of doing one chaotic versus one parallel iteration? ++|chaotic is $n$ times cheaper++
  * Does chaotic iteration converge if parallel iteration converges?
  * If it converges, will it converge to the same value?
  * If it converges, how many steps will convergence take?
  * What is a good way of choosing the index $i$ (the iteration strategy)? Example: take some permutation of the equations
  
Let $I,L_1,L_2,\ldots,L_n,\ldots$ be the vectors of values $(g^k_1,\ldots,g^k_n)$ in parallel iteration and
  
Compare the values $I$, $L_1$, $C_1$, $L_n$, $C_n$ in the lattice
  * in general ++|$C_1 \sqsubseteq L_1$, generally $C_i \sqsubseteq L_i$ ++
  * when selecting equations by a fixed permutation ++|$L_1 \sqsubseteq C_n$, generally $L_i \sqsubseteq C_{ni}$ ++

==== Worklist Algorithm and Iteration Strategies ====

Observation: in practice $H_i(g_1,\ldots,g_n)$ depends only on a small number of the $g_j$, namely those for the predecessors of node $p_i$.

Consequence: if we update $g_i$, then next time it suffices to look at the successors of $i$ (this saves traversing the whole CFG).

This leads to a worklist algorithm:
  * initialize the lattice, put all equations into the worklist
  * choose $i$, compute the new $g_i$, remove $i$ from the worklist
  * if $g_i$ has changed, update it and add to the worklist every $j$ such that $p_j$ is a successor of $p_i$

The algorithm terminates when the worklist is empty (no more changes are pending).
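The steps above can be sketched in Python. The successor lists `succ` and the example system `H` are hypothetical illustrations, not part of the original notes.

```python
from collections import deque

def worklist_solve(H, succ, init):
    g = list(init)
    worklist = deque(range(len(H)))     # start with all equations
    on_list = [True] * len(H)
    while worklist:                     # empty worklist = no changes pending
        i = worklist.popleft()
        on_list[i] = False
        new = H[i](g)
        if new != g[i]:
            g[i] = new                  # g_i changed: re-examine successors of p_i
            for j in succ[i]:
                if not on_list[j]:
                    worklist.append(j)
                    on_list[j] = True
    return g

# Hypothetical system: g1 = 1, g2 = max(g1, g3), g3 = min(g2 + 1, 3);
# node 0 feeds node 1, and nodes 1 and 2 feed each other.
H = [lambda g: 1, lambda g: max(g[0], g[2]), lambda g: min(g[1] + 1, 3)]
succ = [[1], [2], [1]]
print(worklist_solve(H, succ, [0, 0, 0]))  # [1, 3, 3]
```

The `on_list` flags keep each equation on the worklist at most once, so an equation is re-examined only when one of its predecessors has actually changed.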

A useful iteration strategy combines reverse postorder and strongly connected components.

Reverse postorder: follow changes through successors in the graph.
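Reverse postorder is computed by a single depth-first traversal; a sketch, assuming the graph is given as successor lists (a hypothetical encoding):

```python
def reverse_postorder(succ, entry):
    visited, order = set(), []
    def dfs(u):
        visited.add(u)
        for v in succ[u]:
            if v not in visited:
                dfs(v)
        order.append(u)     # postorder: emit u after all its successors
    dfs(entry)
    return order[::-1]      # reversed: tends to visit a node before its successors,
                            # so changes propagate forward through the CFG

# Diamond CFG: 0 -> {1, 2}, 1 -> 3, 2 -> 3
print(reverse_postorder([[1, 2], [3], [3], []], 0))  # [0, 2, 1, 3]
```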

A strongly connected component (SCC) of a directed graph is a maximal set of nodes with a directed path between every two nodes of the component.
  * compute until fixpoint within each SCC
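The SCC strategy can also be sketched: compute the components in topological order of the condensation (Kosaraju's two-pass DFS below), then iterate each component to a local fixpoint before moving on. The example system is again hypothetical.

```python
def sccs_in_topological_order(succ):
    n = len(succ)
    order, seen = [], [False] * n
    def dfs1(u):                        # first pass: record finish order on G
        seen[u] = True
        for v in succ[u]:
            if not seen[v]:
                dfs1(v)
        order.append(u)
    for u in range(n):
        if not seen[u]:
            dfs1(u)
    pred = [[] for _ in range(n)]       # transpose graph
    for u in range(n):
        for v in succ[u]:
            pred[v].append(u)
    comp, comps = [-1] * n, []
    def dfs2(u):                        # second pass: collect one SCC on the transpose
        comp[u] = len(comps) - 1
        comps[-1].append(u)
        for v in pred[u]:
            if comp[v] < 0:
                dfs2(v)
    for u in reversed(order):
        if comp[u] < 0:
            comps.append([])
            dfs2(u)
    return comps                        # topological order of the condensation

def solve_by_scc(H, succ, init):
    g = list(init)
    for scc in sccs_in_topological_order(succ):
        changed = True
        while changed:                  # fixpoint within each SCC
            changed = False
            for i in scc:
                v = H[i](g)
                if v != g[i]:
                    g[i] = v
                    changed = True
    return g

# Hypothetical system: node 0 feeds node 1; nodes 1 and 2 form a cycle.
H = [lambda g: 1, lambda g: max(g[0], g[2]), lambda g: min(g[1] + 1, 3)]
print(solve_by_scc(H, [[1], [2], [1]], [0, 0, 0]))  # [1, 3, 3]
```

Processing components in topological order means each SCC sees final values from all earlier components, so no component ever has to be revisited.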
  
===== References =====