====== Chaotic Iteration in Abstract Interpretation ======
\begin{array}{ll}
    g^{k+1}_i = H_i(g^k_1,\ldots,g^k_n)  \\
    & \mbox{\bf chaotic iteration} \\
    g^{k+1}_j = g^k_j, \mbox{ for } j \neq i
\end{array}
\]
    
Questions:
  - What is the cost of doing one chaotic versus one parallel iteration? ++|chaotic is $n$ times cheaper++
  - Does chaotic iteration converge if parallel iteration converges?
  - If it converges, will it converge to the same value?
  - If it converges, how many steps will convergence take?
  - What is a good way of choosing the index $i$ (the iteration strategy)? Example: take some permutation of the equations.

Let $I, L_1, L_2, \ldots, L_n, \ldots$ be the vectors of values $(g^k_1,\ldots,g^k_n)$ in parallel iteration and

$I, C_1, C_2, \ldots, C_n, \ldots$ the vectors of values $(g^k_1,\ldots,g^k_n)$ in chaotic iteration

(both starting from the same initial lattice value $I$).

Compare the values $I$, $L_1$, $C_1$, $L_n$, $C_n$ in the lattice
  * in general
  * when selecting equations by a fixed permutation
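The comparison can be tried concretely. The following is a small sketch (the three equations $H_i$ and the powerset lattice are made up for illustration, not taken from the notes): parallel iteration recomputes the whole vector each step, while chaotic iteration with a round-robin (fixed permutation) strategy updates one component at a time; both reach the same least fixpoint.

```python
# Sketch: parallel vs. chaotic iteration over the lattice (P({1,2,3}), subset, union).
# The monotone equations H_i below are invented for the example.

def H(g):
    """All H_i evaluated on the same vector g (one parallel step)."""
    return (frozenset({1}),   # H_1: constant
            g[0] | g[2],      # H_2: join of predecessors
            g[1])             # H_3: copy of a predecessor

def parallel_iterate(g):
    steps = 0
    while True:
        new = H(g)
        steps += 1            # one full-vector evaluation of H
        if new == g:
            return g, steps
        g = new

def chaotic_iterate(g):
    """Round-robin strategy: update a single component per step."""
    steps = 0
    while True:
        changed = False
        for i in range(len(g)):
            new_i = H(g)[i]   # only component i is updated
            steps += 1        # one single-equation update
            if new_i != g[i]:
                g = g[:i] + (new_i,) + g[i + 1:]
                changed = True
        if not changed:
            return g, steps

bottom = (frozenset(), frozenset(), frozenset())
fp_par, n_par = parallel_iterate(bottom)
fp_cha, n_cha = chaotic_iterate(bottom)
assert fp_par == fp_cha       # same least fixpoint either way
```

On this example both iterations converge to $(\{1\},\{1\},\{1\})$; each chaotic step touches one equation instead of $n$, which is the cost difference the first question asks about.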

==== Worklist Algorithm and Iteration Strategies ====

Observation: in practice $H_i(g_1,\ldots,g_n)$ depends only on a small number of the $g_j$, namely those corresponding to the predecessors of node $p_i$.

Consequence: after updating $g_i$, it suffices to reconsider only the successors of $p_i$ (this saves traversing the whole CFG).

This leads to a worklist algorithm:
  * initialize the lattice, put all equations into the worklist
  * choose $i$ from the worklist, compute the new $g_i$, remove $i$ from the worklist
  * if $g_i$ has changed, update it and add to the worklist each $j$ such that $p_j$ is a successor of $p_i$

The algorithm terminates when the worklist is empty (no more changes are possible).
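The steps above can be sketched as follows (the graph, the `init` sets, and the join-over-predecessors transfer function are assumed examples, not part of the notes):

```python
# Minimal worklist fixpoint sketch: succ[i] / preds[i] describe the CFG,
# transfer(i, g) recomputes g_i from the values of the predecessors of p_i.
from collections import deque

def worklist_fixpoint(n, succ, preds, transfer, bottom):
    g = [bottom] * n
    work = deque(range(n))          # start with all equations in the worklist
    in_work = [True] * n
    while work:
        i = work.popleft()
        in_work[i] = False
        new = transfer(i, g)        # new value of g_i
        if new != g[i]:
            g[i] = new
            for j in succ[i]:       # only successors of p_i may now change
                if not in_work[j]:
                    work.append(j)
                    in_work[j] = True
    return g

# Example use: a 3-node graph and a reachability-style analysis.
succ = {0: [1, 2], 1: [2], 2: []}
preds = {0: [], 1: [0], 2: [0, 1]}
init = [frozenset({"x"}), frozenset(), frozenset()]

def transfer(i, g):
    acc = init[i]
    for p in preds[i]:
        acc = acc | g[p]            # join over predecessor values
    return acc

result = worklist_fixpoint(3, succ, preds, transfer, frozenset())
```

The `in_work` flags keep each equation in the worklist at most once, which is a common refinement of the plain algorithm above.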

Useful iteration strategy: reverse postorder combined with strongly connected components.

Reverse postorder: follow changes through successors in the graph, so that each node is processed after its predecessors (except along back edges).

Strongly connected component (SCC) of a directed graph: a maximal set of nodes with a path between each two nodes of the component.
  * compute until fixpoint within each SCC before moving on
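A reverse postorder can be obtained from a depth-first traversal of the CFG; the following sketch (the diamond-shaped example graph is assumed, not from the notes) records each node after all of its successors and then reverses the list:

```python
# Sketch: reverse postorder of a CFG via DFS.
def reverse_postorder(succ, entry):
    seen, order = set(), []
    def dfs(u):
        seen.add(u)
        for v in succ[u]:
            if v not in seen:
                dfs(v)
        order.append(u)        # postorder: emitted after all successors
    dfs(entry)
    return order[::-1]         # reversing postorder gives the desired order

# Diamond CFG: 0 -> {1, 2}, 1 -> 3, 2 -> 3.
succ = {0: [1, 2], 1: [3], 2: [3], 3: []}
rpo = reverse_postorder(succ, 0)   # → [0, 2, 1, 3]
```

In the resulting order `[0, 2, 1, 3]`, the join node 3 is visited only after both of its predecessors, so in an acyclic graph one pass in reverse postorder already propagates all changes forward.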
  
===== References =====