1. Find out how to handle constraints among parameters (actually may be similar to SMT) [May not be tractable]


choose(x => a <= b <= c <= d <= e && (x == a || x == b || ... || x == e))

should yield

   if (x == c) return c
   else if (x >= d) {
     if (x == d) return d
     else if (x >= e) {
       if (x == e) return e
       else throw new UnsatisfiableConstraint
     }
     else throw new UnsatisfiableConstraint
   }
   else ...   // symmetrically search a and b

Currently the algorithm will generate code of linear size that runs in linear time(?), whereas the example above is linear in size but logarithmic in running time
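
The logarithmic-time code above is essentially a binary search over the sorted candidate values. A sketch of the target behaviour (in Python, with hypothetical names; the actual synthesizer would emit the unrolled if/else tree rather than a loop):

```python
def choose_sorted(x, candidates):
    """Given candidates sorted in non-decreasing order (a <= b <= ... <= e),
    return x if it equals one of them, mirroring the if/else tree above.
    Takes O(log n) comparisons instead of the linear chain."""
    lo, hi = 0, len(candidates) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if x == candidates[mid]:
            return candidates[mid]
        elif x > candidates[mid]:
            lo = mid + 1
        else:
            hi = mid - 1
    # no candidate equals x: the constraint is unsatisfiable
    raise ValueError("UnsatisfiableConstraint")
```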

  • Check if it is possible to use Voronoi Diagrams theory for Cell Decomposition (Sweeping algorithms)

Literature Survey on Integer and Rational Synthesis

Preliminary Definitions

Linear Rational Arithmetic formula

Polyhedron (always assumed convex)


Handling negations and strict inequalities

$\neg (a^T x \le b)$ is replaced by $a^T x > b$, and similar rewrites apply to the other constructs. So, to handle negations, one needs only to handle strict inequalities.

To handle the strict inequality $a^T x < b$, we can add a slack variable $\delta$ such that $a^T x + \delta = b$, and then maximize $\delta$. If the maximum is 0, the strict inequality cannot hold; otherwise it is satisfiable and the value of $\delta$ can be discarded. Can multiple strict inequalities be handled this way??

Alternatively, we can use the notion of Not Necessarily Closed (NNC) Polyhedra, for which the standard polyhedral operations can be implemented with some modifications. Refer to “Not Necessarily Closed Polyhedra and the Double Description Method” (not_necessarily_closed_polyhedra_and_double_description.pdf)

Linear Rational Synthesis with disjunctions

It is assumed that the input formula is a quantifier-free formula built from the following forms:

  • $\phi_1 \vee \phi_2$
  • $\phi_1 \wedge \phi_2$
  • $a^T x \le b$
  • $a^T x = b$

Quantifiers, if any, as well as other operations such as $\neg$ or $\ge$, have been eliminated during preprocessing

It is clear that every formula containing only linear constraints with conjunctions and disjunctions can be put into this form, if strict inequalities are included.

The elements of matrix A are all known rational values, while those of b are unknown rational parameters which only become available at runtime. The aim is to generate code which uses the parameters and efficiently finds one satisfying assignment of values.

We shall refer to the parameters as “input variables” and to the variables whose values are to be found as “output variables”.


Linear disjunctive programs are powerful enough to represent Integer Linear Programming, Quadratic Programs (FIXME? example), etc. Finding a solution to such a problem is NP-hard, even without parameters.

With parameters, the problem is decidable, even if all the parameters are unknown. We consider the case where the values in A are known, while those of b are parameters. Even if none of the values of A were known, it would still be possible to generate code for solving this problem, using Gaussian elimination for equalities and Fourier-Motzkin elimination for inequalities. The problem is solvable: for Gaussian elimination, code of polynomial size (wrt formula size) which executes in polynomial time can be generated, while Fourier-Motzkin elimination yields code of doubly-exponential size (wrt formula size) which executes in FIXME time.
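
The Fourier-Motzkin step can be sketched over exact rationals as follows (a minimal illustration, not the synthesizer's actual implementation; the constraint encoding is an assumption):

```python
from fractions import Fraction

def fm_eliminate(ineqs, j):
    """Eliminate variable j from a list of inequalities.
    Each inequality is (coeffs, b), encoding sum(coeffs[i]*x[i]) <= b,
    with all entries exact Fractions. Returns inequalities over the
    remaining variables (coefficient j becomes 0)."""
    lower, upper, rest = [], [], []
    for (a, b) in ineqs:
        if a[j] > 0:
            upper.append((a, b))   # gives an upper bound on x_j
        elif a[j] < 0:
            lower.append((a, b))   # gives a lower bound on x_j
        else:
            rest.append((a, b))
    out = list(rest)
    # Combine every lower bound with every upper bound: this pairing is
    # the source of the (iterated, doubly exponential) blow-up.
    for (al, bl) in lower:
        for (au, bu) in upper:
            coef = [al[k] / (-al[j]) + au[k] / au[j] for k in range(len(al))]
            out.append((coef, bl / (-al[j]) + bu / au[j]))
    return out
```

For example, eliminating y from {x + y <= 4, -y <= 0} leaves x <= 4.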

Handling maximization problems

The expected kind of input for this program is (FIXME: use a better example; check)

choose( (x:Float) => x>=0 && x+y <= a && (y-x >= b || 2*x-y <= 7*c ) )

or, if arbitrary precision is required,

choose( (x:BigDecimal) => x>=0 && x+y <= a && (y-x >= b || 2*x-y <= 7*c ) )

and the expected output is code giving a value for x satisfying these conditions. Clearly, many solutions are possible; the code will output only one. Here,

x = (a+7c)/3

and

x = (a-b)/2

are both valid solutions.


For the same choose expression, the synthesized code must also include the guard conditions under which each formula applies. For example,

  if (a - b >= 0)
    x = (a-b)/2
  else if (a + 7c >= 0)
    x = (a+7c)/3
  else
    throw new UnsatisfiableConstraint

is a valid solution

Thus there are 2 parts to the problem:

  1. Identifying when the solution to a system exists in terms of the input variables
  2. Outputting a symbolic representation of the vector of output variables in terms of the input variables
Relation to Quantifier elimination

The solution to this problem can be achieved by quantifier elimination (e.g. Ferrante-Rackoff elimination or the work described here: Quantifier Elimination for Real Arithmetic). Quantifier elimination is in fact projection, in our setting. However, we can use the geometric structure of the problem to achieve runtime efficiency and better worst-case bounds(???) on the elimination. Further, we wish not only to eliminate quantifiers, but also to produce a witness function so that the values of the output variables can be recovered. In this case, eliminating quantifiers while simultaneously storing the witness functions gives no advantage(?? is worse than) over calculating and minimizing the representation of the formula, and then applying the geometric projection operation on the resulting representation.

Polyhedra and unions of polyhedra

Polyhedra are defined as the sets of solutions of systems $Ax \le b$ for $x \in \mathbb{R}^d$. We consider 2 ways to represent polyhedra.

  • H(alfspace)-representation : The polyhedron is represented as $Ax \le b$, i.e. by the halfspaces which define it.
  • V(ertex)-representation: The polyhedron is represented by vertices, rays and lines, i.e. as the Minkowski sum: conv$\{v_1, v_2, \ldots, v_n\}$ + cone$\{w_1, w_2, \ldots, w_m\}$ + P

where P is the lineality space $\{x : Ax = 0\}$

(FIXME Is P required?)
Both these representations are unique if the polyhedron is “full-dimensional”. The V-representation is at worst exponentially larger than the H-representation. However, it is easy to intersect 2 H-representations, whereas it is easy to find the convex hull of 2 V-representations (simply take the union of the vertex, ray and line sets).

Operations on Polyhedra

Redundancy removal

This can be done by solving a linear program for each constraint to see if it is implied by the others.

To check if a constraint $c^T x \le d$ is redundant wrt the set $A x \le b$, solve

 max $c^T x$
 s.t. $A x \le b$

$c^T x \le d$ is redundant iff this maximum is at most $d$.

To check if a constraint $c^T x = d$ is redundant wrt the set $A x \le b$, solve

 max $c^T x$
 s.t. $A x \le b$

 min $c^T x$
 s.t. $A x \le b$

$c^T x = d$ is redundant iff both of these optima are equal to $d$.
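
Any LP solver can carry out these checks; as a self-contained stand-in (assuming a bounded, full-dimensional polygon in $\mathbb{R}^2$), the maximum can be read off the vertices, which a naive sketch enumerates from pairs of constraint lines:

```python
from fractions import Fraction
from itertools import combinations

def max_over_polygon(c, A, b):
    """Maximize c^T x over the bounded 2-D polyhedron A x <= b by
    enumerating candidate vertices from pairs of constraint lines
    (Cramer's rule) and keeping the feasible ones. Exact Fractions
    throughout; returns None if the polygon is empty."""
    best = None
    for (a1, b1), (a2, b2) in combinations(list(zip(A, b)), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if det == 0:
            continue  # parallel constraint lines: no vertex here
        x = (b1 * a2[1] - b2 * a1[1]) / det
        y = (a1[0] * b2 - a2[0] * b1) / det
        if all(ai[0] * x + ai[1] * y <= bi for ai, bi in zip(A, b)):
            val = c[0] * x + c[1] * y
            if best is None or val > best:
                best = val
    return best

def is_redundant(c, d, A, b):
    """c^T x <= d is redundant wrt A x <= b iff max c^T x is at most d."""
    m = max_over_polygon(c, A, b)
    return m is not None and m <= d
```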

The following heuristics can be applied to quickly remove redundancies:

Intersection

  To find the intersection in H-representation, we simply take the conjunction of the individual H-representations
Convex hull
  To find the convex hull of 2 polyhedra given in H-representation, consider the lifted polyhedron
   $A_1 z_1 \le \lambda_1 b_1$
   $A_2 z_2 \le \lambda_2 b_2$
   $z = z_1 + z_2$, $\lambda_1 + \lambda_2 = 1$, $\lambda_1, \lambda_2 \ge 0$
   Now, project out $z_1$, $z_2$, $\lambda_1$ and $\lambda_2$ to obtain a representation $A' z \le b'$ of (the closure of) the convex hull
  However, if the polyhedra are given in V-representation, we can simply take the union of their vertex and ray sets to get the convex hull

Find if a hyperplane intersects the interior of a polyhedron (determining points of contact is not required): solve a linear program

   To check if $c^T x = d $ intersects $Ax \le b$, consider
    max $c^T x $
    s.t. $Ax \le b$
    min $c^T x $
    s.t. $Ax \le b$
   If $d$ lies strictly between these values, the hyperplane intersects the interior; otherwise it does not


   Alternatively, solve
    max $c^T x $
    s.t. $Ax \le b$
         $c^T x \le d $
   This maximum equals $d$ iff the hyperplane intersects the polyhedron (though not necessarily its interior)

Using the Polyhedral domain

A standard technique, used by many polyhedral manipulation libraries, is to store both H and V representations, i.e. a double description of polyhedra.

For polyhedral manipulations, we may use a standard library such as PPL (the Parma Polyhedra Library). A list of available tools is present at Wikipedia

A survey of these representations is “Canonical Representations of Polyhedra” by Fukuda et al. (canonical_representations_of_polyhedra.pdf). Also, the book “Theory of Linear and Integer Programming” by Schrijver has some relevant results and proofs.


Handling parameters


Given a formula with parameters, e.g.

x, y, z = 0 or a

we consider the polyhedron formed by taking all the variables (including the parameters on the RHS) as one system, i.e.

t = 0 or t - a = 0, for t = x, y, z

Thus, we effectively treat this as a problem in a higher-dimensional space.

Simplification of Formulae

Simplification is tricky because the same formula may have many different representations, e.g.

x,y,z= 0 or 1

is equivalent to

x=1,y=z=0 or y=1,x=z=0 or z=1,y=x=0 or x=y=z=0

Thus the first representation produces 8 disjuncts while the second, equivalent one produces only 4 (for n variables, it would be $2^n$ vs $n+1$). We use the notion of cells to simplify formulae.

Redundancy removal is also an important operation: the input given by the user is an intersection of halfspaces, which may contain redundant constraints.


We partition the region into a union of a set $S$ of convex polyhedra, each of which is called a “cell”.

A cell is defined as any convex polyhedron, bounded or unbounded.

A cell decomposition is a set of cells which satisfies:

  • The cells are disjoint, except for their faces, i.e. the cells have disjoint interiors
  • It has the same feasible region as $S$

This is in slight contrast to the definition of cell complex found elsewhere in the literature, for example, Fukuda's Polyhedral FAQ, where each face of each cell is also included in the cell complex. We drop this condition.

We will call a cell decomposition “optimal” if no two cells in the decomposition have a union which is itself a convex polyhedron. The optimal decomposition is unique if the entire region is convex; otherwise many optimal decompositions can exist.

As cell decomposition is not unique, it can be achieved in different ways; we explain with respect to the 2-polyhedron case:

  • We can consider all the constraints together and consider the set of generated regions (the size of this set will be exponential in the number of constraints – exact answer is ~ sum C(n+i,i) FIXME). Thus each face of each cell will lie on one of the constraint hyperplanes of the system.

Naive Algorithm

Consider all $2^F$ possibilities for regions (i.e. taking each constraint either as-is or negated).
Some will be ruled out as unsatisfiable straightaway (use an LP solver).
For the remaining regions, use a SAT solver to find out whether each is implied by the constraints (or convert to DNF and check implication).
Output those regions which are implied by the formula as the cells.
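
A minimal 1-D illustration of this naive enumeration, with an interval check standing in for the LP call and a sampled point standing in for the SAT/implication check (the sampling shortcut is not sound in general; names are hypothetical):

```python
from itertools import product

def naive_cells(thresholds, formula):
    """Enumerate all 2^F sign assignments over constraints x <= t_i,
    keep the feasible ones (a 1-D interval check replaces the LP call),
    and keep those whose region lies inside the formula (tested on one
    sample point, replacing the implication check). `formula` is a
    predicate on x; returns (lo, hi) cells, None meaning unbounded."""
    cells = []
    for signs in product([True, False], repeat=len(thresholds)):
        # True: x <= t_i holds; False: its (closed) negation x >= t_i
        uppers = [t for t, s in zip(thresholds, signs) if s]
        lowers = [t for t, s in zip(thresholds, signs) if not s]
        lo = max(lowers) if lowers else None
        hi = min(uppers) if uppers else None
        if lo is not None and hi is not None and lo > hi:
            continue  # unsatisfiable region (the "LP solver" step)
        # pick a representative point for the implication check
        if lo is None and hi is None:
            sample = 0
        elif lo is None:
            sample = hi - 1
        elif hi is None:
            sample = lo + 1
        else:
            sample = (lo + hi) / 2
        if formula(sample):
            cells.append((lo, hi))
    return cells
```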

The following algorithm probably has the same worst-case complexity, but better practical complexity.

Pseudocode for generating cells for 2 polyhedra
Input: $P_1$ and $P_2$ in the form of constraint matrices
Output: A list of polyhedra as constraint matrices which gives a cell decomposition of $P_1 \cup P_2$
Let $P_2' = \mathrm{List}(P_2)$
Let the constraints of $P_1$ be $a_1, a_2, \ldots, a_m$
For i = 1..m:
  For each $Q_j$ in $P_2'$:
    If $a_i$ intersects the interior of $Q_j$, remove $Q_j$ from $P_2'$ and add $Q_j \wedge a_i$ and $Q_j \wedge \neg a_i$ to $P_2'$
Return $P_1 :: P_2'$ as the cell decomposition

Here, $\neg$ should be taken to mean the closure of the negation, i.e. $\neg a^T x \le b $ is $a^T x \ge b $
When the algorithm terminates, no constraint $a_i$ of $P_1$ intersects the interior of any polyhedron in $P_2'$, and by construction no 2 polyhedra in $P_2'$ have intersecting interiors. Pieces of $P_2'$ that are contained in $P_1$ are redundant and can be discarded, so that $P_1$ and the remaining pieces have pairwise disjoint interiors. This operation runs in time exponential in the number of constraints of $P_1$ in the worst case.
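
The pseudocode can be sketched in 2-D as follows (a toy illustration under assumptions: polyhedra are bounded, the interior-intersection test is done on vertices instead of an LP, and pieces swallowed by $P_1$ are dropped at the end):

```python
from fractions import Fraction
from itertools import combinations

def contains(poly, v):
    """Is point v in the polyhedron given as constraints ((a1, a2), b)?"""
    return all(a[0] * v[0] + a[1] * v[1] <= b for a, b in poly)

def vertices(poly):
    """Vertices of a bounded 2-D polyhedron, enumerated naively from
    intersections of pairs of constraint lines (Cramer's rule)."""
    vs = []
    for (a1, b1), (a2, b2) in combinations(poly, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if det == 0:
            continue  # parallel constraint lines: no vertex
        v = ((b1 * a2[1] - b2 * a1[1]) / det,
             (a1[0] * b2 - a2[0] * b1) / det)
        if contains(poly, v) and v not in vs:
            vs.append(v)
    return vs

def cell_decompose(p1, p2):
    """Split p2 by every constraint of p1 that cuts a piece's interior
    (checked on vertices here, standing in for the LP call); pieces
    contained in p1 are discarded so all cells have disjoint interiors."""
    pieces = [p2]
    for (a, b) in p1:
        nxt = []
        for q in pieces:
            vals = [a[0] * v[0] + a[1] * v[1] for v in vertices(q)]
            if any(t < b for t in vals) and any(t > b for t in vals):
                nxt.append(q + [(a, b)])                # the a.x <= b side
                nxt.append(q + [((-a[0], -a[1]), -b)])  # closed negation
            else:
                nxt.append(q)
        pieces = nxt
    pieces = [q for q in pieces
              if not all(contains(p1, v) for v in vertices(q))]
    return [p1] + pieces
```

For two overlapping squares this yields the first square plus three disjoint pieces of the second.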

Taken as formulae, this step is equivalent to: $ (a \wedge b) \vee (c \wedge d) \rightarrow (a \wedge b \wedge c) \vee (a \wedge b \wedge \neg c) \vee (c \wedge d) $, where both the disjuncts $(a \wedge b \wedge c)$ and $(a \wedge b \wedge \neg c)$ are non-empty. This non-emptiness is what the LP solver checks.

The intersection check solves a linear program, thus using the geometry of $\mathbb{R}^d$, which should(?) make this algorithm more powerful than blind manipulation of the constraints.

  • Perhaps more efficiently, we can add additional constraints to create cells. We know that some coordinate must eventually be projected away, so we may as well make cell boundaries perpendicular to that coordinate.

However, this requires calculating the vertices of the polyhedron, of which there may be exponentially many in the number of constraints.

For example, consider the intersection of 2 polyhedra, $P_1$ and $P_2$, in $\mathbb{R}^2$. If a point $p(x_0,y_0)$ is a vertex of the intersection of $P_1$ and $P_2$, we split $P_1$ into $P_1 \wedge (x \le x_0)$ and $P_1 \wedge (x \ge x_0)$. The second piece (with the overlap removed) is not a polyhedron, but it can be represented as the part of $P_1$ with $x \ge x_0$ that is not in $P_1 \cap P_2$. Taking the H-representation of $P_1 \cap P_2$, the decomposition of $P_1 \cup P_2$ into cells can then be found.

FIXME clarify and resize diagram

Arguably, the best decomposition in this case is the following:

Finally, the union of convex polyhedra is now represented as a disjoint union of these cells (i.e. as a list of polyhedra).

Algorithm for Cell Decomposition

Starting from the formula, given as a tree whose internal nodes are conjunctions and disjunctions and whose leaves are linear rational constraints, we wish to output a cell decomposition of the feasible region of the formula. We create this decomposition by exploiting the tree structure of the constraints. Our aim is to obtain an optimal cell decomposition for the set.


The algorithm simply generalizes the above procedure to conjunctions and disjunctions of polyhedra, using the tree structure to do so.

Algorithm: CellDecompose
Input: Linear Rational Arithmetic Formula as a Tree
Output: A list of disjoint polyhedra which together cover the same region as the formula
Assume: The internal nodes are only conjunctions and disjunctions, and the leaves are only polyhedral constraints (i.e. equalities and inequalities in matrix form)

If the node is a disjunction, recursively decompose the children and make the resulting cells disjoint by pairwise splitting, as in the 2-polyhedron procedure above. If the node is a conjunction, recursively decompose the children and take the pairwise intersections of their cells. If the node is a leaf, its constraint is itself a single cell.

Main Algorithm

It is instructive to consider some important special cases of the algorithm: when the constraints consist of only conjunctions, only disjunctions, or are in disjunctive or conjunctive normal form.

Only conjunctions

We convert this set of constraints into a Polyhedron to potentially exploit the geometric structure.
In this case, we can apply Gaussian elimination to the equalities first, then apply projection operations (such as Fourier-Motzkin elimination) to eliminate the output variables and thereby obtain a polyhedron over only the input variables. The H-representation of this polyhedron is also computed.
A solution exists for given parameter values iff the input variables satisfy all the inequalities of this polyhedron, and the corresponding values of the output variables are given by the formulae derived during the elimination. So the generated code is of the form

if( <H-representation matrix>*input_variables <= 0 )
    output_variables = generatorMatrix * input_variables
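
For the equality-only subcase, the offline Gaussian elimination and the generated runtime code can be sketched as follows (hypothetical names; the inequality guard and the H-representation check are omitted):

```python
from fractions import Fraction

def solve_symbolic(A_out, A_in, c):
    """Given equalities A_out * out = A_in * inp + c with A_out square and
    invertible (exact Fractions), Gaussian elimination yields out as an
    affine function of the inputs: out = M * inp + d. This plays the role
    of the generatorMatrix in the generated code above."""
    n = len(A_out)
    # augmented rows: [A_out | A_in | c]
    rows = [list(A_out[r]) + list(A_in[r]) + [c[r]] for r in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if rows[r][col] != 0)
        rows[col], rows[piv] = rows[piv], rows[col]
        p = rows[col][col]
        rows[col] = [v / p for v in rows[col]]
        for r in range(n):
            if r != col and rows[r][col] != 0:
                f = rows[r][col]
                rows[r] = [v - f * w for v, w in zip(rows[r], rows[col])]
    M = [row[n:-1] for row in rows]
    d = [row[-1] for row in rows]
    return M, d

def run_generated_code(M, d, inp):
    """The runtime part: out = M * inp + d."""
    return [sum(m * x for m, x in zip(row, inp)) + di
            for row, di in zip(M, d)]
```

The elimination runs once at synthesis time; only the final matrix-vector product is executed at runtime.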
Disjunctive normal form

Here each conjunction is represented as a polyhedron; the aim is to find a minimal representation for the union of these polyhedra, which is what the disjunctive normal form describes. We apply the cell decomposition method above to get a representation as a union of disjoint cells. Then we use each cell to generate the code for its polyhedron. After this, the pieces of code can simply be concatenated to give the final code, e.g.

if(<poly-1 condition>)
  output = poly1_mat * input
if(<poly-2 condition>)
  output = poly2_mat * input

We can apply heuristics such as:

  • Order the inputs by the size(volume) of the polyhedral regions they define
  • The conditions can be represented as a decision tree and a reduced decision tree can be constructed

The main algorithm is a slight generalization of the above algorithms.

Pseudocode for the Main Algorithm

Procedure RationalSynthesis
Input: Formula $\phi$, List of input variables $I$ , List of output variables $O$
Output: Code for a function which takes $I$ as input and uses these values to compute values for variables in $O$ such that $\phi$ is satisfied
Algorithm: The input formula is assumed to be in the above normal form. The Algorithm proceeds in three steps.

  • Simplification The input formula is treated as representing a union of polyhedra given in the H representation. We apply the following techniques to eliminate redundant constraints:
    • Split the polyhedral region into cells as described in the previous section. The number of cells can be exponential in the number of constraints in the worst case.
    • Remove redundant constraints, i.e. constraints which are implied by the remaining set of constraints. This can be done by calling an LP solver to check whether a given constraint is redundant.
  • Projection

The formula is represented as a list of disjoint polyhedra, i.e. as a disjunctive normal form; however, our representation is better because the polyhedra are disjoint. We build the projection onto the set $I$ by merging the projections of each cell.

If the formula $\phi = \phi_1 \vee \phi_2$, then we take the union of the projections of $\phi_1$ and $\phi_2$: this is just the union of the polyhedra representing them, which we then simplify. If the formula $\phi = \phi_1 \wedge \phi_2$, then we consider pairwise the polyhedra that make up $\phi_1$ and $\phi_2$ and, possibly, simplify each conjunction. If the formula $\phi$ is a linear constraint, i.e. $A x \le b$, then we represent it as a polyhedron and project onto $I$ by eliminating the variables in $O$ with Fourier-Motzkin elimination.

  • Code Generation

The intermediate representation in which the code is returned is a list of polyhedra in which the output variables have been projected out. As the representation at this stage is in fact just a DNF, the code generation strategy is the same; the difference is that the polyhedra are now disjoint.

Given this list of polyhedra, we generate the decision trees for each polyhedron, using the H-representations, and then merge these decision trees. It is possible to construct reduced decision trees.

For one polyhedron, the output is linear in the input variables:

  if (<condition on input variables>) output_variables = matrix * input_variables

Once the representation of the projection in terms of the input parameters has been obtained, it must be converted into a decision tree, i.e. an if-then-else tree, from which code can be generated. Possible optimizations:


  • The decision trees can be created and propagated upward in Step 2 instead of representing as polyhedra
  • To solve conjunctions, we can use a constructive version of Helly's theorem to find a point in the intersection.
  • Instead of projecting the whole set, project the convex hull of the set of inequalities, and output any extremal point of this projection.
  • Block elimination instead of eliminating variables one by one


Notes about the implementation

For solving linear programs, the lp_solve Java interface was used (lp_solve).

Summer School on Convexity

The scribed notes of the Summer School have been stored at EPFL Summer School on Convexity And Spectral Algorithms

Helly's theorem can be used, if the number of conjunctions is large compared to the dimension, to efficiently decide whether the intersection of sets is non-empty. For example, to decide whether a set $S$ of convex sets with $|S|=n$ in $\mathbb{R}^d$ has non-empty intersection, we consider every subset of $d+1$ sets in $S$ and check their intersection. If every such subset of $d+1$ sets has non-empty intersection, then all the sets must intersect.
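
For intuition, in $\mathbb{R}^1$ (where $d+1 = 2$) Helly's theorem says that closed intervals have a common point iff they pairwise intersect, and here the common point is easy to construct; a sketch:

```python
def pairwise_intersect(intervals):
    """The Helly hypothesis in R^1: every pair of closed intervals
    (d + 1 = 2 sets) has a common point."""
    return all(max(l1, l2) <= min(r1, r2)
               for i, (l1, r1) in enumerate(intervals)
               for (l2, r2) in intervals[i + 1:])

def common_point(intervals):
    """Helly's conclusion, constructively in R^1: all intervals share a
    point iff max(lefts) <= min(rights); returns a witness or None."""
    lo = max(l for l, r in intervals)
    hi = min(r for l, r in intervals)
    return (lo + hi) / 2 if lo <= hi else None
```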

TODO: Traditional Proof of Helly's theorem is an existence proof, i.e. it proves that a point exists, but does not describe how to find the point. Find a constructive proof and see if it can be efficiently used.

Proof for a property of Convex Hull Extreme Point Algorithm

Consider the convex hull of a set $S$ of (not necessarily disjoint) polyhedra. We prove that any extreme point of this convex hull is an extreme point of some polyhedron in $S$.

For simplicity, let $S$ contain only the polyhedra $P_1$ and $P_2$. A point $p$ of a convex set $C$ is called an extreme point if, for all $p_1, p_2 \in C$ and $\lambda_1, \lambda_2 > 0$ with $\lambda_1 + \lambda_2 = 1$, $p = \lambda_1 p_1 + \lambda_2 p_2$ implies $p = p_1 = p_2$.

Clearly, if a point is an extreme point of the convex hull and belongs to some polyhedron, then it is an extreme point of that polyhedron.

Consider an extreme point $p = \lambda_1 p_1 + \lambda_2 p_2$ of the convex hull, with $\lambda_1 + \lambda_2 = 1$, $\lambda_1, \lambda_2 > 0$, and $p_1$ and $p_2$ belonging to polyhedra in $S$.
Then, by definition, $p = p_1 = p_2$, so $p$ belongs to some polyhedron in $S$ and, by the observation above, is an extreme point of that polyhedron.

The above definition and proof extend trivially to any number of polyhedra.

Note: The extreme points of the convex hull are its vertices, if it is topologically closed.

The above result can help us quickly check for the existence of solutions. We can calculate the convex hull of the set of constraints and return any extremal point.

Gröbner Basis Approach for Integer Arithmetic