
The MiniJava+ Compiler Race

To celebrate the completion of the first basic version of your MiniJava+ compilers, we are organizing a competition.

We will test all your compilers for correctness against a series of tests (some of which should pass, while others should fail). The tests will cover all phases of your compiler, not only code generation (although the correctness of the execution will be the criterion for all passing examples). In a separate track, we will measure the size of the generated code. We originally also planned to measure the total execution time of your compiler, but we are not measuring execution time anymore: for technical reasons, the differences recorded were too small to call a winner.

The competition will take place on Dec. 1st, 10.15am in INM202.

Test suite

The suite will contain benchmarks coming from three sources:

  1. The benchmarks you have written in the first weeks of the class
  2. Benchmarks from us that we won't publish before the competition
  3. New benchmarks that you can submit to us for inclusion until Nov. 30th, 5pm. Try to think of how you would crash or confuse your friends' compilers!

Scoring

Your compilers will be measured against javac. Using our own compiler implementation, we will make sure before the competition that the differences between the Java and the MiniJava+ language definitions do not influence the output. Scoring will be as follows:

  • The compiler generates the proper class files, and the execution produces an output that matches javac/java → +1 pt.
  • The compiler correctly rejects an invalid input file → +1 pt.
  • The compiler incorrectly rejects a valid input file → -1 pt.
  • The compiler incorrectly accepts an invalid input file and produces some result → -2 pts.
  • The execution of the generated class files does not match the reference (or the bytecode cannot be verified, for instance) → -2 pts.
  • The compiler crashes (an infinite loop is considered a crash) → -2 pts.
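
As a purely illustrative example, a compiler that handles 8 valid programs correctly, correctly rejects 4 invalid ones, wrongly rejects 1 valid program and crashes on 1 test would score 8 + 4 - 1 - 2 = 9 points.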

Be aware that if you use the error() method from scala.Predef to report errors, the run will be considered a crash (since it terminates by throwing an exception).
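
One simple way to avoid this (a minimal sketch; the Reporter object and the fatalError name are purely illustrative, not part of any provided stubs) is to print the message to standard error and terminate with a nonzero exit code instead of throwing:

object Reporter {
  // Report a fatal compilation error without throwing: print the message
  // to stderr and terminate the compiler with a nonzero exit status.
  def fatalError(msg: String): Nothing = {
    Console.err.println("Error: " + msg)
    sys.exit(1)
  }
}

Reporting errors this way terminates the run without an exception, so rejecting an invalid input should not be mistaken for a crash.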

Scoring on the “code size” track will be done just as you would guess: smaller is better.

Entering

Your Labs 09 submission will automatically be entered in the competition (unless you tell us you don't want to compete). To preserve the anonymity of the results, please let us know in advance the name you want displayed for your group's results.

If you find bugs between the submission deadline and the competition, you are free to fix them and send us a new version for the competition, but the lab grading will still be based on the version available to us at submission time. If you send a modified version, the same deadline as for benchmarks applies.

Submission of benchmarks

Send us the benchmarks you would like to see added to the suite by email1) before the deadline. Make sure you state whether each benchmark should pass or fail (for instance, send us a pass.zip and a fail.zip file). For benchmarks that should pass, make sure the first line of the file contains the name of the main class in a // comment, with no surrounding spaces. You should at the very least send us the benchmarks you wrote to test your own compiler: since you know your compiler works on them, you have nothing to lose!

For instance:

//TestSubclass
class TestSubclass {
  public static void main(String[] args) {
    System.out.println("Should say '42': " + new B().getField());
  }
}

class B extends A {
  public int getField() { boolean b; b = this.setValue(); return field; }
}

class A {
  int field;
  public boolean setValue() { field = 42; return true; }
}

The benchmarks will be anonymous.

Prizes

The best implementations will earn bonus points for their labs, will be honored on the course web page, and may receive a symbolic (non-cash) prize.

The grading for Labs 09 will be done separately from the competition. For example, you will not lose points on Lab 09 if your lexer crashes.

1)
to Philippe