Quality assessment of Haskell programs

One of the greatest things about writing code in Haskell is the wonderful libraries (incidentally, one of the worst things about writing code in Haskell is also the libraries). In particular, the libraries for assessing the quality of your own code. I’m especially fond of:

  • QuickCheck for quickly testing the correctness of my code: it lets me specify the properties I want to check and takes care of generating random test cases.

  • Criterion for measuring the time performance of my code in a statistically sound manner, taking care of details such as the garbage collector.

In this post I give a quick tour of how to use these libraries.

Getting the libraries

Our first step is to install QuickCheck and Criterion:

$ cabal install criterion quickcheck

Now we are ready to go.

Backstory

The standard list-based implementation of quicksort that you so often see seems ripe for some optimisation:

quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (p:xs) = quicksort lesser ++ [p] ++ quicksort greater
    where
        lesser  = [ x | x <- xs, x < p]
        greater = [ x | x <- xs, x >= p]

One obvious optimisation is to traverse xs only once, partitioning the elements into those lesser than, greater than, and equal to p in a single pass. Here is one implementation that uses foldl' for maximum performance (we hope):

kquicksort :: Ord a => [a] -> [a]
kquicksort []     = []
kquicksort (p:xs) = kquicksort lesser ++ equal ++ kquicksort greater
    where
        (lesser,equal,greater) = foldl' part ([],[p],[]) xs
        part (l,e,g) x =
          case compare x p of
            LT -> (x : l, e, g)
            GT -> (l, e, x : g)
            EQ -> (l, x : e, g)
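
A quick sanity check in GHCi (my own example; note that foldl' must be imported from Data.List, as in the complete program below):

ghci> kquicksort [20, 10, 30, 10]
[10,10,20,30]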

Checking some properties

The basic usage of QuickCheck is to specify the properties you care about as ordinary Haskell functions, and then you use the quickCheck function to check the properties. In the following I use QC as the qualified name for values from the QuickCheck library.

The first property we want to check is that kquicksort returns a sorted result. To state this property we also define a helper predicate sorted that tests whether a list is in order:

sorted :: Ord a => [a] -> Bool
sorted (x1:x2:xs) = x1 <= x2 && (sorted $ x2:xs) 
sorted _          = True

prop_sorted :: Ord a => [a] -> Bool
prop_sorted xs = sorted $ kquicksort xs

The next properties we want to check are that kquicksort is idempotent, and that it doesn’t mess up an argument that is already ordered:

prop_idempotent :: Ord a => [a] -> Bool
prop_idempotent xs = kquicksort xs == (kquicksort $ kquicksort xs)

prop_ordered :: Ord a => QC.OrderedList a -> Bool
prop_ordered (QC.Ordered xs) = xs == kquicksort xs

For prop_ordered we use the QuickCheck wrapper type OrderedList to specify that this property should only be checked on arguments that are already sorted.
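
None of these properties rules out that kquicksort drops or duplicates elements of an arbitrary input. One way to close that hole (an extra check of my own, not part of the complete program below; it needs sort from Data.List) is to compare against the reference implementation:

prop_elements :: Ord a => [a] -> Bool
prop_elements xs = sort xs == kquicksort xs

This one property subsumes the others, but the weaker properties are still useful for pinpointing which aspect of the implementation is broken when a test fails.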

Now we can use the quickCheck function to check all our properties:

checkAll = do
  QC.quickCheck (prop_sorted :: [Int] -> Bool)
  QC.quickCheck (prop_idempotent :: [Int] -> Bool)
  QC.quickCheck (prop_ordered :: QC.OrderedList Int -> Bool)

The type annotations on the properties are necessary because they resolve the overloading, so that quickCheck knows at which type to generate random test cases. Alternatively, we could give our properties less general, non-polymorphic types. However, by keeping the more general types we can check the properties at other types as well. For instance, we might want to check them for a custom type, like the following type Colour:

data Colour = Red | Green | Blue
            deriving (Eq, Show, Ord)

Before we can use QuickCheck with the Colour type, we need to specify how to generate random values of this type. That is, make it an instance of the Arbitrary type class:

instance QC.Arbitrary Colour where
  arbitrary = QC.elements [Red, Green, Blue]

Now we can check the prop_sorted property for Colour by adding a simple type annotation:

QC.quickCheck (prop_sorted :: [Colour] -> Bool)
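
When a property holds, each call prints a reassuring line along the lines of:

+++ OK, passed 100 tests.

and when a property fails, QuickCheck prints a (shrunk) counterexample instead.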

Checking the performance

Once we have convinced ourselves that the code is correct, we can start worrying about performance. Thus, we turn to Criterion. When doing performance testing there are all sorts of gnarly details that can go wrong and ruin our experiments: we forget to force lazy evaluation and thus measure the wrong thing; the running time of the function we try to measure is too short to give a meaningful reading with the clock resolution of the computer we use; the garbage collector might carry noise from one sample over into another; we might be using the computer for something else while the experiments run; and so on.

In the following I use C as the qualified name for values from the Criterion library.

Criterion takes care of all these concerns and presents us with a simple interface: the only things we need to specify are which functions we want to run, on what data, and how much of the result we want evaluated. Thus, to benchmark kquicksort on the list [20, 10, 30] and force a fully evaluated result, we first use the nf (for normal form) function from Criterion to get something Benchmarkable (Criterion also has whnf, which forces only weak head normal form, but for a sort we want the whole result forced):

C.nf kquicksort [20, 10, 30]

When we have something benchmarkable, we use the bench function to label it and turn it into a Benchmark:

C.bench "kquicksort on short list" $ C.nf kquicksort [20, 10, 30]

When we have a list of benchmarks we can hand it to defaultMain, which makes an excellent main body for us and lets us configure the benchmarks from the command line without recompiling the program. Without further ado:

makeList :: Int -> [Int]
makeList n = QC.unGen (QC.vector n) (R.mkStdGen 42) 25

main :: IO()
main = do
  checkAll
  let sizes = [ 10000, 20000, 50000, 75000
              , 100000, 250000, 500000, 1000000]
  let inputs = [(n, makeList n) | n <- sizes]
  let benchmarks = [ C.bench (name ++ show n) $ C.nf sort ns 
                   | (n, ns)      <- inputs,
                     (name, sort) <- [("quicksort ",  quicksort),
                                      ("kquicksort ", kquicksort)]]
  C.defaultMain benchmarks

In the helper function makeList I have reused the generator framework from QuickCheck to generate some random test data for my benchmark. For simple integer lists like the ones here it is a bit of overkill, but for more complex inputs it can be nice.
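
Because defaultMain handles the command line for us, the compiled program can, for instance, run only the benchmarks whose names match a given prefix, or write a report to a file. The exact set of options varies between Criterion versions, so check the output of --help for your installation; something along these lines:

$ ./Quicksort quicksort
$ ./Quicksort --help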

The complete program

If you want to find out whether kquicksort really is faster than quicksort, here is the complete program.

import qualified Criterion.Main as C
import qualified Test.QuickCheck as QC
import qualified Test.QuickCheck.Gen as QC

import Data.List(foldl')
import qualified System.Random as R

quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (p:xs) = quicksort lesser ++ [p] ++ quicksort greater
    where
        lesser  = [ x | x <- xs, x < p]
        greater = [ x | x <- xs, x >= p]

kquicksort :: Ord a => [a] -> [a]
kquicksort []     = []
kquicksort (p:xs) = kquicksort lesser ++ equal ++ kquicksort greater
    where
        (lesser,equal,greater) = foldl' part ([],[p],[]) xs
        part (l,e,g) x =
          case compare x p of
            LT -> (x : l, e, g)
            GT -> (l, e, x : g)
            EQ -> (l, x : e, g)

sorted :: Ord a => [a] -> Bool
sorted (x1:x2:xs) = x1 <= x2 && (sorted $ x2:xs) 
sorted _          = True

prop_sorted :: Ord a => [a] -> Bool
prop_sorted xs = sorted $ kquicksort xs

prop_idempotent :: Ord a => [a] -> Bool
prop_idempotent xs = kquicksort xs == (kquicksort $ kquicksort xs)

prop_ordered :: Ord a => QC.OrderedList a -> Bool
prop_ordered (QC.Ordered xs) = xs == kquicksort xs

data Colour = Red | Green | Blue
            deriving (Eq, Show, Ord)

instance QC.Arbitrary Colour where
  arbitrary = QC.elements [Red, Green, Blue]

checkAll = do
  QC.quickCheck (prop_sorted :: [Int] -> Bool)
  QC.quickCheck (prop_sorted :: [Colour] -> Bool)
  QC.quickCheck (prop_idempotent :: [Int] -> Bool)
  QC.quickCheck (prop_ordered :: QC.OrderedList Int -> Bool)

makeList :: Int -> [Int]
makeList n = QC.unGen (QC.vector n) (R.mkStdGen 42) 25

main :: IO()
main = do
  checkAll
  let sizes = [ 10000, 20000, 50000, 75000
              , 100000, 250000, 500000, 1000000]
  let inputs = [(n, makeList n) | n <- sizes]
  let benchmarks = [ C.bench (name ++ show n) $ C.nf sort ns 
                   | (n, ns)      <- inputs,
                     (name, sort) <- [("quicksort ",  quicksort),
                                      ("kquicksort ", kquicksort)]]
  C.defaultMain benchmarks

Compile it with this command line:

$ ghc -O2 -W --make Quicksort.hs -o Quicksort

I Am Going to JAOO 2008 As An F# Expert

Microsoft Denmark has invited me to participate in JAOO 2008, if I in return spend some time at the Microsoft stand demoing F# and answering questions about F# and functional programming in general.

When Martin Esmann (Microsoft Academic Developer Evangelist) approached me, I told him that I’d be happy to show up, but that I’m not using F# from within Visual Studio. In fact, I’m usually not even using Microsoft’s .NET implementation; I’m a happy Mono user. To Martin’s credit, his initial reaction was that it would be cool if I demoed F# using Linux and Mono at the Microsoft stand.

However, after talking it over with his colleagues, Martin got back to me and told me that they thought that using Linux and Mono to demo F# would send too mixed a message. I agree with that. So unfortunately I won’t be demoing Mono at a Microsoft stand this time. But I’ll of course bring my own laptop, and I might show F# on Mono and Linux if there are questions about portability. Maybe I should contact the Mono guys and ask for a T-shirt, so that I can silently invite questions about Mono 😉

Time to practice some F# demos. Any suggestions for what would be effective? I’m planning to show off the brand new Units of Measure and also sequence comprehensions.

Decoding Morse Code With F# Comprehensions

In my last post I showed how to decode Morse code in Python using list comprehensions. In this post I show how to do it in F# instead.

First using list comprehensions:

let codes =
    [("A",".-");   ("B","-..."); ("C","-.-."); ("D","-.."); ("E",".");
     ("F","..-."); ("G","--.");  ("H","...."); ("I","..");  ("J",".---");
     ("K","-.-");  ("L",".-.."); ("M","--");   ("N","-.");  ("O","---");
     ("P",".--."); ("Q","--.-"); ("R",".-.");  ("S","..."); ("T","-");
     ("U","..-");  ("V","...-"); ("W",".--");  ("X","-..-");("Y","-.--");
     ("Z","--..")]
let rec decode input =
    if input = "" then [""]
    else [ for c, code in codes when input.StartsWith(code)
           for rest in decode(input.Substring(String.length code))
           -> c + rest ]

As can be seen, the code is almost identical to the Python code. Incidentally, I could not find a function equivalent to Python’s startswith method in the O’Caml standard library (without using regular expressions). Fortunately F# gets one from the .NET library.

Much to my surprise, the compiled F# program (running on Mono 1.2.4) is 4 times slower than the Python code. I then rewrote the program to use sequence comprehensions:

let rec decode input =
    if input = "" then { -> "" }
    else { for c, code in codes when input.StartsWith(code)
           for rest in decode(input.Substring(String.length code))
           -> c + rest }

This version runs faster and uses only a constant amount of memory. Still, the Python version is three and a half times faster.

I then tried to run the programs on another computer with Microsoft’s .NET implementation. This improved the F# running times a lot. However, they are still 40% to 80% slower than the Python version.

Chart of F# vs Python on Morse code decoding

My current guess is that Python is much better at handling strings than .NET.

The actual numbers:

Program        Linux, Mono              Vista, MS .NET
               Time (sec)      Ratio    Time (sec)      Ratio
morse.py        7.94 ± 0.05    1        11.03 ± 0.03    1
morse.fs       33.07 ± 0.19    4.17     19.65 ± 0.33    1.78
morse-seq.fs   28.1  ± 0.59    3.54     16.12 ± 0.36    1.46

Update 2007-11-12: Added test files.

Morse Code Decoding With Python List Comprehensions

As a small exercise for getting up to speed with Python I decided to solve Ruby Quiz #121, which is to write a function that finds all possible decodings of a string of Morse code without letter and word separators. Given the nature of the problem I decided to use Python’s list comprehensions for the solution.

Without further ado here is the code I ended up with:

#!/usr/bin/env python

letters = [('A',".-"),   ('B',"-..."), ('C',"-.-."), ('D',"-.."), ('E',"."),
           ('F',"..-."), ('G',"--."),  ('H',"...."), ('I',".."),  ('J',".---"),
           ('K',"-.-"),  ('L',".-.."), ('M',"--"),   ('N',"-."),  ('O',"---"),
           ('P',".--."), ('Q',"--.-"), ('R',".-."),  ('S',"..."), ('T',"-"),
           ('U',"..-"),  ('V',"...-"), ('W',".--"),  ('X',"-..-"),('Y',"-.--"),
           ('Z',"--..")]

def decode(input):
    if input == "" :
        return [""]
    else:
        return [ letter + remaining
                 for letter, code in letters if input.startswith(code)
                 for remaining in decode(input[len(code):]) ]

# Some Testing code
def test(s, code):
    if s in decode(code):
        print code + " can be decoded as " + s
    else:
        print code + " can NOT be decoded as " + s

test("SOFIA", "...---..-....-")
test("SOPHIA", "...---..-....-")
test("EUGENIA", "...---..-....-")

Interestingly, my solution is rather similar to Patrick Logan’s Erlang solution. And I find it simpler to understand than the Haskell solutions at the HaskellWiki.

Recursive Descent Parsers in C#

Peter Sestoft and I have written a note about how to write scanners and parsers in C#. The note is based on earlier versions for SML and Java.

The note contains a thorough introduction to grammars in Backus–Naur form (BNF). This includes a description of the properties your grammar should have so that it can be mechanically translated into a program, and some prescriptions for how to transform your grammar so that it gets the desired properties. In technical terms, the note describes how you can check that your grammar is an LL(1) grammar, and if it is not, we give you some tricks that will usually transform it into one.

The parsers you write using our method are recursive descent parsers. For the scanners, however, we just use an ad hoc method. Both parsers and scanners make good use of the .NET framework. For instance, the scanners create a token stream from a TextReader; hence, they can be used to scan both files and strings. Likewise, a token stream is represented as IEnumerable<Token>, and the scanners use yield to create this token stream lazily.

To give an example, the simplest scanner presented in the note is the following scanner:

using TokenStream = System.Collections.Generic.IEnumerator<Token>;

class ZeroOneScan : IScanner {
  public TokenStream Scan(TextReader reader) {
    while ( reader.Peek() != -1 ) {
      if ( Char.IsWhiteSpace((char) reader.Peek()) )
        reader.Read();
      else
        switch(reader.Read()) {
        case '-': yield return Token.FromKind(Kind.SUB); break;
        case '0': yield return Token.FromKind(Kind.ZERO); break;
        case '1': yield return Token.FromKind(Kind.ONE); break;
        default: throw new ApplicationException("Illegal character");
        }
    }
    yield return Token.FromKind(Kind.EOF);
  }
}

I find this use of yield quite elegant.

Working with this note and getting my name on it has special meaning to me. A precursor note for SML was actually the note Peter used when he taught me for the first time many years ago (in my second semester at university). Over the years, I have returned to the note many times when I have needed to parse a small language that did not warrant the use of a parser generator, or when a generated parser would have been inconvenient to use because the text to be scanned did not come from a file stream (modern parser generators will not generate parsers with this problem).

The note ends a bit too early, in my opinion. I would like to extend the note to cover Extended BNF, and I would also like to cover parser combinators. Well, one day when time permits…

ICFP Contest 2006, Team KFL

In 1967, during excavation for the construction of a new shopping center in Monroeville, Pennsylvania, workers uncovered a vault containing a cache of ancient scrolls. Most were severely damaged, but those that could be recovered confirmed the existence of a secret society long suspected to have been active in the region around the year 200 BC.

Based on a translation of these documents, we now know that the society, the Cult of the Bound Variable, was devoted to the careful study of computation, over two millennia before the invention of the digital computer.

Like last year, the prospects for my participation in the ICFP Contest were not looking good. None of my teammates from last year’s team seemed to be able to participate, and neither did I. The weekend of the contest was packed with family business. And on top of that, when the weekend arrived I was sick Friday night and Saturday.

However, Sunday evening I had some free time and I decided that I would take a crack at the contest, just to see what it was about. Judging from the discussion mailing list it sounded quite fun and interesting. The first phase of the contest task was to implement a 14-instruction virtual machine called UM; once that was running, you should use it to run the provided codex for the operating system UMIX.

So I registered my team KFL and started to implement my UM in SML. The first thing I did was to implement an instruction decoder that could translate a 32-bit word into an SML datatype. Then I wrote a function that reads in a file of 32-bit words, each encoded big-endian as four 8-bit words, and maps my decode function over the resulting Vector of words. For this task the SML Basis Library really shone:

fun readFile filename =
    let val dev = BinIO.openIn filename
        val all = BinIO.inputAll dev before BinIO.closeIn dev
        val words = Vector.tabulate(Word8Vector.length all div 4,
                                    fn i => Word32.fromLarge(PackWord32Big.subVec(all,i)))
    in  Vector.map decode words
    end

Time spent: 1 hour.

Unfortunately, this did not work. My decoding function failed after 1675 instructions or so, complaining about illegal instructions. And indeed, the 32-bit word it complained about did not seem to encode a legal instruction. I tried to reimplement the conversion from 8-bit words to 32-bit words, in case PackWord32Big worked differently than I thought. But I still got the same error. Thus, I gave up and went to bed.

Time spent: 2 hours.

Monday morning I had to see to some other things first, but then I had some time to spend on the contest. Even after I had slept on the problem I still couldn’t figure out what was wrong. So I asked my colleague Arne if he had 10 minutes to help me debug my program. I explained the problem to him, showed him my code (actually my debug output), and then we looked at the codex in a hex editor. He confirmed that, based on my explanation, my program appeared to be working correctly, and it looked as if there was an illegal instruction in the codex, if all instructions really were encoded as single 32-bit words. Hence, one or more of my assumptions had to be wrong (it was easy to rule out that the codex was wrong, because more than a hundred teams were able to run it). Then it occurred to me: the codex was not required to contain only valid instructions; maybe the code would jump over damaged parts of the codex, and part of the contest would be to repair the codex. Thus, I changed my code to only decode instructions on demand, and kept the whole program as an array of 32-bit words. Lo and behold, the machine was able to start running the codex! However, it failed the self-check the codex performed. After some debugging I found one place where I used the name of a register (registers in the UM are named by integers) as a value, rather than using the value contained in the register. Now my UM was able to run the codex and the SANDmark (a debug and benchmark suite provided by the contest managers).

Time spent: 2 hours.

My first version ran the SANDmark in a bit more than 18 minutes (14 min user and 4 min sys), 768 seconds user time according to MLton’s profiler. That was not too bad, but I’d seen on the discussion list that other participants had UMs that ran the SANDmark in a couple of minutes. Thus, I decided to profile my UM to see where the time was spent. To my surprise, the top function in the profile was my decode function, the function that takes a 32-bit word and translates it into an SML datatype. Here are the first few lines of decode, together with the helper function standardRegs that extracts the register names:

fun standardRegs w =
    let open Word32
        val A = (w << 0w23) >> 0w29
        val B = (w << 0w26) >> 0w29
        val C = andb(w, 0w7)
    in (toInt A, toInt B, toInt C)
    end

fun decode w =
    let open Word32
        val opr = w >> 0w28
    in case opr of
           0w0  => CMove(standardRegs w)
         | 0w1  => ARead(standardRegs w)
         | 0w2  => AWrite(standardRegs w)
         ...

And the top of my interpreter loop looked like this:

      while true do
             case spin() of
                 CMove(A,B,C) => if $C = 0w0 then ()
                                 else A <- $B
               | ARead(A,B,C) => A <- $$B sub (W32.toInt($C))
               | AWrite(A,B,C) => Array.update($$A, W32.toInt($B), $C)
               ...

Here spin is the function that reads the current word at the program counter, updates the program counter, decodes the word, and returns the instruction. But how could 19% of the time be spent in decode? I moved the call to decode from spin into my interpreter loop, to aid the MLton optimisers:

      while true do
             case decode(spin()) of

This made the SANDmark 5 minutes faster in wall-clock time, that is, 13 minutes, or 529 seconds in MLton profiler time. A 30% improvement just for moving a function around. Not bad.

Time spent: 30 minutes.

After this optimization my UM was fast enough that I thought I’d try to solve some of the puzzles. So I logged into the UMIX OS using the guest account and started to poke around and collect points. The first real puzzle was to fix a password cracker written in a weird Basic dialect that used Roman numerals instead of decimal notation for integer literals (including for the line numbers).

Time spent: 1½ hours. Points collected: 230.

Then I had to go home, and while I cooked dinner (I was baking pita bread, and while the dough was rising I had time to hack) I was able to write an improved password cracker in this weird Roman-numeral Basic: hack2.bas. This gained me another 100 points, just before the contest ended (at 18:00 CEST).

Time spent: 45 minutes. Total points collected: 330.

All in all, not bad to make 330 points after spending only seven hours and 45 minutes of rather fragmented time.

After dinner I was able to gain another 35 points by writing a list-reversal program in a graphical 2D language: rev.2d. It took half an hour or so.

The setup for the contest was absolutely amazing and most entertaining. My account of it here does not do it justice. An incredible amount of work must have gone into the preparation of it. I’m looking forward to the final debriefing from the Contest Organizers.

Yesterday, I tried for fun to optimize my UM program a bit more. Programs running on the UM are able to allocate and free arrays. In my original implementation I used a ref to a functional Red-Black tree to keep track of the mapping from UM pointers to arrays. I know, not the best choice of data structure, but I was just trying to get a “good enough” UM up and running. From the profile it was obvious that lots of time and memory were spent on keeping the Red-Black trees balanced. Thus, I replaced this code with an array, plus a free list for reusing UM pointers (32-bit words). My code for managing the “heap” went from 12 lines of code (not counting the code in the Red-Black tree library) to 28 lines of code. This small change made the SANDmark run in 4 minutes wall-clock time (175 seconds of MLton profiler time), an improvement of almost 67%. Looking at the profile, I could see that decode was again at the top of the list (using 42% of the time). Thus, I decided to inline decode and deforest the instruction datatype by hand. This made my code 68 lines smaller, and the SANDmark ran in 2.50 minutes (134 seconds of MLton profiler time), a 23% improvement. Almost four times faster than the UM I participated in the contest with. Time spent: 1½ hours.

  • Code for the UM I participated with: um2.sml
  • Code with improved heap handling: um3.sml
  • Code with inlined decode function: um4.sml

Refactoring SML Quiz, Part 2

The answer to yesterday’s quiz is: yes, types are necessary for the lambda-lifting refactoring. Namely, when the lifted function contains an overloaded operator such as +.

For example, given the program:

fun foo x =
    let fun add y = x + y
    in  add 5.0 end

where we want to lift the function add. Our first attempt might be to transform the program into something like the following (remember that refactoring is a source-code-to-source-code transformation):

fun add x y = x + y
fun foo x = add x 5.0

However, this might not compile with every SML compiler. It depends on how much local context the compiler uses to resolve overloading.

Moscow ML compiles the transformed program above just fine. But Moscow ML complains about the following version:

fun add x y = x + y;
fun foo x = add x 5.0

Notice the extra semicolon after the declaration of add. With the semicolon, add is elaborated on its own, so the overloaded + defaults to int, and the application add x 5.0 then fails to type-check.

Update 2005-12-22:
Just for good measure, here is one possible result of the refactoring with the necessary type information:

fun add (x : real) y = x + y;
fun foo x = add x 5.0

I believe that only captured variables need to be typed.

Also note that overloading is not the only problem. It is possible to construct a similar example using records, but I’ll leave that to the reader (unless there is popular and desperate demand, in which case I can provide an example).

Refactoring SML Quiz, Part 1

Yesterday, I discussed with some students who are implementing an SML plug-in for Eclipse whether types are necessary for a lambda-lifting refactoring for SML.

So today’s quiz is simply: Are types necessary for lambda-lifting refactoring in SML? Why/Why not?

Remember, the refactoring works on valid SML programs, and after the transformation the program should still be valid.

Example:

fun foo x =
    let fun bar y = (x,y)
    in bar x end

is transformed/refactored to

fun bar x y = (x,y)
fun foo x = bar x x

I’ll give the answer tomorrow.

Implementing the generic IEnumerable interface

Say you want to implement a class that implements the IEnumerable interface in C#. Then you have two choices: either you implement the old-style non-generic IEnumerable interface, or you implement the generic IEnumerable<T> interface. Given those choices we of course want to implement the new generic version of the interface: otherwise lots of boxing and unboxing will happen when T is a value type, with the non-generic version we will not be able to conveniently use the Current property of our enumerator, and generics give us general type-safety goodness.

Thus, you set out to implement the generic version. For example, let’s say we want to implement a class that enumerates all the integers starting from a given offset. First we might try this:

using System.Collections.Generic;
class Ints : IEnumerable<int> {
    private readonly int offset;
    public Ints(int o) { offset = o; }
    public IEnumerator<int> GetEnumerator() {
        int i = offset;
        while( true ) yield return i++;
    }
}

But then the compiler complains:

error CS0535: 'Ints' does not implement interface member 'System.Collections.IEnumerable.GetEnumerator()'

Thank you for letting us know, nice compiler. But we don’t want Ints to implement the non-generic interface. We want it to implement the generic interface.

Reading up on the documentation reveals that the generic interface inherits from the non-generic interface. What a wonderful design. Thankfully, we can make a general workaround for this design flaw in the library: just add a non-generic method that calls the generic method:

    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { return GetEnumerator(); }

Once the author of my favorite C# book returns from vacation, I’ll complain to him that the section about IEnumerable didn’t make it clear that if you want to implement the generic interface you also have to implement the non-generic interface.

Update: It turns out that the errata for C# Precisely already mention this curiosity.

Team “What A Summer Party”, Round 1

I have a hangover.

Not from partying, but from lack of sleep and bad nutrition. The contest was lots of fun, even though the end result for our team was rather disappointing. Below is the description I wrote for our team.

The Story Of Team “What A Summer Party”

Round 1

The prospect for the ICFP contest was looking bleak. The net was down and the annual faculty summer party started an hour before the contest. Thus, Ken, Martin, and Henning went to the party and soon forgot all about the contest. The BBQ was sizzling hot and the beers were cold, life was good.

Then suddenly Carsten crashed the party and dumped a huge pile of print-outs with the frightening title “Cops & Robbers, The ICFP 2005 Programming Contest Task”. From that moment the easy life stopped. We started to gulp down huge amounts of candy, chips, and various beverages while we discussed strategies and suggestions for implementation details. Henning had tried to refuse to participate on the team, giving hollow-sounding excuses like a 3-week-old baby, twins with ear infections, and exam preparation. But he was sucked in by the exciting discussion, and after a frantic programming session where he tried to hammer out a parser in zero time, his laptop lost the keys “P” and “L”. The rest of us piled useful library modules into the repository.

Saturday, none of us spent much time on the contest. Instead, we spent some quality time with our families, travelled to the other end of the country to party with old friends from high school, and attended unavoidable social obligations. Henning, against all expectations, made a surprise virtual appearance and offered an almost working parser; this gave him a seat on the team, even though it was against his wishes.

Sunday, we realised that we had neither a working cop nor a working robber. We worked hard and were able to make our first submission with a working (but non-moving) cop and robber. A few hours after the submission the first working robber appeared. This robber was looting the banks like crazy (easy, because the cops still didn’t move) and there was much rejoicing.

Monday, we just panicked. We managed to put all the pieces for the cop together, and made the final submission just two minutes after the deadline (which, according to the contest mailing list, should be acceptable). However, there was no end to the disappointment when we realised that the fancy cop we had assembled at the last minute still didn’t move. After half an hour we had a stupid cop who just walked from bank to bank (making illegal moves along the way, but as we have learned from the movies, that is OK for a cop to do as long as he has good intentions). This rather dense cop still managed to catch our super robber. As I said, there was no end to the disappointment.

It is a good thing that there is a round two in two weeks….

The team consists of the following persons:

  • Ken Friis Larsen (Release Manager and Storyteller)
  • Martin Elsman (Protector of Good and Right)
  • Carsten Varming (Evil Mastermind)
  • Henning Niss (Cheerleader and Parser constructor)

Thanks for a fun weekend guys.

Review: The Inmates are Running the Asylum

I have seen Alan Cooper’s book The Inmates Are Running The Asylum recommended in many places, and I finally got around to reading it myself.

Overall I liked the book. It is not too long, and Alan Cooper is a really good and entertaining writer. However, he spends almost half the book ranting about programmers at large and how bad programmers are at designing user interfaces. But if you are able to take the excessive ranting as entertainment in the style of old-school stand-up comedy (at least, that is how I took it), then Alan Cooper offers some good insights.

The main insights I took with me from the book are:

  • Design Matters Design is one of the pillars of what makes users (customers) lust for your product and makes them super-loyal. Design does not relate solely to visual design but also to interaction design. The prime example of a master of design is Apple. Another example is Palm PDAs: for years they were technically behind the Windows CE based PDAs in both hardware and software capabilities. Still, the Palms were able to hold 70% of the market or so, mainly because the interaction design on the Palms was much better. (I don’t know the current status of the PDA market, as I no longer use a PDA.)
  • Homo Logicus Programmers are different from most “normal” people, so different that they are a different species from Homo sapiens. One of the main traits of Homo logicus is that they want control and accept complexity as the trade-off, whereas Homo sapiens want simplicity and accept less control as the trade-off. I think this is important to remember, as a Homo logicus, when you make programs for Homo sapiens.
  • Goal-Directed Design and Personas For me, the most useful part of the book is that Alan Cooper describes a method for making good interaction design. Part of this method is that you should define a set of personas who are the ones you design the program for; usually there will be just one main persona for which you optimise your design. Personas are fictive persons, often precisely described: age, gender, profession, favourite ice-cream flavour, etc. Personas make perfect sense to me: just as most (non-fiction?) writers write to a specific person, it makes the process easier and gives better results. The goal-directed part means that the design should take its starting point in the personas’ goals, not in a specific process or task. Again, this makes perfect sense. Starting with a persona’s goals can lead to untraditional and better solutions. Likewise, being goal-directed also makes it easier to make judgement calls about what functionality should be (easily) available in the program, so that it is the functionality the user needs to reach her goal that is available, and not a dog’s breakfast of all the functionality in the program. Remember, the Homo sapiens are in the majority.
  • The Elastic User If you don’t have personas, then it is extremely easy as a programmer to make “the user” elastic. This means that sometimes you’ll take extra care and make a context-sensitive help system with carefully written pedagogical texts, for instance, because otherwise “the user” can get lost and confused. Later you decide that “the user” will want to be able to customise the colours and fonts of each individual UI element. The reason for the elasticity of “the user” is that when you don’t design for a specific persona, “the user” has to play the role of many different groups of users, and it is easy to lose track of who you are designing for.

For me these are the main insights (that I can remember now), but there is lots more good stuff in the book, and I can wholeheartedly recommend it.

GADTs in C++

My good friends Claudio Russo and Andrew Kennedy have been kind enough to send me a draft paper about Generalized Algebraic Data Types (GADTs) and object-oriented programming. GADTs generalize the datatypes of ML and Haskell by permitting constructors to produce different type instantiations of the same datatype. One of Andrew and Claudio’s examples is a slightly modified version of Peter Sestoft’s example of representing abstract syntax trees for typed expressions in Generic C#. To get a better feeling for the difference between the generics in C#/Java and the generics in C++, I have rewritten the example in C++.

To recap the example: we want to represent abstract syntax trees for a tiny language of simple expressions. Furthermore, we want the representation to be type safe. That is, we do not want to be able to represent the nonsense expression “true + 42“. We shall represent the abstract syntax trees in the standard OO way, with an abstract base class Exp for the general type of expressions and a concrete subclass for each node type. To enforce the type safety of our embedded language we use generics. That is, a value of type Exp<R> is an abstract syntax tree for an expression that evaluates to a value of type R.
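
As a point of reference before we dive into C++: this is exactly the kind of constraint a GADT expresses directly. Here is a sketch of the example as a Haskell GADT; this is my own illustration and is not taken from the draft paper:

{-# LANGUAGE GADTs #-}

data Exp r where
  Lit  :: r -> Exp r
  Plus :: Exp Int -> Exp Int -> Exp Int
  Cond :: Exp Bool -> Exp r -> Exp r -> Exp r

eval :: Exp r -> r
eval (Lit v)      = v
eval (Plus l r)   = eval l + eval r
eval (Cond c t f) = if eval c then eval t else eval f

Note how the Plus constructor produces only values of type Exp Int; that is the “different type instantiations of the same datatype” mentioned above.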

First the base class.

Peter’s Generic C# version:

abstract class Exp<R> {   // R = result type
  abstract public R eval();
}

C++ version:

template< typename R >  // R = result type
class Exp {
public:
  virtual R eval() const = 0;
  virtual ~Exp() {}
};

Note that in C++ we need to remember to define a virtual destructor, because we plan to inherit from Exp and to delete dynamically allocated subclass objects through pointers to Exp.

Next up is the class for literals.

Peter’s Generic C# version:

class Lit<R> : Exp<R> {
  private readonly R v;
  public Lit(R v) {
    this.v = v;
  }
  public override R eval() {
    return v;
  }
}

C++ version:

template< typename R >
class Lit : public Exp<R> {
public:
  virtual R eval() const {
    return val;
  }
  explicit Lit(R val_) : val(val_) {}
private:
  R const val;
};

No surprises here.

But for binary expressions like plus, for instance, it starts to get a bit more tricky. Here I’ll use a simpler version of the class for representing plus nodes than the version Peter uses.

Generic C# version:

class Plus : Exp<int> {
  private readonly Exp<int> lhs, rhs;
  public Plus(Exp<int> lhs, Exp<int> rhs) {
    this.lhs = lhs; this.rhs = rhs;
  }
  public override int eval() {
    return lhs.eval() + rhs.eval();
  }
}

First cut at a C++ version:

class Plus : public Exp<int> {
public:
  virtual int eval() const {
    return lhs.eval() + rhs.eval();
  }
  explicit Plus(Exp<int> const lhs_, Exp<int> const rhs_)
    : lhs(lhs_), rhs(rhs_)
  {}
private:
  Exp<int> lhs, rhs;
};

Alas, this is not valid C++: we are not allowed to declare fields and parameters of type Exp<int>, because this class has an abstract virtual function, namely eval (and besides, if we could, for example by modifying the class Exp, the code would not do what we expect). These problems have nothing to do with the generics in C++; they are just the usual C++ OO quirks. The solution is straightforward: we must use a pointer to values of type Exp. But then we have to make decisions about ownership: should we make a deep copy of subexpressions? Should we share subexpressions, and if so, who should take care of freeing resources? Should we use reference counting? Or what? Here we shall just use one of the wonderful smart pointers from Boost: shared_ptr, which will take care of the reference counting and deletion of expressions. Thus, first we make a couple of convenient typedefs for integer expressions and shared pointers to integer expressions (and similarly for boolean expressions):

typedef Exp< int > IntExp;
typedef boost::shared_ptr< IntExp > IntExpPtr;

Then the class for plus nodes looks like this:

class Plus : public IntExp {
public:
  virtual int eval() const {
    return lhs->eval() + rhs->eval();
  }
  explicit Plus(IntExpPtr const & lhs_, IntExpPtr const & rhs_)
    : lhs(lhs_), rhs(rhs_)
  {}
private:
  IntExpPtr lhs, rhs;
};

and a wrapper for constructing shared pointers for new plus expressions:

IntExpPtr plus(IntExpPtr const & lhs, IntExpPtr const & rhs) {
  return IntExpPtr( new Plus( lhs, rhs ) );
}

Similarly to the Plus class, we can define a class for representing conditional expressions:

Generic C# version:

class Cond<R> : Exp<R> {
  private readonly Exp<bool> cond;
  private readonly Exp<R> truth, falsehood;
  public Cond(Exp<bool> cond, Exp<R> truth, Exp<R> falsehood) {
    this.cond = cond; this.truth = truth; this.falsehood = falsehood;
  }
  public override R eval() {
    return cond.eval() ? truth.eval() : falsehood.eval();
  }
}

C++ version:

template < typename R >
class Cond : public Exp<R> {
public:
  typedef boost::shared_ptr< Exp<R> > SubExpPtr;
  virtual R eval() const {
    return cond->eval() ? truth->eval() : falsehood->eval();
  }
  explicit Cond(BoolExpPtr const & cond_,
                SubExpPtr const & truth_,
                SubExpPtr const & falsehood_) :
    cond(cond_), truth(truth_), falsehood(falsehood_)
  {}
private:
  BoolExpPtr cond;
  SubExpPtr truth, falsehood;
};

The interesting things to note about this class are that Cond contains subexpressions of different types, that Cond is polymorphic in the result type, that Cond enforces that the two branches of a conditional expression must have the same type, and that only one of the branches is evaluated, depending on the evaluation of the condition.

Using these classes (plus a lit wrapper for literal nodes, defined analogously to plus) we can define a function that builds an expression for calculating the n’th Fibonacci number:

IntExpPtr fib(int n) {
  IntExpPtr fib0( lit(0) ), fib1( lit(1) );
  switch ( n ) {
  case 0: return fib0;
  case 1: return fib1;
  default:
    for(int i = 1; i != n; ++i) {
      IntExpPtr tmp( fib1 );
      fib1 = plus( fib0, fib1 );
      fib0 = tmp;
    }
    return fib1;
  }
}

This function builds an expression that takes linear, O(n), space, but it will take exponential time, O(2^n), to evaluate.

Conclusion
Unsurprisingly, this GADT example can be translated more or less straightforwardly from Generic C# to C++. The things that are a bit tedious are just the normal C++ oddities and have nothing to do with generics. In fact, we would have had the exact same problems without generics.

Next up is to try to translate a few more of Claudio and Andrew’s examples. The statically typed printf and the typed LR parsing look nifty.

Review: The Art of Interactive Design

I have read Chris Crawford’s book The Art of Interactive Design: A Euphonious and Illuminating Guide to Building Successful Software. This book was quite an eye-opener for me. Not that I completely agree or disagree with Chris Crawford, but mostly because it helped me articulate some of my own thoughts. Also, while Chris Crawford writes about the design of computer interaction, I think he explains many useful concepts that can also be used for programming-language design and library-interface design.

Chris Crawford’s main message with the book is that a new kind of “programmer” is needed. Namely, interaction designers. And he thinks that the interaction designers should be recruited from the Arts and Humanities.

Some of my favorite concepts from the book are:

  • The Definition of Interaction:
    interaction: a cyclic process in which two actors alternately listen, think, and speak.

    Or, in more computer-science jargon, we can replace listen, think, and speak with input, process, and output. I think this definition is spot on, and it has helped me think about interaction problems. And as with humans, a polite and pleasant computer program is thoughtful, listens carefully, and doesn’t hold back important information.

  • A Criterion for Interactive Excellence: Crawford gives a nice, succinct formula for what constitutes excellent interaction:
    interactive excellence = accessible states / conceivable states

    While the formula does not make sense from a strictly mathematical, hair-splitting point of view, I think it succinctly describes what I have found frustrating (but haven’t been able to articulate) about some computer games and a certain office suite: namely, that I get some idea about what ought to be possible (conceivable), but for some reason I couldn’t make the damn program do it (not accessible).

  • Colour-to-dirt ratio: A concept originating from game design. Every new feature that a game designer adds to a game increases the game’s colourfulness. But every new feature also entails a certain amount of bureaucratic dirt. And a good game designer will ask herself: “Is the colour I’m adding with this new feature worth all the dirt it brings in?” The example Chris Crawford gives is adding stock speculation to Monopoly. An example in programming-language terms could be considering adding call-cc to SML. This would definitely add colourfulness to the language; lots of features would become possible to add as libraries, for example lightweight threads. However, call-cc also adds lots of “dirt”; certain optimisations are no longer valid, for instance.
  • Educational software should be based on simulation: I could not agree more. For many topics, simulation is a really good way of helping the learner formulate her own theory about how something works. This is one of the reasons I work at Laerdal Sophus.

What I don’t like about the book is that Chris Crawford beats some points to death, back to life, and then to death again. In particular, I think he completely overdramatises the two-culture problem, which is the conflict between Arts/Humanities (AH) and Science/Engineering (SE). Perhaps it is because I’m an SE male who has all the power and is the source of the problem (according to Chris Crawford) and not part of the solution. Perhaps it is because the two-culture problem is bigger in the US than in Denmark, which I do think is the case. I have seen references to this conflict in several US movies and series, and had sometimes wondered about it before I saw the problem described for the first time in Chris’s book. However, I think it is poor that the only suggestion Chris Crawford has for the SE male who wants to make successful interaction design is to show more interest in more kinds of art. I guess I just don’t get it.

Nevertheless, I really like the book and Chris Crawford’s in-your-face writing style. A sample of Chris Crawford’s writing can be found on GameDev.net as a featured article. The sample is a chapter from one of his other books on computer game design; it gives a good illustration of his writing style, and is a good read to boot.

[I feel I have (had?) much more to say about The Art of Interactive Design, but this review has been sitting in my Drafts queue for almost two months now. So here you have it. Maybe there will be a part two later…]

SQLite for Moscow ML

Stop The Press! Henning has started to make a binding of SQLite for Moscow ML. This is absolutely great news. I’m looking forward so much to playing with this binding (hint, hint, Henning).

At work we are using SQLite with great success. I think that SQLite fills an important, but somewhat overlooked, niche: a small, efficient, and easy to embed relational database with fairly complete support for SQL.

I hope that Henning plans to release it under a license that makes it possible to use the binding in a closed-source program…

Together with mGTK this binding will open up lots of interesting possibilities. For instance, Henning talked about using the binding for poking into the Beagle database. Nifty stuff.

Update 2005-01-28: Henning has given me access to his darcs repository for the binding.

Lambda-DAG

Yesterday I attended a talk by Olin Shivers about a clever way of representing lambda terms as DAGs.

Olin Shivers talking about representing Lambda terms as DAGs
Olin talking about uplinks. (Yes, I mainly took the picture to test my new camera mobile phone.)

It was a nifty technique Olin presented: clever but simple. After the talk you had the feeling that you completely understood the technique, and it seemed useful for a large class of problems. The main drawback is that the data structure is ephemeral (that is, not persistent), which means that an update destructively modifies the term.

My backup script works

Earlier I wrote about my home-rolled backup script. Yesterday, I got the unfortunate opportunity to test for real whether the script worked as intended, as my home partition’s filesystem was corrupted.

I’m happy to report that the script worked perfectly. I got all my data back without any loss 😀 The thing that took the longest was convincing myself that no data was lost and that the backup really was better than the filesystem that had been fixed by fsck.

The only thing nagging me is that the corruption appears to have happened right after I took the backup. I just hope that my script isn’t to blame. But how could it be? I run it as a normal user…

That is the Xmas spirit

Today Maria took care of most of the Xmas presents for the children we know. We left the presents in the trunk of our car while we had dinner, so that Kamille wouldn’t see them. After Kamille was put to bed I went out to get the presents. What I found was our car with the trunk window smashed and no presents. Nice.

The burglars only got a bunch of children’s presents, mostly for younger children. Thus, they didn’t get anything of high value; they even left the car radio.

Now we probably won’t have time to go out and buy new presents and send them to our friends overseas in time for Xmas. Way, way, way annoying. And we are not looking forward to all the insurance hassle and so on.

Search Engines

Would it not be great if Citeseer was more like Amazon?

Don’t get me wrong: I think that Citeseer is an extremely valuable tool, and I would not like to be without it. I just think that it has the potential to be much more useful. Imagine how nice it would be if, when you looked at a paper, you also got a small section “Researchers who found this paper interesting also liked:“. Likewise, it would be nice if you could get a page of recommendations after you had browsed around for a while. Of course, Amazon has better data, because they can more reliably track whether people liked an item (if “like” is defined as “bought” :-)), and I don’t have much confidence in rating systems.

While on the subject of search engines: I’ve played around with Google Scholar. I don’t think it can replace Citeseer yet, but it is a great supplement. It is also interesting to compare the different results you get from Citeseer and Google Scholar when you feed them the same search phrase. Based on a few tests, it looks like Google Scholar is better for finding interesting papers, which you can then feed into Citeseer to get more details. Anyway, I’m looking forward to using Google Scholar in anger for my next paper.

A Unicode editor for Michael

Michael claims that he does not have an editor that can handle Unicode.

Thus, Henning and I whipped up an editor using mGTK that can handle Unicode. Oh, and did I mention that you can compile it with either Moscow ML or MLton without changing the source?

The real story is of course that we were trying to build a “real” application using mGTK, and the editor example shows that the gtk+ widgets handle Unicode fine. I’m less sure that SML handles Unicode “fine”: String.size does not return the number of characters but the number of bytes. But at least TextIO does not mess up the bytes (in Moscow ML at least; I didn’t test with MLton).

Oh, and I used file and gedit to check that the file I saved really was Unicode.

More mGTK Progress

Henning and I have been working hard (leisure time hard, that is) on mGTK since my last mGTK progress report. We have ported all the interesting Gtk# examples from Mono: A Developer’s Notebook Chapter 4.

screenshot of four mGTK examples

Currently the only missing bits are better support for the lower levels of the library stack, most notably GDK, but also GLib, ATK, and Pango. This means that there are no pretty monkeys, and the drag-and-drop example does not compile yet. The source code for the examples can be found in the mGTK CVS.

Next up is polishing the bindings generator for MLton, which currently does not work as well as the Moscow ML generator. The only thing missing is support for out-arguments. However, Henning is on the case, so the problems should be solved within days(?!?). A release is sooo close. That feels really good, because it has taken three years to come this far.

callcc for Moscow ML

The last few days I have tried to recreate my implementation of callcc for Moscow ML. My original (incomplete) implementation was lost last fall when my old laptop was stolen (or it is still lurking somewhere in the mess which is my old backup system).

The code almost works. For example, the following code, which just ignores the continuation value k:

3 + callcc (fn k => 2 + 1);

Gives 6, as it should.

However, the following more interesting example, which actually uses the continuation value:

3 + callcc (fn k => 2 + throw (k, 1));

Gives 67299815 and not 4. Ouch!

Likewise, given the following declaration of multiply:

fun multiply ints =
    callcc (fn ret =>
    let
        fun mult nil = 1
          | mult (0::_) = throw (ret, 0)
          | mult (n::ns) = n * mult ns
    in
	mult ints
    end)

The expression:

multiply [1,2,3,4,0,5];

evaluates to 67299810 and not 0. Double ouch!!

My guess is that I somehow mess things up when I restore the stack, and some pointer ends up on top of the stack. But I’m too tired to debug this further tonight.

One nifty feature of my code (if I can get it to work) is that it is implemented using the Dynlib library, which means that no changes are needed to the original Moscow ML code; it is all in a separate library.

The birth of Naja

A bit delayed, but here is the account of Naja’s birth.

Maria started to have contractions Monday (the 13th) morning. While the contractions were painful, they were also short in duration and irregular in frequency. In the evening Kamille showed the first symptoms of chickenpox.

Tuesday morning Kamille was showing quite a few chickenpox spots. Maria continued to have irregular and short contractions during the day.

The night before Wednesday was horrible: Maria’s contractions got worse, and so did Kamille’s chickenpox. Thankfully, my parents were able to come and pick up Kamille in the morning.

At 11 o’clock Wednesday (the 15th) the water broke and the contractions got worse, longer in duration, and more frequent. At 12:30 we went to the hospital, and at 15:10 Naja was born. The birth went well with no complications, and the midwife and the midwife student were really nice.

Naja latched on to Maria

Naja latched on to Maria without any problems and started to suck away immediately, within 10 minutes of the birth. Naja’s weight was 3806 g and she was 54 cm long.

Naja and Maria

At six o’clock Kamille, our parents, and our siblings came to welcome Naja to the world. Kamille demanded right away to hold Naja, and that wish was granted.

Kamille and Naja

Maria, Naja, and I spent the night at the hospital in a so-called “family room” (Kamille wasn’t allowed on that ward because of the chickenpox). At 9:30 Thursday morning we left the hospital, and I picked Kamille up at my parents’. Despite the chickenpox, Kamille had an enormous reserve of love for her little sister. The first few days were a bit stressful because of the chickenpox (Naja was an angel). On Saturday the chickenpox finally got better, and so did family life.

Kamille kissing Naja

I have uploaded an album with 43 pictures from the birth.