Inside 206-105

Existential Pontification and Generalized Abstract Digressions

The AST Typing Problem

This Lambda the Ultimate post (dated 2010) describes a rather universal problem faced by compiler writers: how does one go about adding “extra information” (e.g. types) to an AST? (The post itself divides the problem into three components: adding the information to the data types, using the information to inform the construction of a node, and using the information to inform the destruction of a node—but I’m really only interested in the question of how you define your data type, not how you operate on it.) In this post, I want to sum up the ways of solving the problem that were described in that post, and also take a look at what some real-world compilers do. The running example lambda calculus looks like the following:

type Var = String  -- assumed representation for variable names

data Exp = Num Int
         | Bool Bool
         | Var Var
         | If Exp Exp Exp
         | Lambda Var Exp
         | App Exp Exp
data Type = TyInt | TyBool | TyArrow Type Type

Separate IR where nodes are decorated with types

The low-tech solution: if you need a new version of the IR with more information, just define a new IR type where each node can also carry the information. A trick to make these definitions more concise is to make a mutually recursive data structure. [1]

type TExp = (TExp', Type)
data TExp' = TNum Int
           | TBool Bool
           | TVar Var
           | TIf TExp TExp TExp
           | TLambda Var TExp
           | TApp TExp TExp

Despite (or perhaps because of) its simplicity, this approach is extremely popular among many compilers, especially in the ML community. A few examples include OCaml (parsetree/typedtree), MLton (AST/CoreML) and Ikarus Scheme. Part of the reason for this is that the transition from the frontend language to the typed language usually comes with other changes as well, and when a new AST is defined, those changes can be folded in too.

Nullable field

The unprincipled solution: use one AST, but have an optional field in which you can slot in the information. [2]

type TExp = (TExp', Maybe Type)
data TExp' = TNum Int
           | TBool Bool
           | TVar Var
           | TIf TExp TExp TExp
           | TLambda Var TExp
           | TApp TExp TExp

Presented without further comment.

Explicit typing

While closely related to the separate IR solution, an explicitly typed IR takes the approach of not decorating each node with a type, but arranging that the type of any given node can be quickly computed using only local information. [3]

data TExp = TNum Int
          | TBool Bool
          | TVar Var
          | TIf TExp TExp TExp
          | TLambda Var Type TExp
          | TApp TExp TExp

Here, the difference between TExp and Exp is very slight; the TLambda is annotated with an explicit type for the binder. As far as type-checking is concerned, this makes a world of difference: we no longer need to look outside a lambda to figure out what the type of the binder could be.

Forcing your IR to be explicitly typed is often a good idea for metatheoretic reasons, as complicated type systems often don’t have decidable inference algorithms. GHC’s Core IR, Ur/Web’s core language, and Coq are all explicitly typed in this way.
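
To make “local” concrete, here is a minimal sketch of how type computation over this IR might look; Context and lookupVar are hypothetical helpers, not from the post:

-- Computing the type of an explicitly typed term requires only a
-- variable context: no inference, no unification.
type Context = [(Var, Type)]

lookupVar :: Var -> Context -> Type
lookupVar v ctx = maybe (error ("unbound variable: " ++ v)) id (lookup v ctx)

typeOf :: Context -> TExp -> Type
typeOf _   (TNum _)         = TyInt
typeOf _   (TBool _)        = TyBool
typeOf ctx (TVar v)         = lookupVar v ctx
typeOf ctx (TIf _ t _)      = typeOf ctx t  -- both branches share a type
typeOf ctx (TLambda v ty e) = TyArrow ty (typeOf ((v, ty) : ctx) e)
typeOf ctx (TApp f _)       = case typeOf ctx f of
                                TyArrow _ res -> res
                                _             -> error "application of non-function"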

Two-level types

By deferring when you tie the knot of a recursive data-structure, you can arrange for the base functor to do double-duty for the untyped and typed representations. [4]

data ExpF a = Num Int
            | Bool Bool
            | Var Var
            | If a a a
            | Lambda Var a
            | App a a
newtype Exp = Exp (ExpF Exp)
newtype TExp = TExp (ExpF TExp, Type)

The Coq kernel uses this to define its expression type, although it doesn’t use it to define an untyped variant.
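
One payoff of the shared base functor is that a single Functor instance supports traversals over both variants. A small sketch (eraseTypes is my own example, not from the post):

instance Functor ExpF where
  fmap _ (Num n)      = Num n
  fmap _ (Bool b)     = Bool b
  fmap _ (Var v)      = Var v
  fmap f (If c t e)   = If (f c) (f t) (f e)
  fmap f (Lambda v e) = Lambda v (f e)
  fmap f (App e1 e2)  = App (f e1) (f e2)

-- Forgetting the type decorations is then a one-line recursive fold:
eraseTypes :: TExp -> Exp
eraseTypes (TExp (e, _ty)) = Exp (fmap eraseTypes e)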

(Lazy) Attribute grammars

I don’t claim to understand this approach too well, but essentially it is a programming model, distinct from the usual algebraic data types, which associates attributes with nodes of a tree. In some sense, it can be thought of as a memoized function from AST nodes to the attributes. Many compilers do utilize maps, but only for top-level declarations. [5]
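
I can at least illustrate the map-based approximation mentioned above: label each node, then store attributes out-of-band, reusing the ExpF base functor from the previous section. This is not a real attribute grammar; NodeId, LExp, and TypeMap are names I have made up:

import qualified Data.Map as Map

type NodeId = Int

-- Every node carries a unique label...
data LExp = LExp NodeId (ExpF LExp)

-- ...and the "type" attribute lives in a separate table.
type TypeMap = Map.Map NodeId Type

typeAttr :: TypeMap -> LExp -> Maybe Type
typeAttr attrs (LExp i _) = Map.lookup i attrs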

Closing remarks

There were a few things that I did not mention here which came up in the discussion. One participant suggested using polymorphic variants to define the data type; this doesn’t help much with adding extra information, but it allows for different ways of writing traversal functions. Indeed, traversal is one of the big concerns, and the mention of generic programming is also targeted at this problem.

As for my preference? It’s hard to say. I’ve worked with compilers mostly written in the “define a new IR style”, and while the initial outlay of defining two data structures is quite annoying, it is mostly a fixed cost. What’s yours?

Also, a question. Is there a presentation of the conventional set of annotations needed to get explicitly typed System F?

Anatomy of an MVar operation

Adam Belay (of Dune fame) was recently wondering why Haskell’s MVars are so slow. “Slow?” I thought, “aren’t Haskell’s MVars supposed to be really fast?” So I did some digging into how MVars work, to see if I could explain.

Let’s consider the operation of the function takeMVar in Control.Concurrent.MVar. This function is very simple: it unpacks the MVar constructor to get the underlying MVar# primitive value, and then calls the primop takeMVar#:

takeMVar :: MVar a -> IO a
takeMVar (MVar mvar#) = IO $ \ s# -> takeMVar# mvar# s#

The takeMVar# primop results in the invocation of stg_takeMVarzh in PrimOps.cmm, which is where the magic happens. For simplicity, we consider only the multithreaded case.

The first step is to lock the closure:

("ptr" info) = ccall lockClosure(mvar "ptr");

Objects on the GHC heap have an info table header which indicates what kind of object they are, by pointing to the relevant info table for the object. These headers are also used for synchronization: since they are word-sized, they can be atomically swapped for other values. lockClosure is in fact a spin-lock on the info table header:

EXTERN_INLINE StgInfoTable *lockClosure(StgClosure *p)
{
    StgWord info;
    do {
        nat i = 0;
        do {
            info = xchg((P_)(void *)&p->header.info, (W_)&stg_WHITEHOLE_info);
            if (info != (W_)&stg_WHITEHOLE_info) return (StgInfoTable *)info;
        } while (++i < SPIN_COUNT);
        yieldThread();
    } while (1);
}

lockClosure is used for some other objects, namely thread state objects (stg_TSO_info, via lockTSO) and thread messages, i.e. exceptions (stg_MSG_THROWTO_info, stg_MSG_NULL_info).

The next step is to apply a GC write barrier on the MVar:

if (info == stg_MVAR_CLEAN_info) {
    ccall dirty_MVAR(BaseReg "ptr", mvar "ptr");
}

As I’ve written before, since the MVar is a mutable object, it can be mutated to point to objects in generation 0; thus, when a mutation happens, it has to be added to the root set via the mutable list. Since the mutable list is per-capability, this boils down to a bunch of pointer modifications, and does not require any synchronization. Note that we need to add the MVar to the mutable list even if we end up blocking on it, because the MVar is a retainer of the thread (TSO) which is blocked on it! (However, I suspect in some cases we can get away with not doing this.)

Next, we case split on whether the MVar is full or empty. If the MVar is empty, we need to block the thread until the MVar is full:

/* If the MVar is empty, put ourselves on its blocking queue,
 * and wait until we're woken up.
 */
if (StgMVar_value(mvar) == stg_END_TSO_QUEUE_closure) {

    // We want to put the heap check down here in the slow path,
    // but be careful to unlock the closure before returning to
    // the RTS if the check fails.
    ALLOC_PRIM_WITH_CUSTOM_FAILURE
        (SIZEOF_StgMVarTSOQueue,
         unlockClosure(mvar, stg_MVAR_DIRTY_info);
         GC_PRIM_P(stg_takeMVarzh, mvar));

    q = Hp - SIZEOF_StgMVarTSOQueue + WDS(1);

    SET_HDR(q, stg_MVAR_TSO_QUEUE_info, CCS_SYSTEM);
    StgMVarTSOQueue_link(q) = END_TSO_QUEUE;
    StgMVarTSOQueue_tso(q)  = CurrentTSO;

    if (StgMVar_head(mvar) == stg_END_TSO_QUEUE_closure) {
        StgMVar_head(mvar) = q;
    } else {
        StgMVarTSOQueue_link(StgMVar_tail(mvar)) = q;
        ccall recordClosureMutated(MyCapability() "ptr",
                                         StgMVar_tail(mvar));
    }
    StgTSO__link(CurrentTSO)       = q;
    StgTSO_block_info(CurrentTSO)  = mvar;
    StgTSO_why_blocked(CurrentTSO) = BlockedOnMVar::I16;
    StgMVar_tail(mvar)             = q;

    jump stg_block_takemvar(mvar);
}

A useful thing to know when decoding C-- primop code is that StgTSO_block_info(...) and its kin are how we spell field access on objects. C-- doesn’t know anything about C struct layout, and so these “functions” are actually macros generated by utils/deriveConstants. Blocking a thread consists of three steps:

  1. We have to add the thread to the blocked queue attached to the MVar (that’s why blocking on an MVar mutates the MVar!) This involves performing a heap allocation for the linked list node as well as mutating the tail of the old linked list.
  2. We have to mark the thread as blocked (the StgTSO modifications).
  3. We need to setup a stack frame for the thread so that when the thread wakes up, it performs the correct action (the invocation to stg_block_takemvar). This invocation is also responsible for unlocking the closure. While the machinery here is pretty intricate, it’s not really in scope for this blog post.

If the MVar is full, then we can go ahead and take the value from the MVar.

/* we got the value... */
val = StgMVar_value(mvar);

But that’s not all. If there are other blocked putMVars on the MVar (remember, when a thread attempts to put an MVar that is already full, it blocks until the MVar empties out), then we should immediately unblock one of these threads so that the MVar can always be left in a full state:

    q = StgMVar_head(mvar);
loop:
    if (q == stg_END_TSO_QUEUE_closure) {
        /* No further putMVars, MVar is now empty */
        StgMVar_value(mvar) = stg_END_TSO_QUEUE_closure;
        unlockClosure(mvar, stg_MVAR_DIRTY_info);
        return (val);
    }
    if (StgHeader_info(q) == stg_IND_info ||
        StgHeader_info(q) == stg_MSG_NULL_info) {
        q = StgInd_indirectee(q);
        goto loop;
    }

There is one interesting thing about the code that checks for blocked threads, and that is the check for indirectees (stg_IND_info). Under what circumstances would a queue object be stubbed out with an indirection? As it turns out, this occurs when we delete an item from the linked list. This is quite nice, because on a singly-linked list, we don’t have an easy way to delete items unless we also have a pointer to the previous item. With this scheme, we just overwrite the current item with an indirection, to be cleaned up at the next GC. (This, by the way, is why we can’t just chain up the TSOs directly, without the extra linked list nodes. [1])

When we find some other threads, we immediately run them, so that the MVar never becomes empty:

// There are putMVar(s) waiting... wake up the first thread on the queue

tso = StgMVarTSOQueue_tso(q);
StgMVar_head(mvar) = StgMVarTSOQueue_link(q);
if (StgMVar_head(mvar) == stg_END_TSO_QUEUE_closure) {
    StgMVar_tail(mvar) = stg_END_TSO_QUEUE_closure;
}

ASSERT(StgTSO_why_blocked(tso) == BlockedOnMVar::I16); // note: I16 means this is a 16-bit integer
ASSERT(StgTSO_block_info(tso) == mvar);

// actually perform the putMVar for the thread that we just woke up
W_ stack;
stack = StgTSO_stackobj(tso);
PerformPut(stack, StgMVar_value(mvar));

There is one detail here: PerformPut doesn’t actually run the thread, it just looks at the thread’s stack to figure out what it was going to put. Once the MVar is put, we then wake up the thread, so it can go on its merry way:

// indicate that the MVar operation has now completed.
StgTSO__link(tso) = stg_END_TSO_QUEUE_closure;

// no need to mark the TSO dirty, we have only written END_TSO_QUEUE.

ccall tryWakeupThread(MyCapability() "ptr", tso);

unlockClosure(mvar, stg_MVAR_DIRTY_info);
return (val);

To sum up, when you takeMVar, you pay the costs of:

  • one spinlock,
  • on the order of several dozen memory operations (write barriers, queue twiddling), and
  • when the MVar is empty, a (small) heap allocation and stack write.

Adam and I puzzled about this a bit, and then realized the reason why the number of cycles was so large: our numbers are for roundtrips, and even with such lightweight synchronization (and lack of syscalls), you still have to go through the scheduler when all is said and done, and that blows up the number of cycles.
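
For concreteness, here is the flavor of roundtrip being measured (my own minimal example, not Adam’s actual benchmark). Each exchange blocks, so each exchange pays a trip through the scheduler:

import Control.Concurrent
import Control.Monad

main :: IO ()
main = do
  ping <- newEmptyMVar
  pong <- newEmptyMVar
  -- A server thread that bounces every token straight back.
  _ <- forkIO . forever $ takeMVar ping >>= putMVar pong
  -- 100,000 roundtrips: each one blocks in both directions.
  replicateM_ 100000 $ do
    putMVar ping ()
    takeMVar pong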


[1] It wasn’t always this way, see:

commit f4692220c7cbdadaa633f50eb2b30b59edb30183
Author: Simon Marlow <marlowsd@gmail.com>
Date:   Thu Apr 1 09:16:05 2010 +0000

    Change the representation of the MVar blocked queue

    The list of threads blocked on an MVar is now represented as a list of
    separately allocated objects rather than being linked through the TSOs
    themselves.  This lets us remove a TSO from the list in O(1) time
    rather than O(n) time, by marking the list object.  Removing this
    linear component fixes some pathalogical performance cases where many
    threads were blocked on an MVar and became unreachable simultaneously
    (nofib/smp/threads007), or when sending an asynchronous exception to a
    TSO in a long list of thread blocked on an MVar.

    MVar performance has actually improved by a few percent as a result of
    this change, slightly to my surprise.

    This is the final cleanup in the sequence, which let me remove the old
    way of waking up threads (unblockOne(), MSG_WAKEUP) in favour of the
    new way (tryWakeupThread and MSG_TRY_WAKEUP, which is idempotent).  It
    is now the case that only the Capability that owns a TSO may modify
    its state (well, almost), and this simplifies various things.  More of
    the RTS is based on message-passing between Capabilities now.

HotOS “Unconference” report:
Verifying Systems

Ariel Rabkin has some code he'd like to verify, and at this year’s HotOS he appealed to participants of one “unconference” (informal breakout sessions to discuss various topics) to help him figure out what was really going on as far as formal verification went.

He had three questions: "What can we verify? What is impossible to verify? How can we tell that the verification is correct?" They seemed like impossibly large questions, so we drilled in a little bit, and found that Ariel had the very understandable question, "Say I have some C++ code implementing a subtle network protocol, and I'd like to prove that the protocol doesn't deadlock; how do I do that?"

I wish the formal verification community had a good answer to a question like this, but unfortunately we don’t. The largest verification projects include things like verified kernels, which are written in fully specified subsets of C; these projects assume the translation performed by the compiler is correct, formalize the C code in a theorem prover, and then carry out the verification there. This is the “principled approach”. It’s just not feasible to take C or C++ in its entirety and try to formalize it; it’s too complicated and too ill-specified. The easiest thing to do is formalize a small fragment of your algorithm and then make a hand-wavy argument that your implementation is adequate.

Martin Abadi remarked that before you embark on a verification project, you have to figure out where you’ll get the most bang for your buck. Most of the time, a formalization won’t get you “full correctness”; the “electrons may be faulty”, as the case may be. But even a flawed verification forces you to state your assumptions explicitly, which is a good thing.

We then circled around to the subject of, well, what can be verified. Until the 90s, the formal verification community limited itself to only complete and sound analyses—and failed. The relaxation of this restriction led to a renaissance of formal verification work. We talked about who was using formal verification, and the usual suspects showed up: safety-critical software, cache coherence protocols (though one participant remarked that this was only a flash in the pan as far as applications go—he asserted that these companies are likely not going to use these methods in the future), etc. Safety-critical software is also likely to use coprocessors (since hardware failure is a very real issue), but Gernot Heiser noted that these folks are trying to get away from physical separation: it is costly in terms of money, weight, and energy. Luckily, the costs of verification, as he recounted, are within a factor of two of normal industrial assurance, and half the cost of military assurance (though he cautioned that this was for a specific project, and for a specific size of code). He also remarked that, as far as changes to code requiring changes to the proofs, the changes in the proofs seemed to be linear in the complexity (conceptual or implementation-wise) of the change, which is a good sign!

Well, supposing that you decide that you actually want to verify your software, how do you go about doing it? Unfortunately, it takes a completely different set of skills to build verified software versus normal software. Everyone agreed, “Yes, you need to hire a formal methods guy” if you’re going to make any progress. But that’s just not enough. The formal methods guy has to talk to the systems guy. Heiser recounted a very good experience hiring a formal methods person who was able to communicate with the other systems researchers working on the project; without this line of communication, he said, the project likely would have failed. And he mentioned another project, which had three times as much funding, but didn’t accomplish nearly as much as their team had. (Names not mentioned to protect the guilty.)

In the end, it seemed that we didn’t manage to give Ariel a fully satisfactory answer. As one participant said, “You’ll probably learn the most by just sitting down and trying to formalize the thing you are interested in.” This is probably true, though I fear most will be scared off by the realization of how much work it actually takes to formalize software.


Hey guys, I’m liveblogging HotOS at my research Tumblr. The posts there are likely to be more fragmented than this, but if people are interested in any particular topics I can inflate them into full posts.

Category theory for loop optimizations

Christopher de Sa and I have been working on a category theoretic approach to optimizing MapReduce-like pipelines. Actually, we didn’t start with any category theory—we were originally trying to impose some structure on some of the existing loop optimizations that the Delite compiler performed, and along the way, we rediscovered the rich relationship between category theory and loop optimization.

On the one hand, I think the approach is pretty cool; but on the other hand, there’s a lot of prior work in the area, and it’s tough to figure out where one stands on the research landscape. As John Mitchell remarked to me when I was discussing the idea with him, “Loop optimization, can’t you just solve it using a table lookup?” We draw a lot of inspiration from existing work, especially the program calculation literature pioneered by Bird, Meertens, Malcolm, Meijer and others in the early 90s. The purpose of this blog post is to air out some of the ideas we’ve worked out and get some feedback from you, gentle reader.

There are a few ways to think about what we are trying to do:

  • We would like to implement a calculational optimizer, targeting a real project (Delite) where the application of loop optimizations can have drastic impacts on the performance of a task (other systems which have had similar goals include Yicho, HYLO).
  • We want to venture where theorists do not normally tread. For example, there are many “boring” functors (e.g. arrays) which have important performance properties. While they may be isomorphic to an appropriately defined algebraic data type, we argue that in a calculational optimizer, we want to distinguish between these different representations. Similarly, many functions which are not natural transformations per se can be made into natural transformations by way of partial application. For example, filter p xs is a natural transformation when map p xs is incorporated as part of the definition of the function (the resulting function can be applied on any list, not just the original xs). The resulting natural transformation is ugly but useful. (A small Haskell sketch of this trick follows this list.)
  • For stock optimizers (e.g. Haskell), some calculational optimizations can be supported by the use of rewrite rules. While rewrite rules are a very powerful mechanism, they can only describe “always on” optimizations; e.g. for deforestation, one always wants to eliminate as many intermediate data structures as possible. In many of the applications we want to optimize, the best performance can only be achieved by adding intermediate data structures: now we have a space of possible programs and rewrite rules are woefully inadequate for specifying which program is the best. What we’d like to do is use category theory to give an account for rewrite rules with structure, and use domain specific knowledge to pick the best programs.
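
Here is the promised sketch of the partial-application trick (my own Haskell rendering, not from the post): once the boolean mask is fixed, the residual function never inspects the list elements, so it commutes with map.

-- For any fixed mask m, filterMask m never looks at the elements
-- themselves, so it is a natural transformation from lists to lists.
filterMask :: [Bool] -> [x] -> [x]
filterMask m xs = [ x | (keep, x) <- zip m xs, keep ]

-- Naturality: map f (filterMask m xs) == filterMask m (map f xs).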

I’d like to illustrate some of these ideas by way of an example. Here is some sample code, written in Delite, which calculates an iteration of (1-dimensional) k-means clustering:

(0 :: numClusters, *) { j =>
  val weightedPoints = sumRowsIf(0,m){i => c(i) == j}{i => x(i)};
  val points = c.count(_ == j);
  val d = if (points == 0) 1 else points
  weightedPoints / d
}

You can read it as follows: we are computing a result array containing the position of each cluster, with the outermost block looping over the clusters by index variable j. To compute the position of a cluster, we have to get all of the points in x which were assigned to cluster j (that’s the c(i) == j condition) and sum them together, finally dividing the sum by the number of points in the cluster to get the true location.

The big problem with this code is that it iterates over the entire dataset numClusters times, when we’d like to only ever do one iteration. The optimized version which does just that looks like this:

val allWP = hashreduce(0,m)(i => c(i), i => x(i), _ + _)
val allP = hashreduce(0,m)(i => c(i), i => 1, _ + _)
(0::numClusters, *) { j =>
    val weightedPoints = allWP(j);
    val points = allP(j);
    val d = if (points == 0) 1 else points
    weightedPoints / d
}

That is to say, we have to precompute the weighted points and the point count (note the two hashreduces can and should be fused together) before generating the new coordinates for each of the clusters: generating more intermediate data structures is a win, in this case.
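
To fix intuitions, here is one plausible reading of hashreduce in sequential Haskell (my own sketch; the real Delite operator works over parallel collections and fixed-size arrays): map each index to a key and a value, then combine values that collide on the same key.

import qualified Data.Map as Map

hashreduce :: Ord k => [i] -> (i -> k) -> (i -> v) -> (v -> v -> v) -> Map.Map k v
hashreduce ixs key val combine =
  Map.fromListWith combine [ (key i, val i) | i <- ixs ]

Under this reading, allWP above is roughly hashreduce [0 .. m - 1] c x (+), assuming c and x are functions from index to cluster assignment and position.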

Let us now calculate our way to the optimized version of the program. First, however, we have to define some functors:

  • D_i[X] is an array of X of a size specified by i (concretely, we’ll use D_i for arrays of size numPoints and D_j for arrays of size numClusters). This family of functors is also known as the diagonal functor, generalized for arbitrary size products. We will also rely on the fact that D is representable, that is to say, D_i[X] = Loc_D_i -> X for some type Loc_D_i (in this case, it is the index set {0 .. i}).
  • List[X] is a standard list of X. It is the initial algebra for the functor F[R] = 1 + X * R. Any D_i can be embedded in List; we will do such conversions implicitly (note that the reverse is not true).

There are a number of functions, which we will describe below:

  • tabulate witnesses one direction of the isomorphism between Loc_D_i -> X and D_i[X], since D_i is representable. The other direction is index, which takes D_i[X] and a Loc_D_i and returns an X.
  • fold is the unique function determined by the initial algebra on List. Additionally, suppose that we have a function * which combines two algebras by taking their cartesian product.
  • bucket is a natural transformation which takes a D_i[X] and buckets it into D_j[List[X]] based on some function which assigns elements in D_i to slots in D_j. This is an example of a natural transformation that is not a natural transformation until it is partially applied: if we compute D_i[Loc_D_j], then we can create a natural transformation that doesn’t ever look at X; it simply “knows” where each slot of D_i needs to go in the resulting structure. (Rough Haskell renderings of tabulate, index, and bucket follow this list.)
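
As promised, here are rough Haskell renderings of the three combinators (my own approximations: I model D_i as a Map keyed by its index set, pass the index set to tabulate explicitly, and gloss over the size-indexing):

import qualified Data.Map as Map

type D i x = Map.Map i x  -- approximates D_i[X], with Loc_D_i = i

-- One direction of the representability isomorphism...
tabulate :: Ord i => [i] -> (i -> x) -> D i x
tabulate locs f = Map.fromList [ (i, f i) | i <- locs ]

-- ...and the other direction.
index :: Ord i => i -> D i x -> x
index i d = d Map.! i

-- Given an assignment c : D_i[Loc_D_j], bucket c takes D_i[X] to
-- D_j[List[X]]; note that it never inspects the elements of type x.
bucket :: (Ord i, Ord j) => D i j -> D i x -> D j [x]
bucket c d = Map.fromListWith (++) [ (c Map.! i, [x]) | (i, x) <- Map.toList d ]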

Let us now rewrite the loop in more functional terms:

tabulate (\j ->
  let weightedPoints = fold plus . filter (\i -> c[i] == j) $ x
      points = fold inc . filter (\i -> c[i] == j) $ x
  in divide (weightedPoints, points)
)

(Where divide is just a function which divides its arguments but checks that the divisor is not zero.) Eliminating some common sub-expressions and fusing the two folds together, we get:

tabulate (\j -> divide . fold (plus * inc) . filter (\i -> c[i] == j) $ x)

At this point, it is still not at all clear that there are any rewrites we can carry out: the filter is causing problems for us. However, because filter is testing on equality, we can rewrite it in a different way:

tabulate (\j -> divide . fold (plus * inc) . index j . bucket c $ x)

What is happening here? Rather than directly filtering for just items in cluster j, we can instead view this as bucketing x on c and then indexing out the single bucket we care about. This shift in perspective is key to the whole optimization.

Now we can apply the fundamental rule of natural transformations: for phi : F -> G, we have fmap f . phi = phi . fmap f (where the first fmap is G’s and the second is F’s). Let phi = index j (a natural transformation from D_j to the identity functor) and f = divide . fold (plus * inc); then f . index j = index j . fmap f, so we can push f to the other side of phi:

tabulate (\j -> index j . fmap (divide . fold (plus * inc)) . bucket c $ x)

Now we can eliminate tabulate and index:

fmap (divide . fold (plus * inc)) . bucket c $ x

Finally, because we know how to efficiently implement fmap (fold f) . bucket c (as a hashreduce), we split up the fmap and join the fold and bucket:

fmap divide . hashreduce (plus * inc) c $ x

And we have achieved our fully optimized program.

All of this is research in progress, and there are lots of open questions which we have not resolved. Still, I hope this post has given you a flavor of the approach we are advocating. I am quite curious about your comments, from “That’s cool!” to “This was all done 20 years ago by X system.” Have at it!

The Difference between Recursion & Induction

Recursion and induction are closely related. When you were first taught recursion in an introductory computer science class, you were probably told to use induction to prove that your recursive algorithm was correct. (For the purposes of this post, let us exclude hairy recursive functions like the one in the Collatz conjecture which do not obviously terminate.) Induction suspiciously resembles recursion: the similarity comes from the fact that the inductive hypothesis looks a bit like the result of a “recursive call” to the theorem you are proving. If an ordinary recursive computation returns plain old values, you might wonder if an “induction computation” returns proof terms (which, by the Curry-Howard correspondence, can be thought of as values).

As it turns out, however, when you look at recursion and induction categorically, they are not equivalent! Intuitively, the difference lies in the fact that when you are performing induction, the data type you are performing induction over (e.g. the numbers) appears at the type level, not the term level. In the words of a category theorist, both recursion and induction have associated initial algebras, but the carrier sets and endofunctors are different. In this blog post, I hope to elucidate precisely what the difference between recursion and induction is. Unfortunately, I need to assume some familiarity with initial algebras: if you don’t know what the relationship between a fold and an initial algebra is, check out this derivation of lists in initial algebra form.

When dealing with generalized abstract nonsense, the most important first step is to use a concrete example! So let us go with the simplest nontrivial data type one can gin up: the natural numbers (our examples are written in both Coq and Haskell, when possible):

Inductive nat : Set := (* defined in standard library *)
  | O : nat
  | S : nat -> nat.

data Nat = Z | S Nat

Natural numbers are a pretty good example: even the Wikipedia article on F-algebras uses them. To recap, an F-algebra (or sometimes simply “algebra”) has three components: an (endo)functor f, a type a, and a reduction function of type f a -> a. For simple recursion over natural numbers, we need to define a functor NatF which “generates” the natural numbers; then our type a is Nat and the reduction function has type NatF Nat -> Nat. The functor is defined as follows:

Inductive NatF (x : Set) : Set :=
  | F0 : NatF x
  | FS : x -> NatF x.

data NatF x = FZ | FS x

Essentially, take the original definition but replace any recursive occurrence of the type with a polymorphic variable. As an exercise, show that NatF Nat -> Nat exists: it is the (co)product of () -> Nat and Nat -> Nat. The initiality of this algebra implies that any function of type NatF x -> x (for some arbitrary type x) can be used in a fold Nat -> x: this fold is the homomorphism from the initial algebra (NatF Nat -> Nat) to another algebra (NatF x -> x). The take-away point is that the initial algebra of natural numbers consists of an endofunctor over sets.
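
Concretely, the fold induced by initiality can be written out in Haskell (a standard construction; toInt is my own example):

-- The unique homomorphism out of the initial algebra: any algebra
-- alg :: NatF x -> x induces a fold from Nat to x.
cata :: (NatF x -> x) -> Nat -> x
cata alg Z     = alg FZ
cata alg (S n) = alg (FS (cata alg n))

-- Example: the algebra sending FZ to 0 and FS m to m + 1 folds a Nat
-- into the corresponding Int.
toInt :: Nat -> Int
toInt = cata (\n -> case n of FZ -> 0; FS m -> m + 1)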

[Figure: /img/nat-recursion.png — recursion as an initial algebra over sets]

Let’s look at the F-algebra for induction now. As a first try, let’s try to use the same F-algebra and see if an appropriate homomorphism exists with the “type of induction”. (We can’t write this in Haskell, so now the examples will be Coq only.) Suppose we are trying to prove some proposition P : nat -> Prop holds for all natural numbers; then the type of the final proof term must be forall n : nat, P n. We can now write out the morphism of the algebra: NatF (forall n : nat, P n) -> forall n : nat, P n. But this “inductive principle” is both nonsense and not true:

Hint Constructors nat NatF.
Goal ~ (forall (P : nat -> Prop), (NatF (forall n : nat, P n) -> forall n : nat, P n)).
  intro H; specialize (H (fun n => False)); auto.
Qed.

(Side note: you might say that this proof fails because I’ve provided a predicate which is false over all natural numbers. But induction still “works” even when the predicate you’re trying to prove is false: you should fail when trying to provide the base case or inductive hypothesis!)

We step back and now wonder, “So, what’s the right algebra?” It should be pretty clear that our endofunctor is wrong. Fortunately, we can get a clue for what the right endofunctor might be by inspecting the type of the induction principle for natural numbers:

(* Check nat_ind. *)
nat_ind : forall P : nat -> Prop,
  P 0 -> (forall n : nat, P n -> P (S n)) -> forall n : nat, P n

P 0 is the type of the base case, and forall n : nat, P n -> P (S n) is the type of the inductive case. In much the same way that we defined NatF nat -> nat for natural numbers, which was the combination of zero : unit -> nat and succ : nat -> nat, we need to define a single function which combines the base case and the inductive case. This seems tough: the result types are not the same. But dependent types come to the rescue: the type we are looking for is:

fun (P : nat -> Prop) => forall n : nat, match n with 0 => True | S n' => P n' end -> P n

You can read this type as follows: I will give you a proof object of type P n for any n. If n is 0, I will give you this proof object with no further help (True -> P 0). However, if n is S n', I will require you to furnish me with P n' (P n' -> P (S n')).

We’re getting close. If this is the morphism of an initial algebra, then the functor IndF must be:

fun (P : nat -> Prop) => forall n : nat, match n with 0 => True | S n' => P n' end

What category is this a functor over? Unfortunately, neither this post nor my brain has the space to give a rigorous treatment, but roughly the category can be thought of as nat-indexed propositions. Objects of this category are of the form forall n : nat, P n, morphisms of the category are of the form forall n : nat, P n -> P' n. [1] As an exercise, show that identity and composition exist and obey the appropriate laws.

Something amazing is about to happen. We have defined our functor, and we are now in search of the initial algebra. As was the case for natural numbers, the initial algebra is defined by the least fixed point over the functor:

Fixpoint P (n : nat) : Prop :=
  match n with 0 => True | S n' => P n' end.

But this is just True!

Hint Unfold P.
Goal forall n, P n = True.
  induction n; auto.
Qed.

Drawing out our diagram:

[Figure: /img/nat-ind.png — induction as an initial algebra over nat-indexed propositions]

The algebras of our category (downward arrows) correspond to inductive arguments. Because our morphisms take the form of forall n, P n -> P' n, one cannot trivially conclude forall n, P' n simply given forall n, P n; however, the presence of the initial algebra means that True -> forall n, P n whenever we have an algebra forall n, IndF P n -> P n. Stunning! (As a side note, Lambek’s lemma states that Mu P is isomorphic to P (Mu P), so the initial algebra is in fact really, really trivial.)

In conclusion:

  • Recursion over the natural numbers involves F-algebras with the functor unit + X over the category of Sets. The least fixed point of this functor is the natural numbers, and the morphism induced by the initial algebra corresponds to a fold.
  • Induction over the natural numbers involves F-algebras with the functor fun (P : nat -> Prop) => forall n : nat, match n with 0 => True | S n' => P n' end over the category of nat-indexed propositions. The least fixed point of this functor is True, and the morphism induced by the initial algebra establishes the truth of the proposition being inductively proven.

So, the next time someone asks you what the difference between induction and recursion is, tell them: induction is just the unique homomorphism induced by an initial algebra over indexed propositions, what’s the problem?


Acknowledgements go to Conor McBride, who explained this shindig to me over ICFP. I promised to blog about it, but forgot, and ended up having to rederive it all over again.

[1] Another plausible formulation of morphisms goes (forall n : nat, P n) -> (forall n : nat, P' n). However, morphisms in this category are too strong: they require you to go and prove the result for all n... which you would do with induction, which misses the point. Plus, this category is a subcategory of the ordinary category of propositions.

Kindle is not good for textbooks

Having attempted to read a few textbooks on my Kindle, I have solemnly concluded that the Kindle is in fact a terrible device for reading textbooks. The fundamental problem is that, due to technological limitations, the Kindle is optimized for sequential reading. This can be seen in many aspects:

  • Flipping a page in the Kindle is not instantaneous (I don’t have a good setup to time how long the screen refresh takes, but there is definitely a perceptible lag between when you swipe and when the Kindle successfully redraws the screen—and it’s even worse if you try to flip backwards).
  • Rapidly flipping through pages in order to scan for a visual feature compounds the delay problem.
  • There is no way to take the “finger” approach to random access (i.e. wedge your finger between two pages to rapidly switch between them); jumping between bookmarks requires four presses with the current Kindle interface!
  • The screen size of the Kindle is dramatically smaller than that of an average textbook, which reduces the amount of information content that can be placed on one screen and further exacerbates slow page turns.

A textbook cannot be read as a light novel. So, while the Kindle offers the tantalizing possibility of carrying a stack of textbooks with you everywhere, in fact, you’re better off getting the actual dead tree version if you’re planning on doing some serious studying from it. That is not to say textbook ebooks are not useful; in fact, having a searchable textbook on your laptop is seriously awesome—but this is when you’re using the textbook as a reference material, and not when you’re trying to actually learn the material.

A Zerocoin puzzle

I very rarely post linkspam, but given that I’ve written on the subject of anonymizing Bitcoins in the past, this link seems relevant: Zerocoin: making Bitcoin anonymous. Their essential innovation is to have a continuously operating mixing pool built into the block chain itself; they pull this off using zero-knowledge proofs. Nifty!

Here is a puzzle for the readers of this blog. Suppose that I am a user who wants to anonymize some Bitcoins, and I am willing to wait expected time N before redeeming my Zerocoins. What is the correct probability distribution for me to pick my wait time from? Furthermore, suppose a population of Zerocoin participants, all of whom are using this probability distribution, and suppose that each participant has some utility function trading off anonymity and expected wait time (feel free to make assumptions that make the analysis easy). Is this population in Nash equilibrium?

A classical logic fairy tale

(Selinger) Here is a fairy tale: The evil king calls the poor shepherd and gives him these orders. “You must bring me the philosopher’s stone, or you have to find a way to turn the philosopher’s stone into gold. If you don’t, your head will be taken off tomorrow!” What can the poor shepherd do to save his life?

Hat tip to Chris for originally telling me a different variant of this story. Unfortunately, this quote from Lectures on the Curry-Howard Isomorphism was the only reference I could find. What should the shepherd do? Is there something a little odd about this story?

NDSEG

Humbly presented for your consideration: Exhibit A, an NDSEG essay that did not get accepted; Exhibit B, an NDSEG essay that did get accepted. It’s pretty cool what making a statement more focused can do. (See also Philip Guo’s page on the topic.)

Resource limits for Haskell

Last week, I made my very first submission to ICFP! The topic? An old flame of mine: how to bound space usage of Haskell programs.

We describe the first iteration of a resource limits system for Haskell, taking advantage of the key observation that resource limits share semantics and implementation strategy with profiling. We pay special attention to the problem of limiting resident memory usage: we describe a simple implementation technique for carrying out incremental heap censuses and describe a novel information-flow control solution for handling forcible resource reclamation. This system is implemented as a set of patches to GHC.

You can get a copy of the submission here. I've reproduced below the background section on how profiling Haskell works; if this tickles your fancy, check out the rest of the paper!


Profiling in Haskell is performed by charging the costs of computation to the “current cost center.” A cost center is an abstract, programmer-specified entity to which costs can be charged; only one is active per thread at any given time, and the cost semantics determines how the current cost center changes as the program executes. For example, the scc cc e expression (set-cost-center) modifies the current cost center during evaluation of e to be cc. Cost centers are defined statically at compile time.

A cost semantics for Haskell was defined by Sansom et al. (1995). Previously, there had not been a formal account of how to attribute costs in the presence of lazy evaluation and higher-order functions; this paper resolved these questions. The two insights of their paper were the following: first, they articulated that cost attribution should be independent of evaluation order. For the sake of understandability, whether a thunk is evaluated immediately or later should not affect who is charged for it. Second, they observed that there are two ways of attributing costs for functions, in direct parallel to the difference between lexical scoping and dynamic scoping.

The principle of order-independent cost attribution can be seen in this program:

f x = scc "f" (Just (x * x))
g x = let Just y = f x in scc "g" y

When g 4 is invoked, who is charged the cost of evaluating x * x? With strict evaluation, it is easy to see that f should be charged, since x * x is evaluated immediately inside the scc expression. Order-independence dictates, then, that even if the execution of x * x is deferred to the inside of scc "g" y, the cost should still be attributed to f. In general, scc "f" x on a variable x is a no-op. In order to implement such a scheme, the current cost-center at the time of the allocation of the thunk must be recorded with the thunk and restored when the thunk is forced.

The difference between lexical scoping and dynamic scoping for function cost attribution can be seen in this example:

f = scc "f" (\x -> x * x)
g = \x -> scc "g" (x * x)

What is the difference between these two functions? We are in a situation analogous to the choice for thunks: should the current cost-center be saved along with the closure, and restored upon invocation of the function? If the answer is yes, we are using lexical scoping and the functions are equivalent; if the answer is no, we are using dynamic scoping and the scc in f is a no-op. The choice GHC has currently adopted for scc is dynamic scoping.
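
For reference, in actual GHC source code the scc construct from the cost semantics is written as an SCC pragma; here is a minimal transcription of the two functions above into that concrete syntax (my own example, not from the paper):

-- Compile with: ghc -prof -fno-prof-auto Scc.hs
module Scc where

f :: Int -> Int
f = {-# SCC "f" #-} \x -> x * x

g :: Int -> Int
g = \x -> {-# SCC "g" #-} (x * x)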