ezyang’s blog

the arc of software bends towards understanding

Nested Data Parallelism versus Creative Catamorphisms

I got to watch (unfortunately not in person) Simon Peyton Jones' excellent talk (no really, if you haven't seen it, you should carve out the hour necessary to watch it) on Data Parallel Haskell (slides). The talk got me thinking about another talk on parallelism, given by Guy Steele, that I had seen recently.

What's the relationship between these two talks? At first I thought, "Man, Guy Steele must be advocating a discipline for programmers, while Simon Peyton Jones is advocating a discipline for compilers." But this didn't really seem to fit right: even if you have a clever catamorphism for the problem, the overhead of fully parallelizing everything can be prohibitive. As Steele notes, we need "hybrid sequential/parallel strategies," the simplest of which is "parallelize it until it's manageable and run the fast sequential algorithm on it," à la flat data parallelism. Nor is nested data parallelism a silver bullet; while it has wider applicability, there are still domains it fits poorly.
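To make that hybrid strategy concrete, here is a minimal sketch of hand-rolled flat data parallelism using the ordinary parallel and split packages (not Data Parallel Haskell); chunkedSum, the chunk size parameter, and the use of sum as the sequential reduction are my own illustrative choices, not anything from either talk:

    import Control.Parallel.Strategies (parMap, rdeepseq)
    import Data.List.Split (chunksOf)  -- from the "split" package

    -- Flat data parallelism by hand: split the input into evenly sized
    -- chunks, run the fast sequential fold on each chunk in parallel,
    -- then combine the per-chunk results sequentially. This works
    -- because (+) is associative -- the same property a catamorphic
    -- reduction relies on.
    chunkedSum :: Int -> [Double] -> Double
    chunkedSum chunkSize xs =
      sum (parMap rdeepseq sum (chunksOf chunkSize xs))

Note that all the bookkeeping here, picking a chunk size and splitting the structure evenly, falls on the programmer; that drudgery is exactly what nested data parallelism promises to automate.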

I believe that Nested Data Parallelism will be a powerful and practical tool (well, at least once the Data Parallel Haskell team works out the kinks) in the quest for efficiently implementing catamorphic programs. In particular, it takes the huge win of chunking that characterized flat data parallel programs and combines it with the powerful abstraction of nested parallel data, promising to eliminate the drudgery of splitting a parallel data structure into even chunks to hand off to separate processors. It does not resolve issues such as what to do when the input data doesn't arrive in a parallel structure (you might notice that Data Parallel Haskell is primarily useful on numeric types: doubles, integers and words), and it still relies on the existence of a convenient reductive function for the parallel structure you've chosen.
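For flavor, here is (approximately) the sparse matrix-vector multiplication example that shows up in the Data Parallel Haskell material; the [: :] parallel-array syntax, sumP, and (!:) come from the dph libraries, though the exact module names and API have shifted between releases, so treat this as a sketch rather than compilable code:

    type SparseVector = [: (Int, Double) :]  -- (index, value) pairs for the nonzeros
    type SparseMatrix = [: SparseVector :]   -- rows of differing lengths: nested!
    type Vector       = [: Double :]

    -- Dot product of a sparse vector with a dense vector.
    dotp :: SparseVector -> Vector -> Double
    dotp sv v = sumP [: x * (v !: i) | (i, x) <- sv :]

    -- Every row is reduced in parallel, and each row's reduction is
    -- itself a parallel computation. The compiler's vectoriser flattens
    -- this nesting into the evenly chunked, flat operations the machine
    -- actually runs.
    smvm :: SparseMatrix -> Vector -> Vector
    smvm m v = [: dotp row v | row <- m :]

The rows have wildly different lengths, so no static chunking of the rows would balance the work; flattening the nested structure and rechunking it is precisely the win described above.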