The imperative. When should you create a custom data type, as opposed to reusing pre-existing data types such as Either, Maybe or tuples? Here are some reasons you should reuse a generic type:
- It saves typing (both in declaration and in pattern matching), making it good for one-off affairs,
- It gives you a library of predefined functions that work with that type,
- Other developers have expectations about what the type does that make understanding quicker.
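The trade-off in that list can be sketched in a few lines of Haskell (the names `lookupPort` and `PortConfig` are my own illustrations, not from any library):

```haskell
import Data.Maybe (fromMaybe)

-- Reusing a generic type: lookup and fromMaybe come for free,
-- and every Haskeller already knows what Maybe means.
lookupPort :: [(String, Int)] -> Maybe Int
lookupPort = lookup "port"

-- A custom type: more typing up front, but the constructors
-- document the intent of each case.
data PortConfig = UseDefault | ExplicitPort Int

portNumber :: PortConfig -> Int
portNumber UseDefault       = 8080
portNumber (ExplicitPort p) = p
```

Here `fromMaybe 8080 (lookupPort cfg)` gets its default-handling from the standard library, while `portNumber` spells the same logic out by hand in exchange for self-documenting constructors.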
On the other side of the coin:
Read more...
The essence of dataflow optimization is analysis and transformation, and it should come as no surprise that once you’ve defined your intermediate representation, the majority of your work with Hoopl will involve defining analyses and transformations on your graph of basic blocks. Analysis itself can be further divided into the specification of the dataflow facts we are computing and the rules by which we derive those facts during analysis. In part 2 of this series on Hoopl, we look at the fundamental structure backing analysis: the dataflow lattice. We discuss the theoretical reasons behind using a lattice and give examples of lattices you may define for optimizations such as constant propagation and liveness analysis.
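As a taste of where the series is headed, here is a simplified constant-propagation lattice for a single variable, written from scratch rather than against Hoopl's actual `DataflowLattice` record (the names `ConstFact` and `join` here are my own):

```haskell
-- A toy constant-propagation lattice:
-- Bottom (no information yet) <= Const k <= Top (known non-constant).
data ConstFact = Bottom | Const Int | Top
  deriving (Eq, Show)

-- The join computes a least upper bound: facts only ever move up
-- the lattice, which is what guarantees the analysis terminates.
join :: ConstFact -> ConstFact -> ConstFact
join Bottom f = f
join f Bottom = f
join (Const a) (Const b)
  | a == b    = Const a   -- both incoming paths agree: still a constant
  | otherwise = Top       -- disagreement: not a compile-time constant
join _ _ = Top
```

At a control-flow join where both predecessors established `x = 3`, `join (Const 3) (Const 3)` yields `Const 3`; if one path set `x = 4` instead, the result is `Top` and the variable cannot be folded.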
Read more...
Hoopl is a higher-order optimization library. We think it’s pretty cool! This series of blog posts is meant to give a tutorial-style introduction to the library, supplementing the papers and the source code. I hope this series will also have something for people who aren’t interested in writing optimization passes with Hoopl, but are interested in the design of higher-order APIs in Haskell. By the end of this tutorial, you will be able to understand references in code to names such as analyzeAndRewriteFwd and DataflowLattice, and to decode type signatures such as:
Read more...
From cvs-ghc@haskell.org:
Hi all,
We now plan to do the git switchover this Thursday, 31 March.
Thanks
Ian
There are some things that I will miss from Darcs (darcs send and the cases where “everything is a patch” actually does work well), but all in all I’m quite pleased to see GHC moving to Git.
The release cycle gets longer and longer… probably to the delight of all those downstream, anyway.
HTML Purifier 4.3.0 is a major security release addressing various security vulnerabilities related to user-submitted code and legitimate client-side scripts. It also contains an accumulation of new features and bugfixes over half a year. New configuration options include %CSS.Trusted, %CSS.AllowedFonts and %Cache.SerializerPermissions. There is a backwards-incompatible API change for customized raw definitions, see the customization documentation for details.
Read more...
Yesterday I cycled from Cambridge to Ely, and back again. The route is a glorious 38 miles (round trip) of British towns and countryside. The cycling around Cambridge is quite good, because there aren’t very many hills, and in the farmland areas you get to see the tractors rolling by. The longest I’d ever cycled before was the Springwater Corridor in Portland, and the segment I did was only about 10 miles.
Read more...
Ghost in the state machine
A long time ago (circa 2007-2008), I wrote perhaps the single most complicated piece of code in HTML Purifier—one of those real monsters that you don’t think anyone else could ever understand and that you are really grateful you have a comprehensive test suite for. The idea was this: I had a state machine that modified a stream of tokens (since this was a stream of HTML tags and text, the state machine maintained information such as the current nesting stack), and I wanted to allow users to add extra functionality on top of this stream processor (the very first processor inserted paragraph tags when double-newlines were encountered) in a modular way.
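The real code is PHP, but the shape of that very first processor can be sketched in a few lines of Haskell (`Token`, `breakOn`, and `autoParagraph` are hypothetical names for this sketch, not the HTML Purifier API):

```haskell
import Data.List (isPrefixOf)

-- A toy HTML token stream in the spirit of the processor described above.
data Token = Start String | End String | Text String
  deriving (Eq, Show)

-- Split a string at the first occurrence of a separator, if any.
breakOn :: String -> String -> Maybe (String, String)
breakOn sep = go ""
  where
    go acc s
      | sep `isPrefixOf` s = Just (reverse acc, drop (length sep) s)
      | (c:cs) <- s        = go (c : acc) cs
      | otherwise          = Nothing

-- The first processor's job: close and reopen a paragraph whenever
-- a double newline appears in a text token.
autoParagraph :: [Token] -> [Token]
autoParagraph = concatMap step
  where
    step (Text s) = case breakOn "\n\n" s of
      Just (pre, post) -> [Text pre, End "p", Start "p"] ++ step (Text post)
      Nothing          -> [Text s]
    step t = [t]
```

The modularity problem described above amounts to letting users stack more functions like `autoParagraph` on the same stream, each with access to the shared nesting state.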
Read more...
I recently attended a talk which discussed extending proof assistants with diagrammatic reasoning support, helping to break the hegemony of symbolic systems in this field. While the work is certainly novel in some respects, I can’t help but think that we’ve come full circle to the Ancient Greeks, who were big fans of geometry, and its correspondingly visual form of reasoning. The thought came up again while I was reading a mathematics text and marveling at the multiple methods of presenting a single concept. In this essay, I’d like to look at this return to older, more “intuitive” forms of reasoning: I’ve called it “Hellenistic reasoning” because geometry and the Socratic method nicely sum up the visual and interactive reasoning that I’d like to discuss. I argue that this resurgence is a good thing, and that though these forms of reasoning may not be as powerful or general as symbolic reasoning, they will be critical to the application and communication of abstract mathematical results.
Read more...
Books are expensive, but by the power of higher education (also expensive, but differently so), vast oceans of books are available to an enterprising compsci. Here’s my reading list for the spring break lending period (many of which were recommended on #haskell):

- Concepts, Techniques, and Models of Computer Programming by Peter Van Roy and Seif Haridi. Wonderfully iconoclastic book, and probably one of the easier reads on the list.
- Types and Programming Languages by Benjamin Pierce. I’ve been working on this one for a while; this break I’m focusing on the proof strategies for preservation, progress and safety, and also using it to complement a self-study course summed up by the next book.
- Lectures on the Curry-Howard Isomorphism by M.H. Sørensen and P. Urzyczyn. Very good; I’ve skimmed the first three chapters and I’m working on the exercises in chapter 2. I’ve been prone to making silly mis-assertions about the Curry-Howard Isomorphism (or is it?), so I’m looking forward to more firmly grounding my understanding of this correspondence. The sections on intuitionistic logic have already been very enlightening.
- Type Theory and Functional Programming by Simon Thompson. Haven’t looked at it yet, but fits into the general course of the previous two books.
- Purely Functional Data Structures by Chris Okasaki. Also one I’ve been working on a while. Working on compressing all the information mentally.
- Basic Category Theory for Computer Scientists by Benjamin Pierce. I’ve got two items on category theory; I got this one on a whim. Haven’t looked at it yet.
- Pearls of Functional Algorithm Design by Richard Bird. Something like a collection of puzzles. I think I will enjoy reading through them and working out the subtleties. I probably won’t get to the information compression stage this time around.
- Category Theory by Steve Awodey. I was working on the exercises in this textbook, and think I might get past the first chapter.
Wherein I make fun of functional programming advocates.
In this essay, I’d like to discuss the ideologies of “imperative programming” and “functional programming” in terms of the language features they lean on: in particular, the mechanisms by which they allow developers to express themselves in less code. I propose that the set of features that make up imperative programming constitute a dominant programming monoculture that is partially incompatible with functional programming’s favored features, requiring functional programming advocates to do funny things to gain the attention of the programmers.
Read more...