June 4, 2010

Update. The video of the talk can be found here: Galois Tech Talks on Vimeo: Categories are Databases.
On Thursday Dr. David Spivak presented Categories are Databases as a Galois tech talk. His slides are here, and are dramatically more accessible than the paper Simplicial databases. Here is a short attempt to introduce this idea to people who only have a passing knowledge of category theory.
An essential exercise when designing relational databases is the practice of object modeling using labeled graphs of objects and relationships. Visually, this involves drawing a bunch of boxes representing the objects being modeled, and then drawing arrows between the objects showing relationships they may have. We can then use this object model as the basis for a relational database schema.
An example model from a software engineering class is below:

With the image of an object model in your head, consider Wikipedia’s definition of a category:
In mathematics, a category is an algebraic structure consisting of a collection of “objects”, linked together by a collection of “arrows” that have two basic properties: the ability to compose the arrows associatively and the existence of an identity arrow for each object.
The rest of the definition may seem terribly abstract, but hopefully the bolded section seems to clearly correspond to the picture of boxes (objects) and arrows we drew earlier. Perhaps…
Database schema = Category.
Unfortunately, a directed graph is not quite a category; the secret sauce that makes a category a category is those two properties on the arrows, associative composition and identity, and if we really want to strengthen our claim that a schema is a category, we’ll need to demonstrate these.
Recall that our arrows are “relations,” that is, “X occupies Y” or “X is the key for Y”. Our category must have an identity arrow, that is, some relation “X to X”. How about, “X is itself X”, an almost vacuous statement, but one most certainly true. Identity arrow, check.
We also need to show associative composition of arrows. Composition of two arrows is much like they showed you when they were teaching you vector algebra: you take the head of one arrow (X to Y) and smush it with the tail of another (Y to Z), and you get another arrow (X to Z). If a “book has an author” and “an author has a favorite color”, I can say “the book’s author has a favorite color”. This composed statement doesn’t care who the author was… just what his favorite color is. In fact,
Arrow composition = Joins
That is, one of the fundamental features of a category, a feature that any nice result from pure category theory uses as if it were intuitively obvious, is one of the very techniques that does not seem obvious to someone reading about JOINs in the second half of a database tutorial.
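To make the slogan concrete, here is a minimal Haskell sketch; the Book and Author types and their fields are hypothetical, not from Spivak’s talk. Each foreign-key relationship is a function, and composing the functions computes exactly what the JOIN would:

data Color  = Red | Blue deriving Show
data Author = Author { name :: String, favoriteColor :: Color }
data Book   = Book   { title :: String, author :: Author }

-- "book has an author" composed with "author has a favorite color":
bookColor :: Book -> Color
bookColor = favoriteColor . author   -- arrow composition, i.e. the join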
(Aside. A foreign key relationship is intrinsically many-to-one: a foreign key field can only point to one record in another table, but many rows can have that field pointing to the same record. When doing relational modeling, we will frequently use many-to-many or one-to-many relationships. Any database administrator knows, however, that we can simply rewrite these into many-to-one relationships, reversing the arrow in the case of one-to-many and introducing a new table for many-to-many, as sketched below.)
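For example, here is a hypothetical students-and-courses version of that rewrite: the many-to-many “student takes course” relationship becomes a new Enrollment table with two many-to-one arrows out of it. (All of the names here are illustrative.)

newtype StudentId = StudentId Int
newtype CourseId  = CourseId Int

-- Each Enrollment row points at exactly one student and one course,
-- so one many-to-many relationship becomes two many-to-one arrows.
data Enrollment = Enrollment
    { enrollmentStudent :: StudentId
    , enrollmentCourse  :: CourseId
    }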
When we have a schema, we also want data to fill the schema. As it turns out, this fits into a category-theoretic framework too, although a full explanation is out of scope for this post (I suggest consulting the slides).
Why do you care? There are some good reasons mentioned by Spivak:

I’ll mention one of my own: SQL, while messy, is precise; it can be fed into a computer and turned into a database that can do real work. On the other hand, relational models are high-level but kind of mushy; developers may complain that drawing diagrams with arrows doesn’t seem terribly rigorous and that the formalism doesn’t really help them much.
Category theory is precise; it unambiguously assigns meaning and structure to the relations, and the laws of composition define which relations are and aren’t permissible. Category theory is not only about arrows (if it were, it’d be pretty boring); rather, it has a rich body of results from many fields expressed in a common language that can be “translated” into database-speak. In many cases, important category theory notions are tricky techniques in database administrator folklore. When you talk about arrows, you’re talking about a lot more than arrows. That’s powerful!
June 2, 2010

Git is very careful about your files: unless you tell it to be explicitly destructive, it will refuse to write over files that it doesn’t know about, instead giving an error like:
Untracked working tree file ‘foobar’ would be overwritten by merge.
In my work with Wizard, I frequently need to perform merges on working copies that have been, well, “less than well maintained”; e.g. the user untarred a new version of the source tree over the old directory and forgot to add the newly created files. When Wizard goes in and tries to automatically upgrade them to the new version the proper way, this results in all sorts of untracked working tree file complaints, and then we have to go and manually check on the untracked files and remove them once they’re fine.
There is a simple workaround for this: while we don’t want to add all untracked files to the Git repository, we could add just the files that would be clobbered. Git will then stop complaining about the files, and we will still have records of them in the history:
def get_file_set(rev):
    return set(shell.eval("git", "ls-tree", "-r", "--name-only", rev).split("\n"))

old_files = get_file_set("HEAD")
new_files = get_file_set(self.version)
added_files = new_files - old_files
for f in added_files:
    if os.path.lexists(f):  # *
        shell.call("git", "add", f)
Previously, the starred line of code read if os.path.exists(f). Can you guess why this was buggy? Recall the difference between exists and lexists: if the file in question is a symlink, exists will follow it, while lexists will not. So, if a file to be clobbered was a broken symlink, the old version of the code would not have removed it. In many cases, you can’t distinguish between the two: if the parent directory of the file that the symlink points to exists, I can create a file via the symlink and perform other normal file operations.
However, Git is very keenly aware of the difference between a symlink and a file and will complain accordingly if it would have clobbered a symlink. Good ole information preservation!
Postscript. Yesterday was my first day of work at Galois! It was so exciting that I couldn’t get my wits together to write a blog post about it. More to come.
May 31, 2010

Attention conservation notice. What definitions from the Haskell 98 Prelude tend to get hidden? I informally go over the Prelude and mention some candidates.
(.) in the Prelude is function composition, that is, (b -> c) -> (a -> b) -> a -> c. But the denizens of #haskell know it can be much more than that: the function type (->) a is really just a functor, so a more general type is Functor f => (b -> c) -> f b -> f c, i.e. fmap. Even more generally, (.) can indicate morphism composition, as it does in Control.Category.
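A quick sketch of the correspondence, with hypothetical helpers (on older GHCs the Functor instance for (->) a lived in Control.Monad.Instances):

import Data.Char (toUpper)

shout, exclaim :: String -> String
shout   = map toUpper
exclaim = (++ "!")

s1, s2 :: String
s1 = (exclaim . shout) "hello"    -- "HELLO!"
s2 = fmap exclaim shout "hello"   -- the same thing, via the (->) String functor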
all, and, any, concat, concatMap, elem, foldl, foldl1, foldr, foldr1, mapM_, maximum, minimum, or, product, sequence_. These are all functions that operate on lists and easily generalize to the Foldable type class; just replace [a] with Foldable t => t a. They can be found in Data.Foldable.
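For instance (a minimal sketch), the Foldable version of all works on any container, not just lists:

import qualified Data.Foldable as F
import qualified Data.Set as Set

checks :: (Bool, Bool, Bool)
checks = ( F.all even [2, 4, 6 :: Int]           -- True, as with Prelude.all
         , F.all even (Set.fromList [2, 4, 6])   -- True, on a Set
         , F.all even (Just 3)                   -- False, on a Maybe
         )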
mapM, sequence. These functions generalize to the Traversable type class. They can be found in Data.Traversable.
Any numeric function or type class. Thurston, Thielemann and Johansson wrote numeric-prelude, which dramatically reorganized the hierarchy of numeric classes and generally moved things much closer to their mathematical roots. While dubbed experimental, it’s seen airplay in more mathematics oriented Haskell modules such as Yorgey’s species package.
Any list function. Many data structures look and smell like lists, and support some set of the operations analogous to those on lists. Most modules rely on naming convention, and as a result, list-like constructs like vectors, streams, bytestrings and others ask you to import themselves qualified. However, there is Data.ListLike which attempts to encode similarities between these. Prelude.Listless offers a version of the Prelude minus list functions.
Monad, Functor. It is widely believed that Monad should probably be a subclass of Applicative (and the category theorists might have you insert Pointed functors into the hierarchy too). The Other Prelude contains this alternative organization, although it is cumbersome to use in practice, since the new class hierarchy means most existing monad libraries are not usable.
repeat, until. There is an admittedly oddball generalization for these two functions in Control.Monad.HT. In particular, repeat generalizes the identity monad (explicit (un)wrapping necessary), and until generalizes the (->) a monad.
map. It’s fmap for lists.
zip, zipWith, zipWith3, unzip. Conal’s Data.Zip generalizes zipping into the Zip type class.
IO. By far you’ll see the most variation here, with a multitude of modules working on many different levels to give extra functionality. (Unfortunately, they’re not really composable…)
- System.IO.Encoding makes the IO functions encoding aware, and uses implicit parameters to allow for a “default encoding.” Relatedly, System.UTF8IO exports functions just for UTF-8.
- System.IO.Jail lets you force input-output to only take place on whitelisted directories and/or handles.
- System.IO.Strict gives strict versions of IO functions, so you don’t have to worry about running out of file handles.
- System.Path.IO, while not quite IO per se, provides typesafe filename manipulation and IO functions to use those types accordingly.
- System.IO.SaferFileHandles allows handles to be used with monadic regions, and parametrizes handles on the IO mode they were opened with. System.IO.ExplicitIOModes just handles IOMode.
May 28, 2010

Attention conservation notice. Equivalent Haskell and Python programs are presented for retrieving values from a data structure using state. We then refactor the Haskell program into one that has no state, just a monoid.
A pretty frequent thing a working programmer needs to do is extract some values (frequently more than one) from some data structure, possibly while keeping track of extra metadata. I found myself writing this code the other day:
getPublicNames :: Module -> State (Map Name (Set ModuleName)) ()
getPublicNames (Module _ m _ _ (Just exports) _ _) = mapM_ handleExport exports
    where handleExport x = case x of
              EVar (UnQual n) -> add n
              EAbs (UnQual n) -> add n
              EThingAll (UnQual n) -> add n
              EThingWith (UnQual n) cs -> add n >> mapM_ handleCName cs
              _ -> return ()
          handleCName x = case x of
              VarName n -> add n
              ConName n -> add n
          add n = modify (Map.insertWith Set.union n (Set.singleton m))
getPublicNames _ = return ()
Briefly, getPublicNames traverses the Module data structure looking for “public names”, and every time it finds a name, it records that the current module contains that name. This lets me efficiently ask the question, “How many (and which) modules use the name FOO?”
A transcription in Python might look like:
def getPublicNames(module, ret=None):
    if ret is None:
        ret = defaultdict(set)
    if module.exports is None:
        return ret
    for export in module.exports:
        if isinstance(export, (EVar, EAbs, EThingAll)):
            ret[export.name].add(module.name)
        elif isinstance(export, EThingWith):
            ret[export.name].add(module.name)
            for cname in export.cnames:
                ret[cname.name].add(module.name)  # mirrors handleCName above
    return ret
There are a number of cosmetic differences between these two versions:
- The Python version optionally takes in pre-existing state; otherwise it does the initialization itself and is referentially transparent. The Haskell version has no such notion of default state; we trust that the user can run the state monad with a simple runState.
- The Python version takes advantage of duck typing to reduce code; I’ve also played fast and loose with the hypothetical object-oriented equivalent data structure.
- The Python version doesn’t have its code separated into handleExport and handleCName, although we certainly could have done so with a few more inline functions.
But other than that, they pretty much read and operate precisely the same way, by mutating state. The Python version is also pretty much the end of the road; besides pushing the functions into their member objects, I believe there is no more “Pythonic” way to do it. The Haskell version is making me itchy though…
We’re never reading out state! This is a tell-tale sign that we should be using a Writer monad, not a State monad. There is a slight technical difficulty, though: Writer requires that the value being “logged” is a Monoid, and while, in theory, Map k (Set a) certainly has a Monoid instance that does what we mean, the general Monoid instance for Map k v doesn’t cut it. Recall that a monoid describes data that I can “append” together to form another version of that data. For a SetMap,
- We want a monoid instance that takes two SetMap structures and unions the maps, resolving duplicate keys by unioning their sets.
- By default, we get a monoid instance that takes two Map structures and unions the maps, preferring the original value when a conflict occurs and discarding the rest. (A small demonstration follows.)
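Here is a quick sketch of the difference, with hypothetical values you can poke at in GHCi:

import qualified Data.Map as Map
import qualified Data.Set as Set

m1, m2 :: Map.Map String (Set.Set Int)
m1 = Map.fromList [("x", Set.fromList [1])]
m2 = Map.fromList [("x", Set.fromList [2])]

bad, good :: Map.Map String (Set.Set Int)
bad  = Map.union m1 m2                -- fromList [("x",fromList [1])]; {2} is lost
good = Map.unionWith Set.union m1 m2  -- fromList [("x",fromList [1,2])]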
Newtype to the rescue. A newtype is in order. We’ll call it SetMap. The recipe to follow for cooking up the newtype is as follows:
First, you need a newtype declaration. Explicitly naming the field in record syntax as unDataType is idiomatic, and invokes “unwrapping” the newtype wrapper from the object:
newtype SetMap k v = SetMap { unSetMap :: Map k (Set v) }
Next, you write the special type class instances you are interested in. (And possibly use deriving ... to pull in any old, default instances that are still good).
instance (Ord k, Ord v) => Monoid (SetMap k v) where
    mempty = SetMap Map.empty
    mappend (SetMap a) (SetMap b) = SetMap $ Map.unionWith Set.union a b
    mconcat = SetMap . Map.unionsWith Set.union . map unSetMap
Perhaps some helper functions are in order:
setMapSingleton :: (Ord k, Ord v) => k -> v -> SetMap k v
setMapSingleton k v = SetMap $ Map.singleton k (Set.singleton v)
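A quick check that the instance does what we mean, using the definitions above (demo is a hypothetical name):

demo :: Map.Map String (Set.Set Int)
demo = unSetMap (setMapSingleton "x" 1 `mappend` setMapSingleton "x" 2)
-- fromList [("x",fromList [1,2])]: the sets union instead of clobbering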
And voila!
getPublicNames :: Module -> Writer (SetMap Name ModuleName) ()
getPublicNames (Module _ m _ _ (Just exports) _ _) = mapM_ handleExport exports
    where handleExport x = case x of
              EVar (UnQual n) -> add n
              EAbs (UnQual n) -> add n
              EThingAll (UnQual n) -> add n
              EThingWith (UnQual n) cs -> add n >> mapM_ handleCName cs
              _ -> return ()
          handleCName x = case x of
              VarName n -> add n
              ConName n -> add n
          add n = tell (setMapSingleton n m) -- *
getPublicNames _ = return ()
Wait, we made our code more specific, and somehow it got longer! Perhaps, gentle reader, you might be slightly reassured by the fact that the new SetMap support code, which forms the bulk of what we wrote, is highly general and reusable, and, excluding that code, we’ve slightly reduced the code from add n = modify (Map.insertWith Set.union n (Set.singleton m)) to add n = tell (setMapSingleton n m).
Perhaps more importantly, we’ve now indicated to the end user a new contract for this function: we will only ever write values out, not change them.
Why were we using the monad again? Closer inspection further reveals that we’re never using bind (>>=). In fact, we’re not really using any of the power of a monad. Let’s make our code even more specific:
-- This operator is going into base soon, I swear!
(<>) = mappend
getPublicNames :: Module -> SetMap Name ModuleName
getPublicNames (Module _ m _ _ (Just exports) _ _) = foldMap handleExport exports
    where handleExport x = case x of
              EVar (UnQual n) -> make n
              EAbs (UnQual n) -> make n
              EThingAll (UnQual n) -> make n
              EThingWith (UnQual n) cs -> make n <> foldMap handleCName cs
              _ -> mempty
          handleCName x = case x of
              VarName n -> make n
              ConName n -> make n
          make n = setMapSingleton n m
getPublicNames _ = mempty
There’s not much of a space change, but users of this function now no longer need to execWriter; they can use the output right off the bat (although they might need to unpack it eventually with unSetMap).
Technically, we never needed the monoid. In particular, setMapSingleton forces our code to cater to SetMap, and not Monoids in general (that wouldn’t really make any sense either; perhaps the notion of a “Pointed” Monoid would be useful). So we could have just written out all of our functions explicitly; more likely, we would have defined another set of helper functions to keep code size down. But you should still use the monoid. Monoids behave in certain ways (e.g. the monoid laws) and have a canonical set of functions that operate on them. By using those functions, you allow other people who have worked with monoids to quickly reason about your code, even if they’re not familiar with your specific monoid.
Postscript. I refactored real code while writing this blog post; none of the examples were contrived. I was originally planning on writing about “You ain’t gonna need it” and Haskell abstractions, but fleshing out this example ended up being a bit longer than I expected. Maybe next time…
Post-postscript. Anders Kaseorg writes in to mention that SetMap has been implemented in several places (Criterion.MultiMap, Holumbus.Data.MultiMap), but it hasn’t been put in a particularly general library.
May 26, 2010

One of the papers I’ve been slowly rereading since summer began is “Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire”, by Erik Meijer, Maarten Fokkinga and Ross Paterson. If you want to know what {cata,ana,hylo,para}morphisms are, this is the paper to read: section 2 gives a highly readable formulation of these morphisms for the beloved linked list.
Last time, however, my eyes got a little bit glassy when they started discussing algebraic data types, despite having used and defined them in Haskell; part of me felt inundated in a sea of triangles, circles and squiggles, and by the time they reached the laws for the basic combinators, I might as well have said, “It’s all math to me!”
A closer reading revealed that, actually, all of these algebraic operators can be written out in plain Haskell, and for someone who has been working with Haskell for a little bit of time, this can provide a smoother (albeit more verbose) reading. Thus, I present this translation guide.
Type operators. By convention, types are $A, B, C\ldots$ on the left and a, b, c... on the right. We distinguish these from function operators, though the paper does not and relies on convention to distinguish between the two.
$A \dagger B \Leftrightarrow$ Bifunctor t => t a b
$A_F \Leftrightarrow$ Functor f => f a
$A^* \Leftrightarrow$ [a]
$D \parallel D' \Leftrightarrow$ (d, d')
$D\ |\ D' \Leftrightarrow$ Either d d'
$A_I \Leftrightarrow$ Identity a
$\underline{D} \Leftrightarrow$ Const d
$A_{(FG)} \Leftrightarrow$ (Functor f, Functor g) => g (f a)
$A_{(F\dagger G)} \Leftrightarrow$ (Bifunctor t, Functor f, Functor g) => Lift t f g a
$\boldsymbol{1} \Leftrightarrow$ ()
(For the pedantic, you need to add Hask Hask Hask to the end of all the Bifunctors.)
Function operators. By convention, functions are $f, g, h\ldots$ on the left and f :: a -> b, g :: a' -> b', h... on the right (with types unified as appropriate).
$f \dagger g \Leftrightarrow$ bimap f g :: Bifunctor t => t a a' -> t b b'
$f_F \Leftrightarrow$ fmap f :: Functor f => f a -> f b
$f \parallel g \Leftrightarrow$ f *** g :: (a, a') -> (b, b')
where f *** g = \(x, x') -> (f x, g x')
$\grave{\pi} \Leftrightarrow$ fst :: (a, b) -> a
$\acute{\pi} \Leftrightarrow$ snd :: (a, b) -> b
$f \vartriangle g \Leftrightarrow$ f &&& g :: a -> (b, b') -- a = a'
where f &&& g = \x -> (f x, g x)
$\Delta x \Leftrightarrow$ double :: a -> (a, a)
where double x = (x, x)
$f\ |\ g \Leftrightarrow$ asum f g :: Either a a' -> Either b b'
where asum f g (Left x) = Left (f x)
asum f g (Right y) = Right (g y)
$\grave{\i} \Leftrightarrow$ Left :: a -> Either a b
$\acute{\i} \Leftrightarrow$ Right :: b -> Either a b
$f\ \triangledown\ g \Leftrightarrow$ either f g :: Either a a' -> b -- b = b'
$\nabla x \Leftrightarrow$ extract x :: a
where extract (Left x) = x
extract (Right x) = x
$f \rightarrow g \Leftrightarrow$ (f --> g) h = g . h . f
(-->) :: (a' -> a) -> (b -> b') -> (a -> b) -> a' -> b'
$g \leftarrow f \Leftrightarrow$ (g <-- f) h = g . h . f
(<--) :: (b -> b') -> (a' -> a) -> (a -> b) -> a' -> b'
$(f \overset{F}{\leftarrow} g) \Leftrightarrow$ (g <-*- f) h = g . fmap h . f
(<-*-) :: Functor f => (f b -> b') -> (a' -> f a) -> (a -> b) -> a' -> b'
$f_I \Leftrightarrow$ id f :: a -> b
$f_{\underline{D}} \Leftrightarrow$ const id f :: a -> a
$x_{(FG)} \Leftrightarrow$ (fmap . fmap) x
$VOID \Leftrightarrow$ const ()
$\mu f \Leftrightarrow$ fix f
Now, let’s look at the abides law:
$(f \vartriangle g)\ \triangledown\ (h \vartriangle j) = (f\ \triangledown\ h) \vartriangle (g\ \triangledown\ j)$
Translated into Haskell, this states:
either (f &&& g) (h &&& j) = (either f h) &&& (either g j)
Which (to me at least) makes more sense: if I want to extract a value from Either, and then run two functions on it and return the tuple of results, I can also split the value into a tuple immediately, and extract from the either “twice” with different functions. (Try running the function manually on a Left x and Right y.)
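Concretely, here is a small sketch with arbitrarily chosen functions (any would do):

import Control.Arrow ((&&&))

f, g, h, j :: Int -> Int
f = (+ 1); g = (* 2); h = subtract 1; j = (`div` 2)

lhs, rhs :: Either Int Int -> (Int, Int)
lhs = either (f &&& g) (h &&& j)
rhs = either f h &&& either g j

-- lhs (Left 10)  == (11, 20) == rhs (Left 10)
-- lhs (Right 10) == (9, 5)   == rhs (Right 10)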
May 24, 2010

Consider the following piece of code:
import Prelude hiding (catch)
import Control.Exception
main :: IO ()
main = do
    t <- safeCall
    unsafeCall t
    putStrLn "Done."

safeCall :: IO String
safeCall = return alwaysFails `catch` errorHandler

--alwaysFails = throw (ErrorCall "Oh no!")
alwaysFails = error "Oh no!"

errorHandler :: SomeException -> IO String
errorHandler e = do
    putStrLn "Caught"
    return "Ok."

errorHandler_ e = errorHandler e >> return ()

unsafeCall :: String -> IO ()
unsafeCall = putStrLn
What might you expect the output to be? A straightforward transcription to Python might look like:
def main():
    t = safeCall()
    unsafeCall(t)
    print "Done"

def safeCall():
    try:
        return alwaysFails()
    except:
        return errorHandler()

def alwaysFails():
    raise Exception("Oh no!")

def errorHandler():
    print "Caught."
    return "Ok."

def unsafeCall(output):
    print output
and anyone with a passing familiarity with any strict language will say, “Of course, it will output:”
Caught.
Ok.
Done.
Of course, lazy exceptions (which is what error emits) aren’t called lazy for no reason; the Haskell code outputs:
*** Exception: Oh no!
What happened? Haskell was lazy, and didn’t bother evaluating the pure insides of the IO return alwaysFails until it needed it for unsafeCall, at which point there was no more catch call guarding the code. If you don’t believe me, you can add a trace around alwaysFails. You can also try installing errorHandler_ on unsafeCall.
What is the moral of the story? Well, one is that error is evil, but we already knew that…
- You may install exception handlers for most IO-based errors the obvious way. (If we had replaced return alwaysFails with alwaysFails, the result would have been the strict one.) You may not install exception handlers for errors originating from pure code, since GHC reserves the right to arbitrarily schedule when your code is executed.
- If pure code is emitting exceptions and you would like it to stop doing that, you’ll probably need to force strictness with $!, deepseq or rnf, which will force GHC to perform the computation inside your guarded area; a sketch follows this list. As my readers point out, a good way to think about this is that the call is not what is exceptional; the structure is.
- If you are getting an imprecise exception from pure code, but can’t figure out where, good luck! I don’t have a good recipe for figuring this out yet. (Nudge to my blog readers.)
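For the example above, a minimal sketch of that fix (safeCall' is a hypothetical name): force the pure value while the handler is still installed. Here $!, which forces to weak head normal form, is enough, since the error is at the head; a deeper structure would want deepseq.

safeCall' :: IO String
safeCall' = (return $! alwaysFails) `catch` errorHandler

-- or, equivalently, using Control.Exception.evaluate:
-- safeCall' = evaluate alwaysFails `catch` errorHandler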
Postscript. Note that we needed to use Control.Exception.catch. Prelude.catch, as per Haskell98, only catches IO-based errors.
May 21, 2010

Now that term is over, I finally went and upgraded my laptop to Ubuntu 10.04 LTS, Lucid Lynx. The process went substantially more smoothly than the Karmic upgrade did, but there were still a few hiccups.
Etckeeper. As always, you should set AVOID_COMMIT_BEFORE_INSTALL to 0 before attempting a release upgrade, since etckeeper hooks will be invoked multiple times, and there’s nothing more annoying than getting the notice “etckeeper aborted the install due to uncommitted changes, please commit them yourself” when there is no way that’s going to work.
Well, this time round there was a different, hilarious bug:
/etc/etckeeper/post-install.d/50vcs-commit: 20:
etckeeper: Argument list too long
This has been reported as Bug #574244. Despite the ominous warning, it is actually quite harmless, and you can complete the upgrade with:
aptitude update
aptitude full-upgrade
I had to kick the network due to broken DNS; your mileage may vary.
Wireless key management. I have not resolved this issue yet, but the basic symptom is that Ubuntu’s network-manager fails to remember WEP keys you have provided for secured networks. (I know you MIT students still on campus aren’t too bothered by this.) This appears to be a moderately widespread problem, with people revivifying permutations of this bug from a long time ago. (In classic terrible bug-reporting style, users are attaching themselves to old bug reports when they really should be filing a new regression for Lucid.)
From what I’ve investigated, I have been able to verify that connections to the keyring daemon are not working. There is a fix circulating in which you change your startup command from “gnome-keyring-daemon --start --components=pkcs11” to just “gnome-keyring-daemon”, although I suspect this isn’t really the “right” thing, and it doesn’t work for me anyway.
PHP. Ubuntu Lucid most notably has upgraded to PHP 5.3.2, but they’ve also fiddled with some of the default settings. In my case, log_errors was causing quite interesting behavior for my scripts, and I have since coded my scripts to explicitly turn this ini setting off. You should save a copy of the output of php -i prior to the upgrade and compare it with the output afterwards.
May 19, 2010

Last February, I posted about classes that I was going to be taking. Here are some reflections, now that final projects and examinations are over.
6.005: Software Construction. Teaching students how to engineer large software projects is one of the oddest paradoxes that you might encounter in academic life. The institute is certainly capable of teaching concepts, tools and methodologies, but to actually be capable of constructing a system from scratch? It’s not really something you can learn; it’s something that you have to do. And the amount of work you have to put in to start getting the taste of real code, as opposed to school code (which gets thrown away at the end of the term), doesn’t fit into one term. We’ve joked that MIT ought to have a two-part series, where in the second part you are asked to go modify some code you wrote a year ago in the face of shifting requirements. (Unfortunately, I suspect a large number of people would rewrite the thing: one of the problems of not actually being able to do large systems.)
When you get past this fundamental issue, the class was relevant and even a smidge enjoyable. While I didn’t personally get much out of it, I was delighted to see the class trace a course across all of the big tricky ideas that I encountered when I was cutting my teeth as a programmer: roughly, the course can be divided into state machines, functional ideas and relational modeling. Despite what others say, I find these formalisms useful; they were key in developing my intuition for what a traditional imperative program should smell like. Unfortunately, each of these is a really big idea, and the course doesn’t manage to do them justice.
6.02: Intro to EECS II. MIT seems to like threes: for 6.02 the big three were signals, encodings and networks. The class was a pleasant walk through the three subjects, even though, while the class professes to be “introductory”, none of the topics are what I’m really interested in these days in computer science.
One of the notable rough patches I hit taking the course was when the class hit frequency analysis. I’m a big believer in understanding the underlying principles behind complex systems: it’s one of the reasons why the calculus-driven mechanical physics class worked so much better for me. Here, this predisposition was counterproductive: as Robert put it (and I paraphrase), yes, you could do it that way, but it is messy and not particularly insightful.
6.045: Automata, Computing and Complexity. So much fun! Scott Aaronson is a charming lecturer, and after dealing with the bread and butter of automata and complexity (which the course teaches well; as one math major taking the class put it, “I can actually understand these lectures!”), it veers off into the magical worlds of cryptography, probably approximately correct learning and quantum computing (three out of ten questions on the comprehensive final, to be precise). By the end, you will know how to conduct a Diffie-Hellman key exchange, why babies might be able to break RSA, and when to apply a Hadamard gate to a qubit! Unfortunately the graders weren’t exactly “quick” about grading problem sets, but in my opinion 6.045’s troubles were solely administrative.
6.945: Large-scale Symbolic Systems. The subject matter of the class is well worth having in the toolkit of any enterprising programmer: combinators, pattern matching and generic dispatch are all powerful tools with wide applicability in many systems. You also learn how to use continuations (gee!)
Sussman as a lecturer is an interesting phenomenon, especially when you reach the final lectures, when they are essentially talking about ideas that they cooked up the night before. It’s rare that electrical engineering and highly symbolic programming come together, but that’s precisely the problem Sussman knows best: he knows how to do circuit engineering, and he wants to figure out the implementation details of an essentially artificially intelligent system that has this knowledge too. Unfortunately, if you’re not completely versed in this analysis, the analogies being made may be difficult to understand, and this was a primary blocking point later on in the term. Feedback was late to nonexistent; take this course if you don’t need too much motivation to learn about these things.
21M.283: Musicals of Stage and Screen. I watched lots of musicals. It was great.
May 17, 2010

It is a truth universally acknowledged that if you really want a change put into an open source project, you submit a patch along with the bug report. Sure, you might complain that the average user doesn’t have any programming experience and that it’s unreasonable to expect them to learn some complex system and then figure out how to make the change they’re looking for, but you’re just not in the secret society of hacker enthusiasts who fix their own damn software and give back to the projects they use.
I hate patches. I feel morally obligated to review and patch them in, and it usually ends up taking more than just “a few cycles.”
Not all patches are created equal. I group them into this hierarchy:
- Cosmetically deficient. The developer doesn’t know anything about submitting patches; perhaps they haven’t even discovered version control yet. They’re apt to send the entire modified file along with their changes. Those who do use diff -u don’t bother looking at the output of the patch; they submit patches that swap spaces with tabs and make random whitespace changes and gratuitous cosmetic changes. Many developers simply reject these patches.
- Semantically deficient. For some developers, the act of crafting a patch is taking a source file, trying a vaguely plausible change, seeing if the change had the desired effect, and if not, trying something else. In the degenerate case, the patch is nonsense, and in no way correct. More frequently, the submitted patch fails to account for common edge cases in the application, appropriate error handling, or interactions with other parts of the system. Many developers will reply nicely to patches like this and ask for a hunk to be done another way.
- Engineering deficient. The patch is well written: it looks good and does the right things. But… they didn’t add tests for the new changes, they didn’t fix old unit tests affected by the functionality difference, and they didn’t add documentation in the appropriate places for the fix. Many developers will buckle down and do the engineering extras for the patch. Some projects don’t have such tests (cough Linux kernel cough). Even more rarely, some projects can afford to make the patch submitter add the tests; usually this only occurs in projects aimed towards a fairly programming-literate end-user community.
The Git mailing list can and does expect excellent patch submissions from its community; it’s a version control system, that’s the point! A library written in PHP, used primarily by developers who have never written a unit test or submitted a unified diff for upstream review, has much less flexibility. Most patches I receive for HTML Purifier never make it past the cosmetic deficiencies. And worse, the submitters simply don’t have the time to interactively improve the patch to the end: if I reply with a patch review, they never manage to get their patch to the point where it’s acceptable for submission without excessive tooth pulling. But I feel guilty that my software is wrong, and so when I get the patch, I go and clean it up, incorporate it, rewrite half of it, add the tests and then ship the change.
So, in the end, the software is improved by the submission of the patch, even if it didn’t save the maintainer any time. So yeah, I hate patches. Maybe I should stop being grumpy and go back to improving my open source projects.
May 14, 2010

Here at the eve of the Spring 2010 term, I decided to run this little experiment on my laptop: what files had I modified within the last six months?
find . \( -path '*/.*' \) -prune -o -mtime -180 -print
The result was north of one hundred and fifty thousand modified files. Here’s the (slightly) abridged version:
- LaTeX files for “Adventures in Three Monads”, an article that ran in the Monad Reader. Also, blackboard diagrams for the Advanced Typeclasses class I gave that IAP; I ended up not being able to get to the material I prepared for the Reader.
- valk.txt, which contained notes for my Valkyrie Nethack character. I made my first ascension on March {24,25}th.
- An Eclipse Java project that served as the jumpboard for my HAMT experiments, under the purview of my undergraduate research project.
- htmlpurifier-web and htmlpurifier, courtesy of the HTML Purifier 4.1 release I pushed within the last month. (I suspect there will be another release coming soon too.) This also meant new builds of PHP 5.2.11, 5.2.12, 5.2.13, 5.3.1 and 5.3.2 for my super-amazing PHP multi-version farm. Note to self: next time, exclude the build directories from your automated backups, kthxbai.
- A qemu checkout, in which I attempted to fix their broken DHCP code when the same MAC address requests two different IP addresses, gave up, and assigned static addresses to the virtual machines we were using to demo live process migration. Mmm… 6.828 final project.
- hackage-server and holumbus checkouts, sprung from aborted dreams of making Holumbus and Hackage cooperate to offer an up-to-the-minute index of all Haskell functions. I hear the Holumbus team has been making changes to let Hayoo incrementally update its index.
- Updates to tidy up extort, a membership dues tracking application written in Haskell, due to the recent change of leadership in the Assassins’ Guild. During the replacement election, one of the suggested candidate questions was “Do you know Haskell?” We’ll see how long the program lasts…
- An abc source directory, in which I flexed my C source-diving skills and searched for information on how to use the library. I may be working closely with it in my internship at Galois. Curiously enough, this roughly coincided with the SAT solver that was to be written for 6.005, as well as the study of SAT in my computational complexity class, 6.045.
- A mit-scheme checkout, in order to analyze their red-black tree implementation and figure out how easily it could be persisted (the answer was no, and I had to write my own implementation off of Okasaki’s notes), and to figure out why --batch-mode didn’t do what it said on the tin.
- A log4j source tree, which I used for two of my Software Engineering 6.005 projects. It was mostly painless to use, and I highly recommend it if you’re building software in Java.
- Lots of test directories for wizard (note to self: backing those up is also a bad idea!). Some day I’ll unleash this software on the world, but for now its usage is growing within the MIT sphere.
The really abridged version:
- Languages of the half-year: Java, Haskell and Scheme
- Biggest directory: I didn’t count rigorously, but linux, wizard and hackage were pretty big.
- Best filename: ./Dev/exploits/jessica_biel_naked_in_my_bed.c
It’s been a long random walk through lots of subjects, software and self-research. There is some trade-off in swapping subjects rather than really focusing in after a month: on the one hand, a month isn’t really enough time to do anything deep in a space (I feel much the same about my blog posts; they’re an excuse to do little experiments and segues, but nothing big); on the other hand, it means I keep getting views of many specific subfields of computer science. With summer coming up soon, I will probably find another ambitious project to drive through my free time (or perhaps give some of my existing projects some of the love they need).