### Abstraction without a concrete concept

#### by Edward Z. Yang

Hoare logic, despite its mathematical-sounding name, is actually a quite practical way of reasoning about programs, one that most software engineers subconsciously employ in the form of preconditions and postconditions. It explicitly axiomatizes things that are common sense to a programmer: for example, that a NOP should not change any conditions, or that if one line of code has a postcondition which another line has as its precondition, those lines can be executed one after another and the inner precondition-postcondition pair ignored. Even if you never actually write out the derivation chains, you’re informally applying Hoare logic when you review code that uses preconditions and postconditions. Hoare logic is an abstraction that lets us rigorously talk about any imperative language with the same set of rules.
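In the usual notation, those two pieces of common sense are the skip axiom and the rule of sequential composition (these are the standard rules of Hoare logic, not anything specific to the talk):

```
{P} skip {P}                        (a NOP changes no conditions)

{P} c₁ {Q}      {Q} c₂ {R}
──────────────────────────          (the inner Q can be ignored)
      {P} c₁ ; c₂ {R}
```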

At my very first Semantics lunch, I was treated to a talk by Tony Hoare entitled Abstract Separation Algebra. Unhappy that separation logic requires you to talk about three things (the precondition, the actual code, and the postcondition), Hoare re-encoded the scheme as vanilla abstract algebraic structures. While his slides all seemed plausible, my abstract algebra wasn’t facile enough to verify all of his results as they flashed by. Accordingly, I’m not going to talk too much about the actual re-encoding (I need to work it out myself when I get some free time, ha!). However, I caught a whiff of the underlying thought process driving it, which I thought was a fascinating way of thinking: take an axiom X of your abstraction that you would like to be true, and from a more fundamental pool of axioms (in this case, the usual axioms of abstract algebra) figure out which ones are necessary to make axiom X logically true. This is mind-twisting in several ways, not the least of which is that you end up with a set of primitives for which you may not even have a concrete model!

I will admit, I am a little afraid that the mathematicians in the audience will think, “Well, of course that’s how you pick your set of axioms! And pff, concrete models are for the weak.” (After all, it was said in the talk, “This is abstract algebra, so you should be willing to accept these axioms without having any model in mind.”) But given the questions asked during the talk, among them “so what intuitively does ★ mean?” (for which there was no answer), I think I am justified in claiming that for some of us mortals, this is a rather strange way of thinking about things.

For example, consider the first example Hoare gave: let (C, ;, ⊑) be an ordered semigroup (that is, a carrier set C equipped with an associative binary operation ; that is monotonic with respect to a partial ordering relation ⊑). Then we can define the Hoare triple `{p} q {r}` as the relation p ; q ⊑ r. I immediately want to know: what does C represent (it appears to contain preconditions, lines of code, and postconditions alike: as Simon Peyton Jones pointed out, the encoding doesn’t seem to distinguish assertions from code), and what does the relation ⊑ represent? But these are rather unimportant questions, since if we shake this algebra-tree, the fruit of sequential composition drops down by monotonicity.
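Spelling out that derivation (my own reconstruction of the step, using nothing but associativity and monotonicity): suppose `{p} q₁ {r}` and `{r} q₂ {s}`, i.e., p ; q₁ ⊑ r and r ; q₂ ⊑ s. Then:

```
p ; (q₁ ; q₂) = (p ; q₁) ; q₂       (associativity of ;)
              ⊑ r ; q₂              (monotonicity, using p ; q₁ ⊑ r)
              ⊑ s                   (assumption r ; q₂ ⊑ s)
```

which is exactly `{p} q₁ ; q₂ {s}`: the sequential composition rule, with no mention of program states anywhere.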

Alas, the humble ordered semigroup isn’t enough to encode all of the things we need, so Hoare ends up bringing in monoidal lattices and special operators that obey the abides law in order to arrive at the “Separation Algebra”.
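To give a flavor of why such laws earn their keep, here is a sketch (my own, not from the slides) of how the frame rule of separation logic falls out, assuming ε is a unit for both ; and ★, ★ is monotonic with respect to ⊑, and ; and ★ satisfy the abides (exchange) inequality (a ★ b) ; (c ★ d) ⊑ (a ; c) ★ (b ; d):

```
Suppose {p} c {q}, i.e., p ; c ⊑ q. Then:

(p ★ r) ; c = (p ★ r) ; (c ★ ε)     (ε is a unit for ★)
            ⊑ (p ; c) ★ (r ; ε)     (abides/exchange law)
            = (p ; c) ★ r           (ε is a unit for ;)
            ⊑ q ★ r                 (monotonicity of ★)
```

which reads back as `{p ★ r} c {q ★ r}`: the frame rule.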

As he admits, he is cheating slightly: he is bringing in precisely the algebraic properties he needs in order to get the properties he wants. But what makes the result notable is that *all he needed* were the well-studied structures of vanilla abstract algebra. Consequently, when we manipulate our newly encoded Hoare triples, we can rely on the good old techniques of abstract algebra (which, for those of us who never took abstract algebra, may not be “good” or “old”).

So what do ⊑, ★, ε and ⊔ mean? Because of how we’ve constructed our algebraic structure, we don’t necessarily know! (Although, I’m sure some concrete model is out there just waiting to be discovered. Maybe it’s been discovered already.) Perhaps this is all just an illustration of the category theory mantra: “The self is unimportant: only its relation to others.” A bit zen, if you ask me.

*Postscript.* There was a nice comment about how the duality between the implementation and specification algebras resembled the Stern dualities: very roughly, that two points determine a line, but a line determines an infinite number of points. I went and searched for the term but couldn’t find any literature about this, so perhaps some pointers would be appreciated.


A famous example of starting with axioms then working out models is modal logic and its Kripke semantics…

Do you know if the slides for this talk will be available?

Not sure: I should poke Tony about it.

“it appears to contain postconditions, lines of code, and preconditions: as Simon Peyton Jones pointed out, it doesn’t seem to distinguish these two types of code”

I recall an example where conditions and commands are not distinguished: “Kleene algebra with tests” (KAT).