ezyang’s blog

the arc of software bends towards understanding

How to maintain a pristine copy of your configuration files

etckeeper is a pretty good tool for keeping your /etc under version control, but one thing it won’t tell you is the diff between your configuration and a pristine version of it (that is, what you would have if you had installed the same packages on the system but never changed any configuration). People have wanted this, but I couldn’t find anything that actually did it. A month ago, I figured out a nice, easy way of achieving this under etckeeper with a Git repository. The idea is to maintain a pristine branch, and when an upgrade occurs, automatically apply the (automatically generated) patch to the pristine branch. This procedure works best on a fresh install, since I don’t have a good way of reconstructing history if you haven’t been tracking the pristine branch from the start.

Here’s how it goes:

  1. Install etckeeper. It is best if you are using etckeeper 1.10 or later, but if not, you should replace 30store-metadata with a copy from the latest version. This is important, because pre-1.10, the metadata store included files that were ignored, which means you’ll get lots of spurious conflicts.

  2. Initialize the Git repository using etckeeper init and make an initial commit with git commit.

  3. Create a pristine branch: git branch pristine (but stay on the master branch)

  4. Modify the etckeeper configuration so that VCS="git", AVOID_DAILY_AUTOCOMMITS=1 and AVOID_COMMIT_BEFORE_INSTALL=1:

    diff --git a/etckeeper/etckeeper.conf b/etckeeper/etckeeper.conf
    index aedf20b..99b4e43 100644
    --- a/etckeeper/etckeeper.conf
    +++ b/etckeeper/etckeeper.conf
    @@ -1,7 +1,7 @@
     # The VCS to use.
     #VCS="hg"
    -#VCS="git"
    -VCS="bzr"
    +VCS="git"
    +#VCS="bzr"
     #VCS="darcs"
    
     # Options passed to git commit when run by etckeeper.
    @@ -18,7 +18,7 @@ DARCS_COMMIT_OPTIONS="-a"
    
     # Uncomment to avoid etckeeper committing existing changes
     # to /etc automatically once per day.
    -#AVOID_DAILY_AUTOCOMMITS=1
    +AVOID_DAILY_AUTOCOMMITS=1
    
     # Uncomment the following to avoid special file warning
     # (the option is enabled automatically by cronjob regardless).
    @@ -27,7 +27,7 @@ DARCS_COMMIT_OPTIONS="-a"
     # Uncomment to avoid etckeeper committing existing changes to
     # /etc before installation. It will cancel the installation,
     # so you can commit the changes by hand.
    -#AVOID_COMMIT_BEFORE_INSTALL=1
    +AVOID_COMMIT_BEFORE_INSTALL=1
    
     # The high-level package manager that's being used.
     # (apt, pacman-g2, yum, zypper etc)
    
  5. Apply this patch to etckeeper/commit.d/50vcs-commit. This patch is responsible for keeping the pristine branch up-to-date (more explanation below).

  6. Create a .gitattributes file with contents .etckeeper merge=union. This makes merges on the metadata file use the union strategy, which reduces spurious conflicts dramatically:

    diff --git a/.gitattributes b/.gitattributes
    new file mode 100644
    index 0000000..b7a1f4d
    --- /dev/null
    +++ b/.gitattributes
    @@ -0,0 +1 @@
    +.etckeeper merge=union
    
  7. Commit these changes.

  8. Permit pushes to the checked-out /etc by running git config receive.denyCurrentBranch warn.

  9. All done! Try installing a package that has some configuration and then running sudo gitk in /etc to view the results. You can run a diff by running sudo git diff pristine master.

So, what’s going on under the hood? The big problem that blocked me from a setup like this in the past is that you would like the package manager to apply its changes to the pristine /etc, so that you can merge those changes into the production version yourself, but it’s not obvious how to convince dpkg that /etc lives somewhere else. Nor do you want to revert your system configuration to the pristine version, apply the update, and then revert back: this is just asking for trouble. So the idea is to apply the (generated) patch as normal, but then reapply the patch (using a cherry-pick) to the pristine branch, and then rewrite history so the parent pointers are correct. All of this happens outside of /etc, so the production copy of the configuration files never gets touched.
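
To make the mechanics concrete, here is a rough sketch of what the hook from step 5 does (illustrative only: the TMPREPO/TREEID/PARENTID names come from the script’s own error output shown below, and the exact commands are my reconstruction, not the real script):

# All of this happens in a scratch clone; /etc itself is never touched.
TMPREPO=$(mktemp -d /tmp/etckeeper-gitrepo.XXXXXXXXXX)
git clone /etc "$TMPREPO/etc"
cd "$TMPREPO/etc"
# Replay the freshly made upgrade commit onto the pristine branch:
git checkout pristine
git cherry-pick master
# Then rewrite master's tip so it keeps its tree (TREEID) but gains
# pristine as a second parent (PARENTID is the old parent); the script
# does this with low-level plumbing such as git commit-tree.
# Finally, push both branches back into /etc:
git push -f origin master
git push origin pristine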

Of course, sometimes the cherry-pick might fail. In that case, you’ll get an error like this:

Branch pristine set up to track remote branch pristine from origin.
Switched to a new branch 'pristine'
error: could not apply 4fed9ce... committing changes in /etc after apt run
hint: after resolving the conflicts, mark the corrected paths
hint: with 'git add <paths>' or 'git rm <paths>'
hint: and commit the result with 'git commit'
Failed to import changes to pristine
TMPREPO = /tmp/etckeeper-gitrepo.CUCpBEuVXg
TREEID = 8c2fbef8a8f3a4bcc4d66d996c5362c7ba8b17df
PARENTID = 94037457fa47eb130d8adfbb4d67a80232ddd214

Do not fret: all that has happened is that the pristine branch is not up-to-date. You can resolve this problem by looking at $TMPREPO/etc, where you will see some sort of merge conflict. Resolve the conflict and commit. Now you will need to manually complete the rest of the script; this can be done with:

git checkout master
git reset --hard HEAD~ # this is the commit we're discarding
git merge -s ours pristine
git push -f origin master
git push origin pristine

To make sure you did it right, go back to /etc and run git status: it should report the working directory as clean. Otherwise, there are discrepancies and you may not have done the merges correctly.

I’ve been testing this setup for a month now, and it has proceeded very smoothly (though I’ve never attempted to do a full release upgrade with this setup). Unfortunately, as I’ve said previously, I don’t have a method for constructing a pristine branch from scratch, if you have an existing system you’d like to apply this trick to. There’s nothing stopping you, though: you can always decide to start, in which case you will record just the diffs from the time you started recording pristine. Give it a spin!

  • January 20, 2014

PEPM’14: The HERMIT in the Stream

POPL is almost upon us! I’ll be live-Tumblr-ing it when the conference comes upon us proper, but in the meantime, I thought I’d write a little bit about one paper in the colocated PEPM'14 program: The HERMIT in the Stream, by Andrew Farmer, Christian Höner zu Sierdissen and Andy Gill. This paper presents an implementation of an optimization scheme for fusing away use of the concatMap combinator in the stream fusion framework, which was developed using the HERMIT optimization framework. The HERMIT project has been chugging along for some time now, and a stream of papers on various applications of the framework has been trickling out (as anyone who was at the Haskell implementors workshop can attest).

“But wait,” you may ask, “don’t we already have stream fusion?” You’d be right: but while stream fusion is available as a library, it has not replaced the default fusion system that ships with GHC: foldr/build fusion. What makes a fusion scheme good? One important metric is the number of list combinators it supports. Stream fusion nearly dominates foldr/build fusion, except for the case of concatMap, a problem which has resisted resolution for seven years and has prevented GHC from switching to using stream fusion as its default.
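
To see why concatMap is so hard, it helps to recall the core datatypes of stream fusion and the obvious definition of concatMap over streams (a sketch following Coutts et al.; the names below are mine):

{-# LANGUAGE ExistentialQuantification #-}

-- The stream fusion core: a stream is a step function plus a state,
-- with the state type hidden existentially.
data Step s a = Done | Skip s | Yield a s
data Stream a = forall s. Stream (s -> Step s a) s

-- The state of the result stream must embed an entire inner Stream,
-- whose (existential) state type depends on f. It is this nested,
-- function-dependent state that the simplifier has historically been
-- unable to specialize away.
concatMapS :: (a -> Stream b) -> Stream a -> Stream b
concatMapS f (Stream next0 s0) = Stream next (s0, Nothing)
  where
    next (s, Nothing) = case next0 s of
      Done       -> Done
      Skip s'    -> Skip (s', Nothing)
      Yield a s' -> Skip (s', Just (f a))
    next (s, Just (Stream nexti si)) = case nexti si of
      Done        -> Skip (s, Nothing)
      Skip si'    -> Skip (s, Just (Stream nexti si'))
      Yield b si' -> Yield b (s, Just (Stream nexti si'))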

As it turns out, we’ve known how to optimize concatMap for a long time; Duncan Coutts gave a basic outline in his thesis. The primary contribution of this paper was a prototype implementation of this optimization, including an elucidation of the important technical details (increasing the applicability of the original rule, necessary modifications to the simplifier, and rules for desugaring list comprehensions). The paper also offers some microbenchmarks and real world benchmarks arguing for the importance of optimizing concatMap.

I was glad to see this paper, since it is an important milestone on the way to replacing foldr/build fusion with stream fusion in the GHC standard libraries. It also seems the development of this optimization was greatly assisted by the use of HERMIT, which seems like a good validation of the framework (though the paper does not go into very much detail about how HERMIT assisted in the process of developing this optimization).

There is something slightly unsatisfying about the optimization as stated in the paper, which can be best articulated by considering the paper from the perspective of a prospective implementor of stream fusion. She has two choices:

  • She can try to use the HERMIT system directly. However, HERMIT induces a 5-20x compilation slowdown, which is quite discouraging for real use. This slowdown is probably not fundamental, and will be erased in due time, but that is certainly not the case today. The limited implementation of stream fusion in the prototype (they don’t implement all of the combinators, just enough so they could run their numbers) also recommends against direct use of the system.
  • She can directly incorporate the rules as stated into a compiler. This would require special-case code to apply the non-semantics-preserving simplifications only to streams, and would essentially require a reimplementation of the system, with the guidance offered by this paper. But this special-case code is of limited applicability beyond its utility for concatMap, which is a negative mark.

So, it seems, at least from the perspective of an average GHC user, we will have to wait a bit longer before stream fusion is in our hands. Still, I agree that the microbenchmarks and ADPFusion case study show the viability of the approach, and the general principle of the novel simplification rules seems reasonable, if a little ad hoc.

One note if you’re reading the nofib performance section: the experiment was done comparing their system to foldr/build, so the delta is mostly indicative of the benefit of stream fusion (in the text, they point out which benchmarks benefitted the most from concatMap fusion). Regardless, it’s a pretty cool paper: check it out!

  • January 17, 2014

Ott ⇔ PLT Redex

Ott and PLT Redex are a pair of complementary tools for the working semanticist. Ott is a tool for writing definitions of programming languages in a nice ASCII notation, which can then be typeset in LaTeX or used to generate definitions for a theorem prover (e.g. Coq). PLT Redex is a tool for specifying and debugging operational semantics. Both tools are easy to install, which is a big plus. Since the tools are quite similar, I thought it might be interesting to do a comparison of how various common tasks are done in both languages. (Also, I think the Redex manual is pretty terrible.)

Variables. In Ott, variables are defined by way of metavariables (metavar x), which then serve as variables (by either using the metavariable alone, or suffixing it with a number, index variable or tick).

In Redex, there is no notion of a metavariable; a variable is just another production. There are a few different ways to say that a production is a variable: the simplest method is to use variable-not-otherwise-mentioned, which automatically prevents keywords from acting as variables. There are also several other variable patterns, variable, variable-except and variable-prefix, which afford more control over what symbols are considered variables. side-condition may also be useful if you have a function which classifies variables.

Grammar. Both Ott and Redex can identify ambiguous matches. Ott will error when it encounters an ambiguous parse. Redex, on the other hand, will produce all valid parses; while this is not so useful when parsing terms, it is quite useful when specifying non-deterministic operational semantics (although this can have bad performance implications). check-redundancy may be useful to identify ambiguous patterns.

Binders. In Ott, binders are explicitly declared in the grammar using bind x in t; there is also a binding language for collecting binders for pattern-matching. Ott can also generate substitution/free variable functions for the semantics. In Redex, binders are not stated in the grammar; instead, they are implemented solely in the reduction language, usually using substitution (Redex provides a workhorse substitution function for this purpose), and explicitly requiring a variable to be fresh. Redex does have a special-form in the metalanguage for doing let-binding (term-let), which substitutes immediately.

Lists. Ott supports two forms of lists: dot forms and list comprehensions. A dot form looks like x1 , .. , xn and requires an upper bound. A list comprehension looks like </ xi // i IN 1 .. n />; the bounds can be omitted. A current limitation of Ott is that it doesn’t understand how to deal with nested dot forms; this can be worked around by doing a comprehension over a production, and then elsewhere stating the appropriate equalities the production satisfies.

Redex supports lists using ellipsis patterns, which look like (e ...). There is no semantic content here: the ellipsis simply matches zero or more copies of e, which can lead to nondeterministic matches when there are multiple ellipses. Nested ellipses are supported, and simply result in nested lists. Bounds can be specified using side-conditions; additionally, Redex supports a limited form of bounding using named ellipses (e.g. ..._1), where all ellipses with the same name must have the same length.

Semantics. Ott is agnostic to whatever semantics you want to define; arbitrary judgments can be specified. One can also define judgments as usual in Redex, but Redex provides special support for evaluation semantics, in which a semantics is given in terms of evaluation contexts, thus allowing you to avoid the use of structural rules. So a usual use-case is to define a normal expression language, extend the language to have evaluation contexts, and then define a reduction-relation using in-hole to do context decomposition. The limitation is that if you need to do anything fancy (e.g. multi-hole evaluation contexts), you will have to fall back to judgment forms.

Type-setting. Ott supports type-setting by translation into LaTeX. Productions can have custom LaTeX associated with them, which is used to generate their output. Redex has a pict library for directly typesetting into PDF or Postscript; it doesn’t seem like customized typesetting is an intended use-case for PLT Redex, though it can generate reasonable Lisp-like output.

Conclusion. If I had to say what the biggest difference between Ott and PLT Redex was, it is that Ott is primarily concerned with the abstract semantic meaning of your definitions, whereas PLT Redex is primarily concerned with how you would go about matching against syntax (running it). One way to see this is in the fact that in Ott, your grammar is a BNF, which is fed into a CFG parser; whereas in PLT Redex, your grammar is a pattern language for the pattern-matching machine. This should not be surprising: one would expect each tool’s design philosophy to hew towards their intended usage.

  • January 13, 2014

When a lock is better than an MVar

MVars are an amazingly flexible synchronization primitive, which can serve as locks, one-place channels, barriers, etc. or be used to form higher-level abstractions. As far as flexibility is concerned, MVars are the superior choice of primitive for the runtime system to implement—as opposed to just implementing, say, a lock.

However, I was recently thinking about GHC's BlockedIndefinitelyOnMVar exception, and it occurred to me that a native implementation of locks could allow perfect deadlock detection, as opposed to the approximate detection for MVars we currently provide. (I must emphasize, however, that here, I define deadlock to mean a circular waits-for graph, and not “thread cannot progress further.”)

Here is how the new primitive would behave:

  • There would be a new type Lock, with only one function withLock :: Lock -> IO a -> IO a; a user-level approximation of this interface is sketched after this list. (For brevity, we do not consider the generalization of Lock to also contain a value.)
  • At runtime, the lock is represented as two closure types, indicating locked and unlocked states. The locked closure contains a waiting queue, containing threads which are waiting for the lock.
  • When a thread takes out a free lock, it adds the lock to a (GC'd) held locks set associated with the thread. When it returns the lock, the lock is removed from this set.
  • When a thread attempts to take a busy lock, it blocks itself (waiting for a lock) and adds itself to the waiting queue of the locked closure.
  • Critically, references to the lock are treated as weak pointers when the closure is locked. (Only the pointer from the held locks set is strong.) Intuitively, just because you have a pointer to the lock doesn’t mean you can unlock it; the only thread that can unlock it is the one which has the lock in its held locks set.
  • If a thread attempts to take out a lock on a dead weak pointer, it is deadlocked.
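
As promised above, here is how the withLock interface can be approximated in user land today with an MVar. This sketch gives the same API shape, but only the existing, approximate BlockedIndefinitelyOnMVar detection rather than the perfect detection proposed here:

import Control.Concurrent.MVar
import Control.Exception (bracket_)

-- A full MVar plays the role of an unlocked Lock; an empty one, locked.
newtype Lock = Lock (MVar ())

newLock :: IO Lock
newLock = fmap Lock (newMVar ())

-- Take the lock, run the action, and always release, even on exception.
withLock :: Lock -> IO a -> IO a
withLock (Lock m) act = bracket_ (takeMVar m) (putMVar m ()) act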

Theorem. Any set of threads in a waits-for cycle is unreachable, if there are no other pointers to the threads besides the pointers from the waiting queues of the locks in the cycle.

Proof. Consider a single thread in the cycle: we show that the only (strong) pointer to it is from the previous thread in the cycle. When a thread is blocked, it is removed from the run queue (which counts as a GC root). Given the assumption, the only pointer to the thread is from the waiting queue of the lock it is blocked on. We now consider pointers to the lock it is blocked on. As this lock is busy, all pointers to it are weak, except for the pointer from the thread which is holding the lock. But this is exactly the previous thread in the cycle. ■

At the cost of a weak-pointer dereference when a lock is taken out, we can now achieve perfect deadlock detection. Deadlock will be detected as soon as a garbage collection runs that detects the dead cycle of threads. (At worst, this will be the next major GC.)

Why might this be of interest? After all, normally, it is difficult to recover from a deadlock, so while accurate deadlock reporting might be nice-to-have, it is by no means necessary. One clue comes from a sentence in Koskinen and Herlihy's paper Dreadlocks: Efficient Deadlock Detection: “an application that is inherently capable of dealing with abortable lock requests...is software transactional memory (STM).” If you are in an STM transaction, deadlock is no problem at all; just roll back one transaction, breaking the cycle. Normally, one does not take out locks in ordinary use of STM, but this can occur when you are using a technique like transactional boosting (from the same authors; the relationship between the two papers is no coincidence!).

Exercise for the reader: formulate a similar GC scheme for MVars restricted to be 1-place channels. (Hint: split the MVar into a write end and a read end.)

  • January 7, 2014

So you want to add a new concurrency primitive to GHC…

One of the appealing things about GHC is that the compiler is surprisingly hackable, even when you don’t want to patch the compiler itself. This hackability comes from compiler plugins, which let you write custom optimization passes on Core, as well as foreign primops, which let you embed low-level C-- to manipulate the low-level representation of various primitives. These hooks let people implement and distribute features that would otherwise be too unstable or speculative to put into the compiler proper.

A particular use-case that has garnered some amount of interest recently is that of concurrency primitives. We engineers like to joke that, in the name of performance, we are willing to take on nearly unbounded levels of complexity: but this is almost certainly true when it comes to concurrency primitives, where the use of ever more exotic memory barriers and concurrent data structures can lead to significant performance boosts (just ask the Linux kernel developers). It’s very tempting to look at this situation and think, “Hey, we could implement this stuff in GHC too, using the provided compiler hooks!” But there are a lot of caveats involved here.

After answering a few questions related to this subject on the ghc-devs list and noticing that many of the other responses were a bit garbled, I figured I ought to expand on my responses a bit in a proper blog post. I want to answer the following questions:

  1. What does it mean to have a memory model for a high-level language like Haskell? (You can safely skip this section if you know what a memory model is.)
  2. What is (GHC) Haskell’s memory model?
  3. How would I go about implementing a (fast) memory barrier in GHC Haskell?

Memory models are semantics

What is a memory model? If you ask a hardware person, they might tell you, “A memory model is a description of how a multi-processor CPU interacts with its memory, e.g. under what circumstances a write by one processor is guaranteed to be visible by another.” If you ask a compiler person, they might tell you, “A memory model says what kind of compiler optimizations I’m allowed to do on operations which modify shared variables.” A memory model must fulfill both purposes (a common misconception is that it is only one or the other). To be explicit, we define a memory model as follows (adapted from Adve-Boehm):

A memory model is a semantics for shared variables, i.e. the set of values that a read in a program is allowed to return.

That’s right: a memory model defines the behavior of one of the most basic operations in your programming language. Without it, you can’t really say what your program is supposed to do.

Why, then, are memory models so rarely discussed, even in a language community that is so crazy about semantics? In the absence of concurrency, the memory model is irrelevant: the obvious semantics apply. In the absence of data races, the memory model can be described quite simply. For example, a Haskell program which utilizes only MVars for inter-thread communication can have its behavior described completely using a relatively simple nondeterministic operational semantics (see Concurrent Haskell paper (PS)); software transactional memory offers high-level guarantees of atomicity with respect to reads of transactional variables. Where a memory model becomes essential is when programs contain data races: when you have multiple threads writing and reading IORefs without any synchronization, a memory model is responsible for defining the behavior of this program. With modern processors, this behavior can be quite complex: we refer to these models as relaxed memory models. Sophisticated synchronization primitives will often take advantage of a relaxed memory model to avoid expensive synchronizations and squeeze out extra performance.
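
As a concrete example, here is the classic store-buffering litmus test transcribed into Haskell (a sketch): under sequential consistency, printing (0, 0) is impossible, and whether it can happen here is exactly the kind of question a memory model must answer.

import Control.Concurrent
import Data.IORef

main :: IO ()
main = do
  x <- newIORef (0 :: Int)
  y <- newIORef (0 :: Int)
  done1 <- newEmptyMVar
  done2 <- newEmptyMVar
  -- Two unsynchronized threads racing on x and y:
  _ <- forkIO $ do writeIORef x 1; readIORef y >>= putMVar done1
  _ <- forkIO $ do writeIORef y 1; readIORef x >>= putMVar done2
  r1 <- takeMVar done1
  r2 <- takeMVar done2
  print (r1, r2)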

GHC Haskell’s memory (non) model

One might say the Haskell tradition is one that emphasizes the importance of semantics... except for a number of notable blind spots. The memory model is one of those blind spots. The original Haskell98 specification did not contain any specification of concurrency. Concurrent Haskell paper (PS) gave a description of semantics for how concurrency might be added to the language, but the paper posits only the existence of MVars, and is silent on how MVars ought to interact with IORefs.

One of the very first discussions that took place on the haskell-prime committee when it was inaugurated in 2006 was whether or not Concurrent Haskell should be standardized. In the discussion, it was quickly discovered that a memory model for IORefs would be needed (continued here). As of writing, no decision has been made as to whether or not IORefs should have a strong or weak memory model.

The upshot is that, as far as Haskell the standardized language goes, the behavior here is completely undefined. To really be able to say anything, we’ll have to pick an implementation (GHC Haskell), and we’ll have to infer which aspects of the implementation are specified behavior, as opposed to things that just accidentally happen to hold. Notably, memory models have implications for all levels of your stack (it is a common misconception that a memory barrier can be used without any cooperation from your compiler), so to do this analysis we’ll need to look at all of the phases of the GHC compilation chain. Furthermore, we’ll restrict ourselves to monadic reads/writes, to avoid having to wrangle with the can of worms that is laziness.

Here’s GHC’s compilation pipeline in a nutshell:

/img/ghc-pipeline.png

At the very top of the compiler pipeline lie the intermediate languages Core and STG. These will preserve sequential consistency with no trouble, as the ordering of reads and writes is fixed by the use of monads, and preserved throughout the desugaring and optimization passes: as far as the optimizer is concerned, the primitive operations which implement read/write are complete black boxes. In fact, monads will over-sequentialize in many cases! (It is worth remarking that rewrite rules and GHC plugins could apply optimizations which do not preserve the ordering imposed by monads. Of course, both of these facilities can be used to also change the meaning of your program entirely; when considering a memory model, these rules merely have a higher burden of correctness.)

The next step of the pipeline is a translation into C--, a high-level assembly language. Here, calls to primitive operations like readMutVar# and writeMutVar# are translated into actual memory reads and writes in C--. Importantly, the monadic structure that was present in Core and STG is now eliminated, and GHC may now apply optimizations which reorder reads and writes. What actually occurs is highly dependent on the C-- that is generated, as well as the optimizations that GHC applies, and C-- has no memory model, so we cannot appeal to even that.

This being said, a few things can be inferred from a study of the optimization passes that GHC does implement:

  • GHC reserves the right to reorder stores: the WriteBarrier mach-op (NB: not available from Haskell!) is defined to prevent future stores from occurring before preceding stores. In practice, GHC has not implemented any C-- optimizations which reorder stores, so if you have a story for dealing with the later stages of the pipeline, you can dangerously assume that stores will not be reordered in this phase.
  • GHC reserves the right to reorder loads, and does so extensively. One of the most important optimizations we perform is a sinking pass, where assignments to local variables are floated as close to their use-sites as possible. As of writing, there is no support for a read barrier which would prevent this floating from occurring.

There are a few situations where we happen to avoid read reordering (which may be dangerously assumed):

  • Reads don’t seem to be reordered across foreign primops (primops defined using the foreign prim keywords). This is because foreign primops are implemented as a jump to another procedure (the primop), and there are no inter-procedural C-- optimizations at present.
  • Heap reads don’t seem to be reordered across heap writes. This is because we currently don’t do any aliasing analysis and conservatively assume the write would have clobbered the read. (This is especially dangerous to assume, since you could easily imagine getting some aliasing information from the frontend.)

Finally, the C-- is translated into either assembly (via the NCG—N for native) or to LLVM. During translation, we convert the write-barrier mach-op into an appropriate assembly instruction (no-op on x86) or LLVM intrinsic (sequential consistency barrier); at this point, the behavior is up to the memory model defined by the processor and/or by LLVM.

It is worth summarizing the discussion here by comparing it to the documentation at Data.IORef, which gives an informal description of the IORef memory model:

In a concurrent program, IORef operations may appear out-of-order to another thread, depending on the memory model of the underlying processor architecture...The implementation is required to ensure that reordering of memory operations cannot cause type-correct code to go wrong. In particular, when inspecting the value read from an IORef, the memory writes that created that value must have occurred from the point of view of the current thread.

In other words, “We give no guarantees about reordering, except that you will not have any type-safety violations.” This behavior can easily occur as a result of reordering stores or loads. However, the type-safety guarantee is an interesting one: the last sentence remarks that an IORef is not allowed to point to uninitialized memory; that is, we’re not allowed to reorder the write to the IORef with the write that initializes a value. This holds easily on x86, due to the fact that C-- does not reorder stores; I am honestly skeptical that we are doing the right thing on the new code generator for ARM (but no one has submitted a bug yet!)

What does it all mean?

This dive into the gory internals of GHC is all fine and nice, but what does it mean for you, the prospective implementor of a snazzy new concurrent data structure? There are three main points:

  1. Without inline foreign primops, you will not be able to convince GHC to emit the fast-path assembly code you are looking for. As we mentioned earlier, foreign primops currently always compile into out-of-line jumps, which will result in a bit of extra cost if the branch predictor is unable to figure out the control flow. On the plus side, any foreign primop call will accidentally enforce the compiler-side write/read barrier you are looking for. (A sketch of declaring such a primop follows this list.)
  2. With inline foreign primops, you will still need to make modifications to GHC in order to ensure that optimization passes respect your snazzy new memory barriers. For example, John Lato’s desire for a load-load barrier (the email which kicked off this post) will be fulfilled with no compiler changes by an out-of-line foreign primop, but not by the hypothetical inline foreign primop.
  3. This stuff is really subtle; see the position paper Relaxed memory models must be rigorous, which argues that informal descriptions of memory models (like this blog post!) are far too vague to be useful: if you want to have any hope of being correct, you must formalize it! Which suggests an immediate first step: give C-- a memory model. (This should be a modest innovation over the memory models that C and C++ have recently received.)
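
For reference, here is roughly what the Haskell side of an out-of-line foreign primop looks like (a sketch: the primop name is made up, and its C-- body, which would live in a separate .cmm file linked into the program, is elided):

{-# LANGUAGE GHCForeignImportPrim, MagicHash, UnboxedTuples, UnliftedFFITypes #-}
import GHC.Exts (Int#, State#, RealWorld)
import GHC.IO (IO(IO))

-- As discussed above, merely being an out-of-line call is what
-- (accidentally) inhibits compiler-side reordering around this primop.
foreign import prim "stg_myBarrierzh"
  myBarrier# :: State# RealWorld -> (# State# RealWorld, Int# #)

myBarrier :: IO ()
myBarrier = IO (\s -> case myBarrier# s of (# s', _ #) -> (# s', () #))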

For the rest of us, we’ll use STM instead, and be in a slow but compositional and deadlock-free nirvana.

  • January 1, 2014

Two bugs in the borrow checker every Rust developer should know about

Apologies in advance: this post assumes familiarity with Rust.

Anyone who has done some coding in Rust may be familiar with the dreaded borrow checker, famous for obstructing the compilation of otherwise “perfectly reasonable code.” In many cases, the borrow checker is right: you’re writing your code wrong, and there is another, clearer way to write it that will appease the borrow checker. But sometimes, even after you’ve skimmed the tutorial, memorized the mantra “a &mut pointer is the only way to mutate the thing that it points at” and re-read the borrowed pointers tutorial, the borrow checker might still stubbornly refuse to accept your code.

If that’s the case, you may have run into one of the two (in)famous bugs in the borrow checker. In this post, I want to describe these two bugs, give situations where they show up, and describe some workarounds. This is the kind of post which I hope becomes obsolete quickly, but the fixes for these bugs are pretty nontrivial, and you are inevitably going to run into them if you try to program in Rust today.

Mutable borrows are too eager (#6268)

Summary. When you use &mut (either explicitly or implicitly), Rust immediately treats the lvalue as borrowed and imposes its restrictions (e.g. the lvalue can’t be borrowed again). However, in many cases, the borrowed pointer is not used until later, so imposing the restrictions immediately results in spurious errors. This situation is most likely to occur when there is an implicit use of &mut. (Bug #6268)

Symptoms. You are getting the error “cannot borrow `foo` as immutable because it is also borrowed as mutable”, but the reported second borrow is an object dispatching a method call, or doesn’t seem like it should have been borrowed at the time the flagged borrow occurred.

Examples. The original bug report describes the situation for nested method calls, where the outer method call has &mut self in its signature:

fn main() {
  let mut map = std::hashmap::HashMap::new();
  map.insert(1, 2);
  map.insert(2, *map.get(&1)); // XXX
}

test.rs:4:17: 4:20 error: cannot borrow `map` as immutable because it is also borrowed as mutable
test.rs:4   map.insert(2, *map.get(&1)); // XXX
                           ^~~
test.rs:4:2: 4:5 note: second borrow of `map` occurs here
test.rs:4   map.insert(2, *map.get(&1)); // XXX
            ^~~

This code would like to retrieve the value at key 1 and store it in key 2. Why does it fail? Consider the signature fn insert(&mut self, key: K, value: V) -> bool: the insert method invocation immediately takes out a &mut borrow on map before attempting to evaluate its argument. If we desugar the method invocation, the order becomes clear: HashMap::insert(&mut map, 2, *map.get(&1)) (NB: this syntax is not implemented yet). Because Rust evaluates arguments left to right, this is equivalent to:

let x_self : &mut HashMap<int, int> = &mut map;
let x_arg1 : int = 2;
let x_arg2 : int = *map.get(&1); // XXX
HashMap::insert(x_self, x_arg1, x_arg2);

meaning there is an active borrow by the time we call map.get. A minor rewrite resolves the problem:

fn main() {
  let mut map = std::hashmap::HashMap::new();
  map.insert(1, 2);
  let x = *map.get(&1);
  map.insert(2, x);
}

Sensitivity to the order of arguments can show up even when no method invocation is involved. Here is another example:

fn g(x: &mut int) -> int { *x }
fn f(x: &mut int, y: int) { *x += y; }
fn main() {
    let mut a = 1;
    f(&mut a, g(&mut a));
}

Discussion. Fortunately, this bug is pretty easy to work around, if a little annoying: move all of your sub-expressions to let-bindings before the ill-fated mutable borrow (see examples for a worked example). Note: the borrows that occur in these sub-expressions really do have to be temporary; otherwise, you have a legitimate “cannot borrow mutable twice” error on your hands.

Borrow scopes should not always be lexical (#6393)

Summary. When you borrow a pointer, Rust assigns it a lexical scope that constitutes its lifetime. This scope can be as small as a single statement or as big as an entire function body. However, Rust is unable to calculate lifetimes that are not lexical, e.g. where a borrowed pointer is only live until halfway through a function. As a result, borrows may live longer than users might expect, causing the borrow checker to reject some statements. (Bug #6393)

Symptoms. You are getting a “cannot borrow foo as immutable/mutable because it is also borrowed as immutable/mutable”, but you think the previous borrow should have already expired.

Examples. This problem shows up in a variety of situations. The very simplest example which tickles this bug can be seen here:

fn main() {
    let mut x = ~1;
    let y = &mut *x;
    *y = 1;
    let z = &mut *x;
    *z = 1;
}

test.rs:5:12: 5:19 error: cannot borrow `*x` as mutable more than once at a time
test.rs:5     let z = &mut *x;
                      ^~~~~~~
test.rs:3:12: 3:19 note: second borrow of `*x` as mutable occurs here
test.rs:3     let y = &mut *x;
                      ^~~~~~~

Clearly y is dead after *y = 1, but the borrow checker can’t see that. Fortunately, in this case it is very easy to add a new lexical scope to solve the problem:

fn main() {
    let mut x = ~1;
    {
        let y = &mut *x;
        *y = 1;
    }
    let z = &mut *x;
    *z = 1;
}

So, when does this actually become a problem? The usual culprit is match statements. Here is some common code involving maps that you might want to write:

extern mod extra;
fn main() {
    let mut table = extra::treemap::TreeMap::new();
    let key = ~"test1";
    match table.find_mut(&key) {
        None    => table.insert(key.clone(), ~[1]), // XXX
        Some(v) => { v.push(1); false }
    };
}

test.rs:6:19: 6:24 error: cannot borrow `table` as mutable more than once at a time
test.rs:6         None    => table.insert(key.clone(), ~[1]), // XXX
                             ^~~~~
test.rs:5:10: 5:15 note: second borrow of `table` as mutable occurs here
test.rs:5     match table.find_mut(&key) {
                    ^~~~~

table is a map of integer keys to vectors. The code performs an insert at key: if the map has no entry, then we create a new singleton vector and insert it in that location; otherwise, it just pushes the value 1 onto the existing vector. Why is table borrowed in the None branch? Intuitively, the borrow for table.find_mut should be dead, since we no longer are using any of the results; however, to Rust, the only lexical scope it can assign the borrowed pointer encompasses the entire match statement, since the borrowed pointer continues to be used in the Some branch (note that if the Some branch is removed, this borrow checks). Unfortunately, it’s not possible to insert a new lexical scope, as was possible in the previous example. (At press time, I wasn’t able to find a small example that only used if.)

Sometimes, the lifetime associated with a variable can force it to be assigned to a lexical scope that is larger than you would expect. Issue #9113 offers a good example of this (code excerpted below):

pub fn read1<'a>(&'a mut self, key: int) -> Option<&'a Data> {
    match self.cache.find(&key) {
        Some(data) => return Some(data),
        None => ()
    };
    match self.db.find(&key) {
        Some(data) => {
            let result: &Data = self.cache.find_or_insert(key, data.clone());
            Some(result)
        },
        None => None
    }
}

test.rs:22:36: 22:46 error: cannot borrow `(*self).cache` as mutable because it is also borrowed as immutable
test.rs:22                 let result: &Data = self.cache.find_or_insert(key, data.clone());
                                               ^~~~~~~~~~
test.rs:15:14: 15:24 note: second borrow of `(*self).cache` occurs here
test.rs:15         match self.cache.find(&key) {
                         ^~~~~~~~~~

This code is attempting to perform a database lookup; it first consults the cache and returns a cached entry if available. Otherwise, it looks for the value in the database, caching the value in the process. Ordinarily, you would expect the borrow of self.cache in the first match to extend only for the first expression. However, the return statement throws a spanner in the works: it forces the lifetime of data to be 'a, which encompasses the entire function body. The borrow checker then concludes that there is a borrow everywhere in the function, even though the function immediately returns if it takes out this borrow.

Discussion. The workaround depends on the nature of the scope that is causing trouble. When match is involved, you can usually arrange for the misbehaving borrow to be performed outside of the match statement, in a new, non-overlapping lexical scope. When the relevant branch does not rely on any variables from the pattern-match, this is easy to do with short-circuiting control operators:

extern mod extra;
use extra::treemap::TreeMap;
fn main() {
    let mut table: TreeMap<~str,~[int]> = TreeMap::new();
    let key = ~"test1";
    match table.find_mut(&key) {
        None    => {},
        Some(v) => { v.push(1); return }
    };
    table.insert(key.clone(), ~[1]); // None-case
}

Alternately, instead of directly returning, the match can assign a boolean to indicate whether or not the None-case should be run:

extern mod extra;
use extra::treemap::TreeMap;
fn main() {
    let mut table: TreeMap<~str,~[int]> = TreeMap::new();
    let key = ~"test1";
    let is_none = match table.find_mut(&key) {
        None    => true,
        Some(v) => { v.push(1); false }
    };
    if is_none {
        table.insert(key.clone(), ~[1]);
    }
}

The boolean can be elaborated into an enum that holds any non-references from the pattern-match you might need. Note that this will not work for borrowed references; but in that case, the borrow truly was still live!

It is a bit more difficult to work around problems regarding lifetimes, since there is nowhere in the function where the pointer is not “borrowed”. One trick which can work in some situations is to convert the function to continuation-passing style: that is, instead of returning the borrowed pointer, accept a function argument which gets invoked with the borrowed pointer. pnkfelix describes how you might go about fixing the third example this way. This removes the lifetime constraint on the variable and resolves the problem.
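
For instance, the third example might be restructured along these lines (a sketch in the same pre-1.0 syntax as the rest of this post; read1_k is a made-up name):

// Instead of returning the borrowed pointer, pass it to a callback;
// now no borrow has to survive past the end of the function body.
pub fn read1_k<R>(&mut self, key: int, k: |Option<&Data>| -> R) -> R {
    match self.cache.find(&key) {
        Some(data) => return k(Some(data)),
        None => ()
    };
    match self.db.find(&key) {
        Some(data) => {
            let result: &Data = self.cache.find_or_insert(key, data.clone());
            k(Some(result))
        },
        None => k(None)
    }
}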

The lexical scope assigned to a borrow can be quite sensitive to code perturbation, since removing a use of a borrow can result in Rust assigning a (much) smaller lexical scope to the borrow, which can eliminate the error. Sometimes, you can avoid the problem altogether by just avoiding a borrow.

Conclusion

To sum up:

  1. Bug #6268 can cause borrows to start too early (e.g. in method invocations), work around it by performing temporary borrows before you do the actual borrow.
  2. Bug #6393 can cause borrows to end too late (e.g. in match statements), work around it by deferring operations that need to re-borrow until the original lexical scope ends.

Keep these in mind, and you should be able to beat the borrow checker into submission. That is, until Niko fixes these bugs.

  • December 17, 2013

Visualizing a block allocator

GHC’s block allocator is a pretty nifty piece of low-level infrastructure. It offers a much more flexible way of managing a heap, rather than trying to jam it all in one contiguous block of memory, and is probably something that should be of general interest to anyone who is implementing low-level code like a runtime. The core idea behind it is quite old (BIBOP: Big Bag of Pages), and is useful for any situation where you have a number of objects that are tagged with the same descriptor, and you don’t want to pay the cost of the tag on each object.
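
To illustrate the BIBOP idea, here is a minimal sketch in C (the block size and field names are made up; GHC’s real block descriptors are more elaborate). Objects in the same aligned block share one descriptor, found by pointer arithmetic rather than a per-object tag:

#include <stdint.h>

#define BLOCK_SHIFT 12                  /* say, 4K blocks */

/* One descriptor per block; every object in the block shares it. */
typedef struct BlockDesc {
    struct BlockDesc *link;             /* free list / group chaining */
    uint32_t tag;                       /* the shared descriptor */
} BlockDesc;

extern BlockDesc block_table[];         /* side table, indexed by block number */

/* Recover an object's descriptor by rounding its address down to the
   enclosing block: no tag is stored on the object itself. */
static inline BlockDesc *bdescr(void *p) {
    return &block_table[(uintptr_t)p >> BLOCK_SHIFT];
}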

Managing objects larger than pages is a bit tricky, however, and so I wrote a document visualizing the situation to help explain it to myself. I figured it might be of general interest, so you can get it here: http://web.mit.edu/~ezyang/Public/blocks.pdf

Some day I’ll convert it into wikiable form, but I don’t feel like Gimp'ing the images today...

  • October 30, 2013

Xmonad and media keys on Saucy

Ubuntu continues on its rampage of breaking perfectly good software, and on my most recent upgrade to Saucy Salamander, I discovered to my dismay that my media keys (e.g. volume keys, fn (function) keys, suspend button, etc) had stopped working. Of course, it worked fine if I logged into my user using Unity, but who wants to use a silly window manager like that...

The root problem, according to these Arch Linux forum posts, is that Gnome has moved media-key support out of gnome-settings-daemon (which any self-respecting Xmonad user is sure to spawn) and into their window manager proper. Which, of course, is no good, because I don’t want to use their window manager!

For now, it seems the simplest method of bringing back this functionality is to run a 3.6 version of gnome-settings-daemon. Fortunately, at least for Saucy, there are a few builds of 3.6 available from before they upgraded to 3.8. So, all you need to do is grab the two deb files appropriate for your architecture (you need gnome-control-center too, because it has a dependency on gnome-settings-daemon).

Once you've downloaded the appropriate deb files, a dpkg -i $DEBFILE and then apt-mark hold gnome-control-center gnome-settings-daemon should do the trick. You should run an aptitude upgrade to make sure you haven't broken any other dependencies (for example, gnome-shell). (Power-users can add the debs to a local repo and then downgrade explicitly from apt-get.)
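
Concretely, the sequence looks something like this (the deb filenames are placeholders for whatever you downloaded):

sudo dpkg -i gnome-settings-daemon_3.6.*.deb gnome-control-center_3.6.*.deb
sudo apt-mark hold gnome-control-center gnome-settings-daemon
sudo aptitude upgrade   # check that nothing else (e.g. gnome-shell) broke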

Moving forward, we will probably be forced to reimplement media key bindings in some other software package, and it would be nice if this could be standardized in some way. Linux Mint has already forked gnome-settings-daemon, with their cinnamon-settings-daemon, but I've not tried it, and have no idea how well it works.

Update. Trusty has an updated version of this package which restores support, so I am providing backports via my PPA.

  • October 27, 2013

If you’re using lift, you’re doing it wrong (probably)

David Darais asked me to make this public service announcement: If you're using lift, you're doing it wrong. This request was prompted by several talks at ICFP about alternatives to monad transformers in Haskell, all of which began with the motivation, "Everyone hates lifting their operations up the monad stack; therefore, we need another way of organizing effects." This StackOverflow question describes the standard technique that mtl uses to remove the use of lift in most monadic code.
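
For the unfamiliar, here is a minimal sketch of that technique (names are mine): write your operations against the type class, and they can be used at any stack containing the right layer, with no lift in sight.

{-# LANGUAGE FlexibleContexts #-}
import Control.Monad.Reader
import Control.Monad.State

-- Generalized, mtl-style: usable in any monad with a State Int layer.
tick :: MonadState Int m => m Int
tick = do { n <- get; put (n + 1); return n }

-- The same operation pinned to one concrete stack...
tickC :: StateT Int IO Int
tickC = tick

-- ...and used at a bigger stack: the class picks out the State layer.
demo :: ReaderT String (StateT Int IO) Int
demo = tick
-- Without the class, we would have to write: demo = lift tickC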

Now, as most things go, the situation is a bit more nuanced than just "never use lift", and a technically incorrect quip at the beginning of a talk does not negate the motivation behind other effect systems. Here are some of the nuances:

  • As everyone is well aware, when a monad transformer shows up multiple times in the monad stack, the automatic type class resolution mechanism doesn't work, and you need to explicitly say which monad transformer you want to interact with.
  • This mechanism only works if the monadic operations you are interacting with are suitably generalized to begin with, e.g. MonadReader r m => m a rather than ReaderT r m a or Reader r a. This is especially evident for the IO monad, where most people have not generalized their definitions to MonadIO. Fortunately, it is generally the case that only one liftIO is necessary.

And of course, there are still many reasons why you would want to ditch monad transformers:

  • Type-class instances are inherently unordered, and thus a generalized MonadCont m, MonadState m => m a monadic value says nothing about what order the two relevant monads are composed. But the order of this composition has an important semantic effect on how the monad proceeds (does the state transfer or reset over continuation jumps). Thus, monad transformers can have subtle interactions with one another, when sometimes you want non-interfering effects that are truly commutative with one another. And indeed, when you are using the type class approach, you usually use only monads that commute with one another.
  • The interference between different monad transformers makes it difficult to lift certain functions. Consider, for example, mask :: ((forall a. IO a -> IO a) -> IO b) -> IO b. If we think operationally about what has to happen when IO is composed with State, the lifter has to somehow arrange for the state to transfer all the way into the code that runs with exceptions restored. That's very tricky to do in a general way. It gets even worse when these callbacks are invoked multiple times.
  • At the end of the day, while the use of type classes makes the monad stack somewhat abstract and allows the elision of lifts, most of this code is written with some specific monad stack in mind. Thus, it is very rare for nontrivial programs to make use of multiple effects in a modular way, or for effects to be instantiated (i.e. a concrete monad selected) without concretizing the rest of the monad stack.

Monad transformers have problems; let's argue against them for the right reasons!

  • September 26, 2013

Of Monadic Fixpoints and Heap Offsets

Here at ICFP, the so-called “hallway track” is sometimes just as important as the ordinary track. Johan Tibell was wanting to avoid an out-of-line call to the allocate function in GHC when a small array of statically known size was allocated. But he found the way that GHC's new code generator handles heap allocation a bit confusing, so we skipped out of one session today to work it out. In this post, I would like to explain how the code generation monad figures out what the heap offsets in the code are, by way of a kind of cute (and also slightly annoying) trick involving a “monadic” fixpoint.

First, some background about the code generator. The big overall pattern of a function that GHC has to generate code for is something like:

  1. Check if there is enough heap space; if not, GC,
  2. Write a bunch of data to the heap,
  3. Push some things to the stack,
  4. Jump to the appropriate continuation.

Concretely, the code will be along the lines of:

c2EP:
    _s28e::P64 = R2;
    // Check if there is enough heap space
    Hp = Hp + 40;
    if (Hp > HpLim) goto c2ET; else goto c2ES;
c2ET:
    // If not enough space, GC
    HpAlloc = 40;
    R2 = _s28e::P64;
    R1 = withEmpty_riC_static_closure;
    call (stg_gc_fun)(R2, R1) args: 8, res: 0, upd: 8;
c2ES:
    // Write a bunch of data to the heap
    I64[Hp - 32] = sat_s28f_info;
    _c2EG::P64 = Hp - 32;
    I64[Hp - 16] = :_con_info;
    P64[Hp - 8] = _c2EG::P64;
    P64[Hp] = _s28e::P64;
    _c2EO::P64 = Hp - 14;
    R1 = _c2EO::P64;
    // No stack updates this time
    // Jump to the continuation
    call (P64[Sp])(R1) args: 8, res: 0, upd: 8;

This seems reasonable, but how does one go about actually generating this code? The code is generated in order, but the amount of heap that needs to be checked is not known until we've finished laying out the rest of the code. If we put on our mutation hats, we might say, “Well, leave it out for now, and then mutate it in when you know the actual value”, but there is still the knotty question of what the offsets should be when we are writing values to the heap. Notice that in the above code, we only bump the heap pointer once; if we repeatedly bump the heap pointer, then the offsets are easy to calculate, but we are wasting instructions; x86 addressing modes support writing to a register plus some offset directly.

Let’s take a look what GHC does when it allocates a dynamic closure to the heap (simplified):

allocDynClosureCmm info_tbl args_offsets
  = do  { virt_hp <- getVirtHp
        ; let rep = cit_rep info_tbl -- cit = closure info table
              info_offset = virt_hp + 1 -- virtual heap offset of first word of new object
              info_ptr = CmmLit (CmmLabel (cit_lbl info_tbl))
        ; base <- getHpRelOffset (virt_hp + 1)
        ; emitSetDynHdr base info_ptr
        ; let (args, offsets) = unzip args_offsets
        ; hpStore base args offsets
        ; setVirtHp (virt_hp + heapClosureSize rep)
        ; getHpRelOffset info_offset
        }

In words, it:

  1. Retrieves a “virtual heap pointer” (more on this later),
  2. Gets the true Hp - n expression (base) using the virtual heap pointer (getHpRelOffset, N.B. the off-by-one),
  3. Emits a bunch of writes to the memory at base (emitSetDynHdr and hpStore),
  4. Bumps the virtual Hp up with the size of the just allocated closure,
  5. Returns the Hp - n expression.

As it turns out, the virtual heap pointer is just an ordinary state variable in the code generation monad FCode (it’s good to take a look at the implementation of the monad you’re using!):

newtype FCode a = FCode (CgInfoDownwards -> CgState -> (# a, CgState #))

data CgState
  = MkCgState { ...
     cgs_hp_usg  :: HeapUsage,
     ... }

data HeapUsage =
  HeapUsage {
        virtHp :: VirtualHpOffset, -- Virtual offset of highest-allocated word
                                   --   Incremented whenever we allocate
        realHp :: VirtualHpOffset  -- realHp: Virtual offset of real heap ptr
                                   --   Used in instruction addressing modes
  }

So virtHp just marches upwards as we allocate things; it is, in effect, the contents of the Hp register in our inefficient, rebumping implementation.

Which leaves us with the pressing question: what is realHp? Well, it starts off as zero (since the offset of the real heap pointer is just zero), but once we bump the heap pointer to do the heap check, it is precisely the amount of heap we did the heap check for. Recalling our example:

c2EP:
    _s28e::P64 = R2;
    // Check if there is enough heap space
    // virtHp = 0; realHp = 0
    Hp = Hp + 40;
    // virtHp = 0; realHp = 40
    if (Hp > HpLim) goto c2ET; else goto c2ES;
c2ET:
    // If not enough space, GC
    HpAlloc = 40;
    R2 = _s28e::P64;
    R1 = withEmpty_riC_static_closure;
    call (stg_gc_fun)(R2, R1) args: 8, res: 0, upd: 8;
c2ES:
    // Write a bunch of data to the heap
    // First closure
    // virtHp = 0; realHp = 40
    I64[Hp - 32] = sat_s28f_info;
    _c2EG::P64 = Hp - 32;
    // virtHp = 8; realHp = 40
    I64[Hp - 16] = :_con_info;
    P64[Hp - 8] = _c2EG::P64;
    P64[Hp] = _s28e::P64;
    _c2EO::P64 = Hp - 14;
    // virtHp = 32; realHp = 40
    R1 = _c2EO::P64;
    // No stack updates this time
    // Jump to the continuation
    call (P64[Sp])(R1) args: 8, res: 0, upd: 8;

(Actually, internally the offsets are recorded as words, so, this being 64-bit code, divide everything by eight. BTW, virtHp + 8 == realHp, and that's where the off-by-one comes from.) The math is a little fiddly, but getHpRelOffset will calculate the offsets for you; you just have to make sure the virtual offset is right!

OK, but we still haven’t figured out where this magic number 40 comes from in the first place! The key is to look at the code generator responsible for doing the heap check, heapCheck, which wraps the call to code, the computation actually responsible for the code generation:

heapCheck :: Bool -> Bool -> CmmAGraph -> FCode a -> FCode a
heapCheck checkStack checkYield do_gc code
  = getHeapUsage $ \ hpHw ->

Hey, what's that magic getHeapUsage function?

-- 'getHeapUsage' applies a function to the amount of heap that it uses.
-- It initialises the heap usage to zeros, and passes on an unchanged
-- heap usage.
--
-- It is usually a prelude to performing a GC check, so everything must
-- be in a tidy and consistent state.
--
-- Note the slightly subtle fixed point behaviour needed here

getHeapUsage :: (VirtualHpOffset -> FCode a) -> FCode a
getHeapUsage fcode
  = do  { info_down <- getInfoDown
        ; state <- getState
        ; let   fstate_in = state { cgs_hp_usg  = initHpUsage }
                (r, fstate_out) = doFCode (fcode hp_hw) info_down fstate_in
                hp_hw = heapHWM (cgs_hp_usg fstate_out)        -- Loop here!

        ; setState $ fstate_out { cgs_hp_usg = cgs_hp_usg state }
        ; return r }

And here, we see the monadic fixpoint. In order to provide the heap usage to fcode, GHC writes itself a check: hp_hw. The check is borrowed from the result of generating fcode, and the string attached is this: “As long as you don’t cash this check before you finish generating the code, everything will be OK!” (It’s a bit like a big bank in that respect.) Cute—and we only need to do the code generation once!

This technique is not without its dark side. hp_hw is dangerous; if you force it in the wrong place, you will chunder into an infinite loop. There are two uses of this variable, both in compiler/codeGen/StgCmmLayout.hs, which are careful not to force it. What would be nice is if one could explicitly mark hp_hw as blackholed, and attach a custom error message, to be emitted in the event of an infinite loop. How this might be accomplished is left as an exercise for the reader.
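
The trick can be replicated in miniature with the lazy State monad (a toy sketch; the names are mine). Forcing total inside the sub-computation is exactly the “cashing the check too early” failure mode:

import Control.Monad.State

-- Run a sub-computation with a zeroed counter, handing it (lazily!)
-- the final value of that counter, then restore the outer counter.
withTotal :: (Int -> State Int a) -> State Int a
withTotal f = do
  outer <- get
  let (r, total) = runState (f total) 0   -- loop here!
  put outer
  return r

example :: (Int, String)
example = evalState (withTotal $ \total -> do
  modify (+ 1)
  modify (+ 2)
  return (total, "allocated")) 0
-- example == (3, "allocated"); a seq on total inside would diverge.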


BTW, in case you aren't aware, I've been live-tumblr'ing coverage of ICFP at http://ezyang.tumblr.com — the coverage is not 100%, and the editing is rough, but check it out!

  • September 24, 2013