ezyang’s blog

the arc of software bends towards understanding

Hindley-Milner with top-level existentials

Content advisory: This is a half-baked research post.

Abstract. Top-level unpacking of existentials is easy to integrate into Hindley-Milner type inference. Haskell should support it. It's possible this idea can work for internal bindings of existentials as well (à la F-ing modules), but I have not worked out how to do it.

Update. And UHC did it first!

Update 2. And rank-2 type inference is decidable (and rank-1 existentials are an even weaker system), although the algorithm for rank-2 inference requires semiunification.

Background

The difference between Hindley-Milner and System F. Although in informal discussion Hindley-Milner is commonly described as a “type inference algorithm”, it is properly described as a type system which is more restrictive than System F. Both type systems allow polymorphism via universal quantification of variables, but in System F this polymorphism is explicit and can occur anywhere, whereas in Hindley-Milner the polymorphism is implicit and can only occur at the “top level” (in a so-called “polytype” or “type scheme”). This restriction of polymorphism is the key that makes inference (via Algorithm W) for Hindley-Milner decidable (and practical), whereas inference for System F is undecidable.

-- Hindley Milner
id :: a -> a
id = λx. x

-- System F
id :: ∀a. a -> a
id = Λa. λ(x : a). x

Existential types in System F. A common generalization of System F is to equip it with existential types:

Types  τ ::= ... | ∃a. τ
Terms  e ::= ... | pack <τ, e>_τ | unpack <a, x> = e in e

In System F, it is technically not necessary to add existentials as a primitive concept, as they can be encoded using universal quantifiers by saying ∃a. τ = ∀r. (∀a. τ → r) → r.

Existential types in Hindley-Milner? This strategy will not work for Hindley-Milner: the encoding requires a higher-rank type, which is precisely what Hindley-Milner rules out for the sake of inference.

In any case, it is a fool's game to try to infer existential types: there's no best type! HM always infers the most general type for an expression: e.g., we will infer f :: a -> a for the function f = \x -> x, and not Int -> Int. But the whole point of data abstraction is to pick a more abstract type, which is not going to be the most general type and, consequently, is not going to be unique. What should be abstract, what should be concrete? Only the user knows.
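To make this concrete, here is a minimal sketch (the `Counter` type and all names are invented for illustration): the same implementation admits both a fully concrete type, which HM will infer, and a sealed abstract type, which only the user can choose.

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- A counter with representation type Int. HM happily checks the
-- fully concrete type, and would infer something at least this general:
concreteCounter :: (Int, Int -> Int, Int -> Int)
concreteCounter = (0, (+1), id)

-- For abstraction we may *choose* to hide Int. This sealed type is a
-- deliberate, less general choice; no inference algorithm could pick it for us.
data Counter = forall s. Counter s (s -> s) (s -> Int)

sealedCounter :: Counter
sealedCounter = Counter 0 (+1) id
```

Both typings are valid for the same term, and neither is an instance of the other: that is exactly the loss of principality described above.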

Existential types in Haskell. Suppose that we are willing to write down explicit types when existentials are packed; can Hindley-Milner do the rest of the work? That is to say, do we have complete and decidable inference for the rest of the types in our program?

Haskell is an existence (cough cough) proof that this can be made to work. In fact, there are two ways to go about doing it. The first is what you will see if you Google for “Haskell existential type”:

{-# LANGUAGE ExistentialQuantification, RankNTypes #-}
data Ex f = forall a. Ex (f a)
pack :: f a -> Ex f
pack = Ex
unpack :: Ex f -> (forall a. f a -> r) -> r
unpack m k = case m of Ex x -> k x

Ex f is isomorphic to ∃a. f a, and, similarly to the System F syntax, values are packed with the Ex constructor and unpacked by pattern-matching on them.
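A quick usage sketch (the helper `hasValue` is an invented name): once packed, the element type is hidden, so a consumer can only ask questions that make sense at every possible element type.

```haskell
{-# LANGUAGE ExistentialQuantification, RankNTypes #-}

data Ex f = forall a. Ex (f a)

pack :: f a -> Ex f
pack = Ex

unpack :: Ex f -> (forall a. f a -> r) -> r
unpack m k = case m of Ex x -> k x

-- The element type is abstract, so we can only observe the shape of
-- the Maybe, never the value inside:
hasValue :: Ex Maybe -> Bool
hasValue e = unpack e (\m -> case m of Just _ -> True; Nothing -> False)
```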

The second way is to directly use the System F encoding using Haskell's support for rank-n types:

{-# LANGUAGE RankNTypes #-}
type Ex f = forall r. (forall a. f a -> r) -> r
pack :: f a -> Ex f
pack x = \k -> k x
unpack :: Ex f -> (forall a. f a -> r) -> r
unpack m k = m k
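The two presentations are interconvertible. Here is a sketch of the round trip (`ExD`/`ExC` and the conversion names are invented; `ExD` is the datatype encoding and `ExC` the rank-2 encoding):

```haskell
{-# LANGUAGE ExistentialQuantification, RankNTypes #-}

-- the datatype encoding
data ExD f = forall a. ExD (f a)

-- the CPS (System F) encoding
type ExC f = forall r. (forall a. f a -> r) -> r

-- Packing, then handing the hidden payload to a continuation:
toCPS :: ExD f -> ExC f
toCPS (ExD x) = \k -> k x

-- Instantiating the result type r at ExD f recovers the datatype:
fromCPS :: ExC f -> ExD f
fromCPS m = m ExD
```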

The boxy types paper demonstrated that inference is possible, so long as all of your higher-rank types are annotated. Although perhaps it was not as simple as hoped, since impredicative types are a constant source of bugs in GHC's type checker.

The problem

Explicit unpacks suck. As anyone who has tried programming with existentials in Haskell can attest, the use of existentials can still be quite clumsy due to the necessity of unpacking an existential (casing on it) before it can be used. That is to say, the syntax let Ex x = ... in ... is not allowed, and it is an easy way to get GHC to tell you its brain exploded.

Leijen investigated the problem of handling existentials without explicit unpacks.

Loss of principal types without explicit unpacks, and Leijen's solution. Unfortunately, the naive type system does not have principal types. Leijen gives an example where there is no principal type:

wrap :: forall a. a -> [a]
key  :: exists b. Key b
-- What is the type of 'wrap key'?
-- [exists b. Key b]?
-- exists b. [Key b]?

Neither type is a subtype of the other. In his paper, Leijen suggests that the existential should be unwrapped as late as possible (since you can go from the first type to the second, but not vice versa), and thus, the first type should be preferred.

The solution

A different approach. What if we always lift the existential to the top level? This is really easy to do if you limit unpacks to the top-level of a program, and it turns out this works really well. (The downside is that dynamic use of existentials is not supported.)

There's an existential in every top-level Haskell algebraic data type. First, I want to convince you that this is not all that strange of an idea. To do this, we look at Haskell's support for algebraic data types. Algebraic data types in Haskell are generative: each data type must be given a top-level declaration and is considered a distinct type from any other data type. Indeed, Haskell users use this generativity in conjunction with the ability to hide constructors to achieve data abstraction in Haskell. Although there is not actually an existential lurking about—generativity is not data abstraction—generativity is an essential part of data abstraction, and HM has no problem with this.
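Generativity is easy to see in ordinary Haskell (the type names below are invented for illustration): two structurally identical declarations nonetheless introduce distinct types, and HM-style inference handles this without any trouble.

```haskell
-- Same representation, distinct generative types:
newtype Celsius = Celsius Double
newtype Fahrenheit = Fahrenheit Double

freezing :: Celsius
freezing = Celsius 0

-- Rejected by the type checker, despite identical representations:
-- bad :: Fahrenheit
-- bad = freezing
```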

Top-level generativity corresponds to existentials that are unpacked at the top-level of a program (à la F-ing modules). We don't need existentials embedded inside our Haskell expressions to support the generativity of algebraic data types: all we need is the ability to pack an existential type at the top level, and then immediately unpack it into the top-level context. In fact, F-ing modules goes even further: existentials can always be lifted until they reach the top level of the program. Modular programming with applicative functors (the ML kind) can be encoded using top-level existentials which are immediately unpacked as they are defined.

The proposal. So let us suggest the following type system, Hindley-Milner with top-level existentials (where a* denotes zero to many type variables):

Term variables ∈ f, x, y, z
Type variables ∈ a, b, c

Programs
prog ::= let f = e in prog
       | seal (b*, f :: σ) = (τ*, e) in prog
       | {- -}

Type schemes (polytypes)
σ ::= ∀a*. τ

Expressions
e ::= x
    | \x -> e
    | e e

Monotypes
τ ::= a
    | τ -> τ

There is one new top-level binding form, seal. We can give it the following typing rule:

Γ ⊢ e :: τ₀[b* → τ*]
a* = free-vars(τ₀[b* → τ*])
Γ, b*, (f :: ∀a*. τ₀) ⊢ prog
---------------------------------------------
Γ ⊢ seal (b*, f :: ∀a*. τ₀) = (τ*, e) in prog

It also elaborates directly to System F with existentials:

seal (b*, f :: σ) = (τ*, e) in prog
  ===>
unpack <b*, f> = pack <τ*, e>_{∃b*. σ} in prog
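For concreteness, here is a hypothetical instance of the rule (all names invented, and integer literals assumed for the sake of the example): sealing an increment function, with its representation type Int hidden behind the abstract b, together with its elaboration.

```
seal (b, step :: b -> b) = (Int, \x -> x + 1) in prog
  ===>
unpack <b, step> = pack <Int, \x -> x + 1>_{∃b. b -> b} in prog
```

Checking this against the rule: σ = b -> b (with a* empty), and the premise requires Γ ⊢ \x -> x + 1 :: (b -> b)[b → Int], i.e., Int -> Int, which holds.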

A few observations:

  1. In conventional presentations of HM, let-bindings are allowed to be nested inside expressions (and are generalized to polytypes before being added to the context). Can we do something similar with seal? This should be possible, but the bound existential type variables must be propagated up.
  2. This leads to a second problem: naively, the order of quantifiers must be ∃b. ∀a. τ and not ∀a. ∃b. τ, because otherwise we cannot add the existential to the top-level context. However, there is a "skolemization" trick (cf. Shao and F-ing modules) by which you can just make b a higher-kinded type variable which takes a as an argument, e.g., ∀a. ∃b. b is equivalent to ∃b'. ∀a. b' a. This trick could serve as the way to support inner seal bindings, but the encoding tends to be quite involved (as you must close over the entire environment).
  3. This rule is not very useful for directly modeling ML modules, as a “module” is usually thought of as a record of polymorphic functions. Maybe you could generalize this rule to bind multiple polymorphic functions?

Conclusion. And that's as far as I've worked it out. I am hoping someone can tell me (1) who came up with this idea already, and (2) why it doesn't work.

7 Responses to “Hindley-Milner with top-level existentials”

  1. Tom Schrijvers says:

    Have you looked at the existentials in UHC?

  2. Here is the manual entry for UHC existentials: http://foswiki.cs.uu.nl/foswiki/Ehc/UhcUserDocumentation#A_3.6_Existential_types

    Looking carefully, it does seem like this is the strategy they're taking, although they don’t let the user give a name to the fresh, unpacked type variables (I think they should; it would make their extension a bit more useful!) The fact that their existentials do not work with instances is also highly suggestive; the existentials here have a similar limitation (I think!).

    In fact, Atze remarks on how UHC’s treatment of existentials differs from the usual in his thesis: http://dspace.library.uu.nl/bitstream/handle/1874/7352/full.pdf?sequence=8
    So I think you have indeed found the citation I want! Unfortunately, there is not a very clear discussion of how HM-style inference should work (UHC tackles the problem of impredicative inference instead.)

  3. Dan Doel says:

    So, here’s the potential problem. I can’t say if it is one in what you’re thinking of, because you haven’t given enough detail. :)

    The unpacking of existential types with a case statement in GHC is not just due to type inference problems. It is necessary to reconcile semantic mismatches between the type and value levels. For instance, consider the following:

    data Vector (n :: Nat) a where …

    data EVector a = forall n. EV (Vector n a)

    cons :: a -> EVector a -> EVector a
    cons x xs = let EV v = xs in EV (cons'vector x v)

    ones :: EVector a
    ones = cons 1 ones

    So, we have finite sized vectors, and an existential wrapper around them. The vectors are finite because the type level does not admit infinite naturals (because of the occurs check). However, if we can unpack EVectors in a let (which is lazy), we can construct a wrapped, infinitely large Vector.

    The need to case analyze (and disallowing ~ in it) is what prevents this from happening. With GHC, you won’t be able to write a cons such that ones is not bottom. This is comparable to the problem of let on GADTs. It is not sound to let an irrefutable pattern match refine types, because you may not have a real proof (just a loop that you didn’t check for).

    So, perhaps your system works, but ‘seal’ has to be stricter than ‘let’. The big oddity is that this genuinely requires something like polymorphic seq (I think), because you no longer have a box around your existential. In fact, sealing with no variables is probably a way to write seq.

    As an aside, it’s not clear to me why GHC doesn’t allow something like ‘let !(EV v) = xs in …’. It doesn’t seem terribly hard to type check, so perhaps there’s just no analysis on whether a let pattern has been made strict enough to allow existential (or GADT) matching.

    This wouldn’t allow top-level existential unpacking, of course, because top level bang patterns don’t have a very good semantics (they require the program to immediately loop if a module is loaded, or something). But then, your seal construct has the same issue.

  4. Dan: That is an excellent point, and one that I missed because I’ve been in ML land for a while now.

    Another restriction which would let you keep top-level bindings lazy is to restrict the bound types to kinds for which it is not a problem if we claim to have something we don’t actually have (e.g. star.)

  5. Dan Doel says:

    It’s always a problem, though. Before we had a separate kind of naturals, people just used star for indexing their sized vectors. Infinite types are ruled out just like infinite natural numbers at the type level.

    In some sense, it’s never a real problem, I think. I don’t know of anything particularly wrong that the infinitely typed values would allow you to do (although, I haven’t actually tried to do something nefarious with them). But, it does mean that ‘exists e. T’ is not the union of all well typed terms of type T[e := U] for some type U. It is those, plus (possibly) some extra junk that cannot be given a valid type.

    The other way to fix it (in the sense that the semantics are more consistent) is to allow infinite types (and other type level things). I actually wonder how the dependent Haskell stuff is going to eventually address this, because it has the same issue. Having infinite types and other type-level things is a pretty significant change, though (and at least in a naive implementation, not really for the better).

  6. wren romano says:

    I’d like to point out that the issues around quantifiers are a lot more intricate / better explored than you let on here. First, rank-1 existentials are weaker than the rank-2 universals used to encode them à la System F. Second, the inference problem for rank-2 universals is in fact decidable. I give a bit more detail in this short post: http://winterkoninkje.dreamwidth.org/108092.html

  7. Thank you for that informative blog post. I know it’s not “mathematically interesting”, but do you know any papers/blog posts which lay out how rank-2 inference fails to be robust in the way HM is? (I assume, given the lack of citations, that rank-1 existentials are not something that has been studied heavily.)
