Polymorphic Type Inference with Effects

Presented by Vasileios Koutavas
Transcribed by Richard Cobbe

Milner introduced the first type inference algorithm, W, in 1978; the algorithm infers a set of type equations from the program text and solves them through unification. In this type system, identifiers bound by let are assigned polymorphic types by quantifying over all free type variables in the inferred type (with some exceptions); identifiers bound as function arguments are always assigned monomorphic types. This has two consequences: let is no longer an abbreviation for procedure application, and polymorphic procedures are no longer first-class values.
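As an illustration (a sketch in OCaml, whose inference descends from algorithm W), a let-bound identifier may be used at several types in the body of the let, while the analogous lambda-bound identifier may not:

```ocaml
(* A let-bound identifier is generalized: id receives the
   polymorphic type 'a -> 'a and can be applied at two different
   types in the body of the let. *)
let id = fun x -> x
let ok = (id 1, id true)

(* The "equivalent" procedure application is rejected, because a
   lambda-bound identifier is monomorphic:

     (fun id -> (id 1, id true)) (fun x -> x)

   Here id must have a single monotype, so it cannot be used at
   both int and bool. *)
```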

While Milner's inference system works well for purely functional languages, it fails when we add mutation: small examples demonstrate that the resulting type system is not sound. In 1985, Milner's student Damas adjusted the type system in an attempt to prevent these problems. In his system, each term is associated with a type and a set, Delta, containing the types of all values that can be allocated in the store during the execution of the term. For let-bound identifiers, we quantify over all free type variables except those which appear in the expression's Delta set. As an example, the ref primitive has an empty Delta and therefore a polymorphic type: evaluating the term ref by itself performs no store allocation. An application of ref, however, does allocate, so it has a non-empty Delta set and therefore a monomorphic type under this system.
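The classic counterexample (sketched below in OCaml syntax, with the unsound generalization shown only in comments) makes the problem concrete: if a reference cell received a polymorphic type, it could be written at one type and read back at another.

```ocaml
(* If r were generalized to the polymorphic type ('a -> 'a) ref,
   both uses below would typecheck, and the program would apply
   integer addition to a boolean at run time:

     let r = ref (fun x -> x) in
     r := (fun n -> n + 1);     (* instantiates 'a := int  *)
     !r true                    (* instantiates 'a := bool *)

   A sound system must keep r monomorphic, so that the two uses
   are forced to agree and the program is rejected. *)

(* With a monomorphic cell there is no problem: *)
let r = ref (fun n -> n + 1)
let three = !r 2
```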

Tofte also attempted to fix Milner's inference system, using a slightly different mechanism. He classifies all type variables as either applicative or imperative, where imperative type variables may appear in the store. In this system, the inference algorithm quantifies over imperative type variables only when the expression on the right-hand side of the let is a syntactic value (i.e., a variable or an abstraction). This system has some parallels with Damas's: in particular, syntactic values all have an empty Delta, and the types in an expression's Delta set correspond to the expression's imperative type variables. Tofte's system is better known today because detailed descriptions are more easily accessible, and the system itself has a somewhat simpler implementation. That said, it suffers from some of the same problems that affect Damas's system; in particular, applicative and imperative implementations of the same function have different types, effectively exposing the function's implementation details in its public interface. In addition, this system makes curried functions somewhat awkward: since the application of a curried function is not a syntactic value, its result never has a polymorphic type, even when such a type would be sound.
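The curried-function problem can be seen directly in OCaml, whose generalization rule has the same effect on partial applications (a sketch; the '_weak1 notation for ungeneralized variables is OCaml's and varies by version):

```ocaml
(* A curried, purely functional two-argument function. *)
let const x y = x

(* A partial application is not a syntactic value, so its result
   is not generalized: k gets a weak type such as '_weak1 -> int,
   usable at only one type. *)
let k = const 1
let a = k "hello"    (* this first use fixes k : string -> int *)

(* Eta-expanding the right-hand side makes it an abstraction, a
   syntactic value, so k' remains polymorphic: 'a -> int. *)
let k' = fun y -> const 1 y
let b = (k' true, k' 3.14)
```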

In a contemporary but independent effort, MacQueen also developed a type system that addresses the problems with Milner's original system but treats curried functions with greater flexibility. In this system, each type variable has an associated numeric rank; this rank indicates the number of arguments to which a curried function can be applied before that particular type variable can no longer be quantified. When a curried function is applied to an argument, the ranks of all type variables in the resulting type are decremented by one, indicating that they are one application closer to the store. Ranks can also be infinite, indicating that it is always safe to quantify over the associated type variable. (Note that this use of ranks is not related to rank polymorphism.) This type system does solve the problems with curried functions, and SML/NJ used this inference algorithm for many years. However, in this type system, imperative and applicative implementations of the same function still have different types.

Some years later, Leroy and Weis proposed a new type system in which imperative and applicative implementations of a function do in fact have the same type. They extend function types with labels and constraints: each arrow is labeled, and the type is annotated with constraints that indicate the types of the free variables within the labeled function. For let-bound identifiers, they quantify over all free type variables except the "dangerous" variables that appear in any of the function's constraints. (A "dangerous" type variable is one which appears inside a reference type; for example, in (a -> b) ref -> c, a and b are dangerous, but c is not.) To allow higher-order functions to work, they must also allow quantification over constraints; otherwise map, for example, could accept only functions with no free variables as its first argument. With this, the type of a function is finally independent of its implementation. However, this type system is very complex, especially because the type checker must unify over the constraints as well as the types. In addition, some purely functional expressions are no longer typable in this system.

In 1995, Andrew Wright proposed a new modification to Milner's original type system that allows different implementations of a function to have the same type, but without the complexity of some of the other solutions. In this system, now known as the value restriction, we admit polymorphic types only for syntactic values: constants, variables, and lambda abstractions. All other expressions are forced to have monomorphic types. (This system is effectively equivalent to Tofte's, if we assume that all type variables are imperative.) This type system is fairly restrictive: some purely functional expressions that have polymorphic types under Milner's original system have monomorphic types here. While many such expressions can be fixed by eta-expanding the expression on the right side of the let, this strategy does not work in every case. However, Wright demonstrated empirically that this relative lack of flexibility is not, in practice, a significant problem: he modified the ML compiler to use his inference algorithm and attempted to compile several real-world applications. He found that most compiled with very few modifications. This system is now implemented in both the Standard ML and OCaml compilers, as the community realized that a simpler type system is sufficient for real-world programs, even though it reduces the expressiveness of the functional core of the language.
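Because OCaml implements this rule, the restriction's effect on a purely functional expression can be demonstrated directly (a sketch; the '_weak1 notation is OCaml's, and varies by version):

```ocaml
(* A syntactic value on the right-hand side of a let is
   generalized as usual: id : 'a -> 'a. *)
let id = fun x -> x

(* An application is not a syntactic value, so its result is not
   generalized, even though no mutation is involved.  pair gets a
   weak type such as '_weak1 -> '_weak1 * '_weak1, usable at only
   one type. *)
let pair = (fun f -> fun x -> (f x, f x)) id
let p = pair 7    (* this use fixes the weak type variable to int *)
```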

In addition, Wright's value restriction reflected contemporary research elsewhere in the community. At about the same time, Cardelli, Harper, and Leroy were beginning to rethink ML as an explicitly-typed language, in which type inference is a convenience provided by the compiler to the programmer. Consider an expression let f = \x. ... in ...: when the type reconstruction algorithm applies let polymorphism, it implicitly adds a Lambda (i.e., an abstraction over types) before the \x. This has the effect of forcing the right-hand side of the let to be a value, regardless of the shape of the original expression. This supports Wright's intuitive solution: in this model, there is always a value there anyway.

$Id: inference-summary.html,v 1.2 2004/02/03 18:14:24 cobbe Exp $