Types, Effects, and Regions

Presentation by Felix Klock
Transcribed by Ryan Culpepper


Popek '77

Euclid. Their goal was machine-assisted program proofs. Aliasing makes this difficult, so their solution was to allocate values into distinct "collections". This introduces the notion later called "regions".

Gifford '86

The goal this time is enabling compiler optimizations such as common subexpression elimination, automatic parallelization, etc. They define three kinds of "effects" (Read, Write, Allocate) on memory cells, and they define a ranking of subroutines by the effects their code performs: PURE < FUNC < OBS < PROC (corresponding to {}, {A}, {A,R}, and {A,R,W}, respectively).
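A minimal Python sketch of this ranking (the encoding and names are mine, not the paper's): each class is the set of primitive memory effects it may perform, and a subroutine is classified by the smallest class covering its observed effects.

```python
# Effect classes from Gifford '86, modeled as frozensets of primitive
# memory effects: A = allocate, R = read, W = write. (Illustrative only.)
PURE = frozenset()
FUNC = frozenset("A")
OBS  = frozenset("AR")
PROC = frozenset("ARW")

RANKING = [PURE, FUNC, OBS, PROC]  # PURE < FUNC < OBS < PROC

def classify(effects):
    """Smallest class in the ranking that covers the observed effects."""
    effects = frozenset(effects)
    return next(c for c in RANKING if effects <= c)
```

So a subroutine that only reads lands in OBS (the smallest class containing R), and one that writes is forced all the way up to PROC.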

Their language is based on a polymorphic lambda-calculus with explicit type annotations. Their type annotations had the form:
  x : REF INT ! PURE                (x is a reference cell holding an int; evaluating the expression has no effect on memory)
  (get x) : INT ! READ              (fetching the value involves a read effect)
  F : (SUBR (INT) OBS INT) ! PURE   (F is a subroutine from int to int which may allocate or read memory, but the reference to F itself is pure)
  (F 4) : INT ! OBS                 (applying F brings out its latent effect, OBS)
m : (poly (a b) (subr ((subr (a) _ b) [listof b]) _ [listof b]))
What goes in _? We want to be able to use map on subroutine arguments with different latent effects. We need abstraction over effects...

Lucassen, Gifford '88 Polymorphic Effect Systems

Introduce abstraction over effects.
The old language: e ::= ... | (tlambda (t) e) | (proj e t)  ;; where proj projects a polymorphic value onto a type
The new language: e ::= ... | (plambda (x :: K) e) | (proj e t)

In the new language, plambda creates a value polymorphic in the thing named by x. K is the kind of x (type, effect, ...).
map = (plambda (t1::type t2::type f::effect)
        (lambda (p::(subr (t1) f t2) l::[listof t1])
          (if ((proj null? t1) l)
              (proj null t2)
              <lots of proj's in here>)))

They also bring back regions.
effects phi ::= (alloc rho) | (read rho) | (write rho) | pure | (maxeff phi1 phi2)
regions rho ::= x | @red | ...
map : (poly (t1 t2 f r)
        (subr ((subr (t1) f t2) [listof t1,r])
              (maxeff f (read r) (alloc r))
              [listof t2,r]))
They introduced the notion of effect masking: a region is private, written (private r e), if its effects are not visible from outside (Matthias says: like prompt). These could be inferred.
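Effect masking can be sketched over a set-based encoding of effects (the encoding is mine, not the paper's): effects as (kind, region) pairs, where masking erases every effect on the private region from what the context observes.

```python
# Effects as (kind, region) pairs, e.g. ("read", "r1").
# (A hypothetical encoding, for illustration only.)
def mask(effects, private_region):
    """Effect masking for (private r e): effects on a region that
    cannot escape the expression are hidden from the outside."""
    return {(kind, rho) for (kind, rho) in effects
            if rho != private_region}
```

So an expression that reads and writes a private scratch region but only reads a shared one appears, from outside, to only perform that shared read.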

The algebra of effects is ACUI (associative, commutative, unitary, idempotent).

Jouvelot, Gifford '?

Introduce type+effect reconstruction (the two have to go together).
Examples of unification on effects: Felix asks the class to solve some.
Their reconstruction algorithm R produces a Type, an Effect, a Substitution, and Constraints. The Effect, Substitution, and Constraints are then passed to an ACUI solver.

They also take out regions, but leave in plambda.

Peter asks: Is there an idea of principal type in this system? No. There are subroutines for which the most general type+effect cannot be expressed--Felix shows an example.
exp = (lambda (f)
        (lambda (g::(subr (bool) Write bool))
          (if #t
              (lambda (x) (begin (f x) (g x))))))
There is no principal type for f: its effect could be either pure or write. More discussion.

ACUI unification is NP-complete in general. However, the system devised here generates constraints of a restricted form that can be solved in polynomial time and always have a solution.

Talpin, Jouvelot

Put back regions, get rid of plambda and first-class polymorphic values. Used let-bound polymorphism.
Users can no longer specify effects; all are inferred. They now base their system on inequalities instead of equalities, so they get constraints like {mu contains phi, ...}, where mu is a simple unification variable and phi can be structured. These constraints always have a solution, and in particular they have a minimal solution, which their algorithm finds.
(let ((v (build-vector 10 add1))
      (f (lambda (x) (* a (+ b x)))))
  (vector-map f v))
The system infers v : [vectorof int,gamma], and the whole expression has type [vectorof int,gamma2]. Since gamma does not appear in the type of the result, v can be stack-allocated.
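The containment constraints {mu contains phi} always have a minimal solution, reachable by a least fixpoint; a Python sketch under an encoding of my own (each effect variable maps to a set of primitive effects, and a constraint's right-hand side may mention other effect variables):

```python
def solve(constraints):
    """Least solution of constraints [(mu, phi), ...] read as
    'mu contains phi', where phi is a set whose elements are
    primitive effects or names of other effect variables.
    Iterating to a fixpoint is monotone over finite sets, so it
    terminates, and it yields the minimal model."""
    sol = {mu: set() for mu, _ in constraints}
    changed = True
    while changed:
        changed = False
        for mu, phi in constraints:
            # Expand variable names on the right to their current value;
            # anything not a known variable is a primitive effect.
            new = set()
            for e in phi:
                new |= sol.get(e, {e})
            if not new <= sol[mu]:
                sol[mu] |= new
                changed = True
    return sol
```

For example, with m2 required to contain both m1 and write, the minimal solution gives m1 exactly {read} and m2 exactly {read, write}: nothing larger is forced.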

Talpin, Tofte '94, '97

They want to compile core (pure) ML, allocate everything "on the stack". That is, the program runs with a stack of regions, where you can allocate into any region on the stack (dynamically-sized), and when you pop a region off, you deallocate it all at once.
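A Python sketch of that runtime discipline (hypothetical, not Tofte and Talpin's actual runtime): a stack of regions, allocation into any region still on the stack, and deallocation only ever as "pop the top region, freeing it all at once".

```python
class RegionStack:
    """Stack of regions; each region is a growable bag of values
    (regions are dynamically sized). Deallocation is only ever
    'pop the top region off'."""
    def __init__(self):
        self.regions = []

    def push(self):
        """Push a fresh empty region; return a handle to it."""
        self.regions.append([])
        return len(self.regions) - 1  # handle: index on the stack

    def alloc(self, r, value):
        """Allocate a value into any region still on the stack."""
        self.regions[r].append(value)
        return value

    def pop(self):
        """Deallocate the entire top region at once."""
        self.regions.pop()
```

A value that must outlive the current scope is simply allocated into a region further down the stack.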

They use nondeterministic translation rules. It is perfectly legal to do everything in one region.
Quote: something like "... Novice programmers often report that on their machines, memory is a constant, not a variable." Do users have to figure out how to write their programs around the memory manager? What are they trying to say?
{Matthias answers the second point: Tofte was influenced by Reynolds. Reynolds had been promoting call-by-name as giving stack allocation. Tofte wanted to show that the same could be done for call-by-value.}