The Programming Language Semantics Group


Students (partial listing)


Here are some people who drop by fairly regularly:



We have a weekly seminar on programming language issues. From the seminar home page, you can view recent talks and subscribe to the mailing list.


The projects below are listed roughly in reverse chronological order. These are mostly my projects; see the home pages for Matthias and Will for more details on their projects.

Aspect-Oriented Programming

Aspect-oriented programming (AOP) is a promising recent technology that allows program units to be adapted across module boundaries. We have developed a semantics for advice and dynamic join points in a subset of AspectJ. We would like to extend this model to cover more features, and also to abstract it to make it more amenable to mathematical analysis. We are interested in reconciling AOP with traditional notions of program modularity, and in developing program analyses for aspect-oriented programming languages. (Recent Papers)
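As a rough illustration (in Python rather than AspectJ, with all names invented for the sketch), advice can be modeled as a wrapper that runs extra code at a call join point:

```python
# Illustrative sketch only, not the AspectJ semantics itself: "before"
# and "after" advice attached to a dynamic join point, here a function
# call. The weaver wraps the function so the advice runs around it.

def advise(before=None, after=None):
    """Weave before/after advice around a function's call join point."""
    def weave(fn):
        def woven(*args, **kwargs):
            if before:
                before(fn.__name__, args)       # before advice sees the join point
            result = fn(*args, **kwargs)
            if after:
                after(fn.__name__, result)      # after advice sees the result
            return result
        return woven
    return weave

log = []

@advise(before=lambda name, args: log.append(("enter", name, args)),
        after=lambda name, result: log.append(("exit", name, result)))
def square(x):
    return x * x

print(square(3))   # 9
print(log)         # [('enter', 'square', (3,)), ('exit', 'square', 9)]
```

The point of the sketch is that the advice is attached without editing `square` itself, which is the cross-module adaptation the paragraph above describes.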

The Programming Languages Team and PLT Scheme

PLT Scheme is an umbrella name for a family of implementations of the Scheme programming language. PLT is the group of people who produce PLT Scheme. DrScheme is the primary PLT Scheme implementation. TeachScheme! is a PLT project to turn Computing and Programming into an indispensable part of the liberal arts curriculum.

Analysis-Based Program Transformation

We are interested in understanding just how a program analysis justifies the program transformation that is typically based upon it. Many interesting optimizations depend on interprocedural analyses, which collect information about program units larger than a single procedure. Such optimizations are widespread in higher-order languages such as Scheme or ML, and play an important role in ordinary imperative languages as well. Despite decades of work on abstract interpretation and related analysis frameworks, there are still very few examples in which an analysis is actually used to prove the correctness of an associated transformation. We seek to understand this gap by doing a series of examples related to the compilation of higher-order "almost-functional" languages like Scheme or ML. We also seek to unify analysis-based transformations with the theory of transformations based on contextual equivalence, and to extend all of this theory to embrace programs that compute in parallel or distributed settings.
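As a toy instance of the pattern (invented for illustration, and far simpler than the interprocedural analyses discussed above), here is a constancy analysis over straight-line assignments together with the constant-propagation transformation it justifies:

```python
# Toy illustration: a constancy analysis over straight-line assignments,
# and the transformation the analysis licenses. The representation
# (a list of (variable, expression) pairs) is invented for this sketch.

def analyze(program):
    """Map each variable to a known constant, or None if unknown."""
    env = {}
    for var, expr in program:
        if isinstance(expr, int):
            env[var] = expr                 # literal: known constant
        elif expr in env and env[expr] is not None:
            env[var] = env[expr]            # copy of a known constant
        else:
            env[var] = None                 # unknown (e.g. external input)
    return env

def transform(program, env):
    """Replace variable references the analysis proved constant."""
    out = []
    for var, expr in program:
        if isinstance(expr, str) and env.get(expr) is not None:
            out.append((var, env[expr]))
        else:
            out.append((var, expr))
    return out

prog = [("x", 2), ("y", "x"), ("z", "input")]
print(transform(prog, analyze(prog)))
# [('x', 2), ('y', 2), ('z', 'input')]
```

Even here, proving the transformation correct requires relating the analysis result to the language's semantics; the research above asks how to do this systematically for realistic interprocedural analyses.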

More recently, we have been studying the foundations of program analyses, providing new techniques for formulating and proving the soundness of various analyses. We seek to extend this work to context- and flow-dependent analyses, and to languages in which procedures may be invoked implicitly via events, callbacks, or aspects. (Recent Papers)

Aggregate Update Analysis

Destructive update insertion is a compiler optimization that transforms functional array updates into imperative updates whenever the analysis determines that the array being updated is dead following the update. The effectiveness of this optimization depends on the precision of the lifetime analysis; its practicality depends on the cost of the analysis. In 1993, Clinger and Sastry developed the first interprocedural destructive update analysis that runs in polynomial time. For typical programs, its performance is nearly linear. Wand and Clinger have developed a proof of correctness for this algorithm; it is a complex proof involving three analyses and several different semantics. (journal paper; shorter conference version).
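The transformation itself (though not the analysis) is easy to illustrate. The following Python sketch, with invented names, shows the pure update the compiler starts from and the in-place version that becomes legal once the analysis has shown the argument array is dead:

```python
# Sketch of the transformation, not of Clinger and Sastry's analysis:
# a functional array update must copy, but once a lifetime analysis has
# shown the argument array is dead after the call, the copy can be
# replaced by an in-place store.

def functional_update(arr, i, v):
    """Pure update: returns a new array, leaving `arr` intact."""
    new = list(arr)     # copy is required while arr may still be used
    new[i] = v
    return new

def destructive_update(arr, i, v):
    """In-place update: legal only when `arr` is dead after this call."""
    arr[i] = v
    return arr

a = [1, 2, 3]
b = functional_update(a, 0, 9)   # a survives, so a copy is made
print(a, b)                      # [1, 2, 3] [9, 2, 3]

c = destructive_update(b, 1, 7)  # analysis proved b dead: reuse storage
print(c, c is b)                 # [9, 7, 3] True
```

The precision question in the paragraph above is exactly how often the analysis can establish the deadness that licenses the second form.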

We seek to evaluate the effectiveness of this algorithm both theoretically and experimentally, and to extend it to more powerful settings.

Generational Garbage Collection

This work seeks to find new algorithms for generational garbage collection and to explore, both experimentally and theoretically, the behavior of garbage collection algorithms. This work is supported by the National Science Foundation under grant CCR-9629801.

Almost-Denotational Proofs of Almost-Functional Languages

This work explores methods for proving the correctness of transformations of programs in languages like Scheme and ML. The method consists of giving the programs a denotational semantics in an operationally-based term model in which interaction is the basic observable, and showing that the transformation is meaning-preserving. This allows us to consider correctness for programs that interact with their environment without terminating, and also for transformations that change the internal store behavior of the program. We have been able to do CPS'd versions of the Meyer-Sieber examples, and to prove the correctness of the assignment-elimination transform for Scheme. [Recent Papers][Greg Sullivan's thesis]
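Much simplified, and in Python rather than Scheme, the assignment-elimination transform replaces an assigned variable by a one-cell box, so that the resulting bindings are never themselves mutated:

```python
# Much-simplified illustration of assignment elimination. In the first
# counter the variable n is assigned directly; in the transformed
# version n is bound once to a one-cell box, and only the box's
# contents change. Both names are invented for this sketch.

def make_counter_with_assignment():
    n = 0
    def bump():
        nonlocal n          # direct assignment to the variable
        n = n + 1
        return n
    return bump

def make_counter_boxed():
    n = [0]                 # box: the binding of n is never reassigned
    def bump():
        n[0] = n[0] + 1     # set! becomes a store into the box
        return n[0]         # references become dereferences
    return bump

c1, c2 = make_counter_with_assignment(), make_counter_boxed()
print(c1(), c1())   # 1 2
print(c2(), c2())   # 1 2
```

After the transform, all variable bindings are immutable and mutation is confined to the store, which is what makes the semantic accounting in the proofs tractable.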

Compiler Correctness Proofs

We have worked on a number of projects in this area. Our work in the early 1980s established a flexible method for proving the correctness of semantics-based compilers. In recent years, we have attempted to make these techniques more applicable in several ways:

Our student Dino Oliva developed methods for bridging the gap between the abstract machines produced by semantics-directed compilation, which typically manipulate trees, and the standard structures of activation records, static and dynamic chains, etc. [Oliva's ftp directory] [Our LFP '92 paper]

As part of a cooperative project with the Mitre Corporation, we applied this technique to prove the correctness of a compiler for PreScheme, a dialect of Scheme tailored for systems programming, as part of a verified implementation of Scheme called VLisp. This research was published as a special double issue of the journal Lisp and Symbolic Computation. [ftp directory] [Introduction to the report]

With Paul Steckler we developed an interprocedural flow analysis for compilers that are capable of designing a custom calling sequence for each procedure or method. The problem here is that, in higher-order and object-oriented languages, several different procedures may be called from the same site. Our flow analysis ensures that the specialized calling protocol used for each procedure matches the protocol used at each of its possible call sites. [TOPLAS paper]
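The consistency condition can be sketched as follows (an invented mini-example, not the analysis from the TOPLAS paper): a call site may use a specialized calling protocol only if every procedure that can flow to that site uses the same protocol:

```python
# Invented mini-example of the consistency condition: a call site keeps
# its specialized protocol only if all procedures that may be called
# there agree on it; otherwise it must fall back to a generic protocol.

def choose_protocols(call_sites, protocol_of):
    """call_sites maps each site to the set of procedures that the flow
    analysis says may be called there; protocol_of gives each
    procedure's specialized calling protocol."""
    chosen = {}
    for site, procs in call_sites.items():
        protos = {protocol_of[p] for p in procs}
        chosen[site] = protos.pop() if len(protos) == 1 else "generic"
    return chosen

protocol_of = {"f": "registers", "g": "registers", "h": "stack"}
sites = {"site1": {"f", "g"},    # both callees use the same protocol
         "site2": {"g", "h"}}    # mixed protocols: must stay generic
print(choose_protocols(sites, protocol_of))
# {'site1': 'registers', 'site2': 'generic'}
```

The hard part, which this sketch omits, is computing which procedures reach each site in the presence of higher-order and object-oriented control flow.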

With David Gladstein, we extended these results to prove the correctness of a simple compiler for a parallel language. [CONCUR '96 paper]

Complexity of Type Inference Problems

With O'Keefe, Palsberg and Tiuryn, we have considered a number of type inference problems and their complexity.

Object-Oriented Programming

With Palsberg and Lieberherr, we have considered a number of problems in this area, mostly concerned with typing. Most recently, with Johan Ovlinger, we proposed a domain-specific language for dealing with object traversals.

Mitchell Wand
College of Computer Science, Northeastern University
360 Huntington Avenue #161CN,
Boston, MA 02115
Phone: (617) 373-2072 / Fax: (617) 373-5121

[College of Computer Science | Northeastern University]