Analysis-Based Program Transformation

We are interested in understanding just how a program analysis justifies the program transformation that is typically based upon it. Many interesting optimizations depend on interprocedural analyses, which collect information about program units larger than a single procedure. Such optimizations are widespread in higher-order languages such as Scheme or ML, and play an important role in ordinary imperative languages as well. Despite decades of work on abstract interpretation and related analysis frameworks, there are still very few examples in which an analysis is actually used to prove the correctness of an associated transformation. We seek to understand this gap by working through a series of examples related to the compilation of higher-order "almost-functional" languages like Scheme or ML. We have studied a sequence of analysis-based transformations, including offline partial evaluation, lightweight closure conversion, useless variable elimination, and destructive update introduction. We also seek to unify analysis-based transformations with the theory of transformations based on contextual equivalence, and to extend all of this theory to programs that compute in parallel or distributed settings.
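
As a small illustration (ours, not drawn from the papers mentioned here), consider useless variable elimination: if the analysis determines that a parameter never contributes to the observable result, both the parameter and the work of computing its arguments can be removed. A minimal OCaml sketch, with hypothetical names:

    (* Hypothetical illustration of useless variable elimination.
       The analysis finds that [trace] never contributes to the
       result, so the parameter and the work of building it can be
       dropped at every call site. *)

    (* Before: [trace] is threaded through but never inspected. *)
    let rec len_before xs trace =
      match xs with
      | [] -> 0
      | _ :: rest -> 1 + len_before rest (List.length trace :: trace)

    (* After: the useless parameter is gone. *)
    let rec len_after xs =
      match xs with
      | [] -> 0
      | _ :: rest -> 1 + len_after rest

    let () = assert (len_before [1; 2; 3] [] = len_after [1; 2; 3])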

More recently, we have been studying the foundations of program analyses, providing new techniques for formulating and proving the soundness of various analyses. We seek to extend this work to context- and flow-dependent analyses, and to languages in which procedures may be invoked implicitly via events, callbacks, or aspects. (Recent Papers)

Aggregate Update Analysis

Destructive update insertion is a compiler optimization that transforms functional array updates into imperative updates whenever the analysis determines that the array being updated is dead following the update. The effectiveness of this optimization depends on the precision of the lifetime analysis; its practicality depends on the cost of the analysis. In 1993, Clinger and Sastry developed the first interprocedural destructive update analysis that runs in polynomial time. For typical programs, its performance is nearly linear. Wand and Clinger have developed a proof of correctness for this algorithm; it is a complex proof involving three analyses and several different semantics. (journal paper; shorter conference version).
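
A minimal sketch of the transformation itself (not of the Clinger-Sastry analysis), in OCaml with hypothetical names: the functional update copies the array and preserves the original, while the destructive version mutates it in place, which is safe only when the lifetime analysis shows the original array is dead after the update.

    (* Functional update: allocates and copies, preserving [a]. *)
    let update_pure (a : int array) (i : int) (v : int) : int array =
      let a' = Array.copy a in
      a'.(i) <- v;
      a'

    (* Destructive update: legal only when [a] is not referenced
       after the call; avoids the O(n) copy. *)
    let update_dest (a : int array) (i : int) (v : int) : int array =
      a.(i) <- v;
      a

    let () =
      (* The original array is dead after this update, so either
         version yields the same observable result. *)
      let a = Array.make 5 0 in
      let b = update_dest a 2 7 in
      assert (b.(2) = 7 && (update_pure (Array.make 5 0) 2 7).(2) = 7)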

We seek to evaluate the effectiveness of this algorithm both theoretically and experimentally, and to extend it to more powerful settings.

Almost-Denotational Proofs of Almost-Functional Languages

This work explores methods for proving the correctness of transformations of programs in languages like Scheme and ML. The method consists of giving programs a denotational semantics in an operationally-based term model in which interaction is the basic observable, and then showing that the transformation is meaning-preserving. This allows us to consider correctness for programs that interact with their environment without terminating, and also for transformations that change the internal store behavior of the program. We have been able to handle CPS'd versions of the Meyer-Sieber examples, and to prove the correctness of the assignment-elimination transform for Scheme. [Recent Papers] [Greg Sullivan's thesis]
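
As a rough illustration of the assignment-elimination transform (a sketch of the standard boxing idea, not of our proof), a variable that is the target of set! is rewritten to hold a mutable cell, so that all remaining variables are immutable. The OCaml fragment below, with hypothetical names, uses ref as the box:

    (* In Scheme syntax:
         (let ((x 0))            ==>   (let ((x (box 0)))
           (set! x (+ x 1))               (set-box! x (+ (unbox x) 1))
           x)                             (unbox x))
       Both sides denote the same observable result. *)

    let before_counter () =
      (* OCaml has no mutable let-bound variables, so rebinding
         stands in for the Scheme original here. *)
      let x = 0 in
      let x = x + 1 in   (* models (set! x (+ x 1)) *)
      x

    let after_counter () =
      let x = ref 0 in   (* x is boxed *)
      x := !x + 1;       (* set! becomes a cell update *)
      !x                 (* a reference becomes a dereference *)

    let () = assert (before_counter () = after_counter ())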
