Presented by Owen Landgren

Transcribed by Felix Klock

The core idea of partial evaluation: specialize a program with respect to its static inputs

- π in Processes
- m, n in Vars
- p in Programs
- s_{1}, ..., s_{m} in Static Inputs
- r_{1}, ..., r_{n} in Dynamic Inputs

Q (from Audience) : What is the difference between static and dynamic here? |

A: Static things are what we know at analysis time. Dynamic things are what we don't know until we actually run the program itself. |

So, let's assume that

π_{p}(s_{1}, ..., s_{m}, r_{1}, ..., r_{n}) = some new process (computation)

Well, if we apply the partial evaluator α as follows:

α(π_{p}, s_{1}, ..., s_{m}) = π_{s} (this application of α yields a new process π_{s})

π_{s}(r_{1}, ..., r_{n}) = π_{p}(s_{1}, ..., s_{m}, r_{1}, ..., r_{n}) (and this will be the same computation, now over the dynamic inputs alone)
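The defining equation can be made concrete with a tiny hand-written specializer. This is a hypothetical sketch of my own (the names pi_p, alpha, pi_s mirror the notation above, and the "program" is exponentiation with the exponent static and the base dynamic); a real partial evaluator would of course work on arbitrary programs.

```python
# Hypothetical sketch: alpha specializes the exponentiation process pi_p
# with respect to its static input s, emitting a residual process pi_s.

def pi_p(s, r):
    """The general process: computes r ** s, with s static and r dynamic."""
    result = 1
    for _ in range(s):
        result *= r
    return result

def alpha(s):
    """A (very narrow) partial evaluator for pi_p: unrolls the loop at
    specialization time, leaving residual code over r alone."""
    body = " * ".join(["r"] * s) if s > 0 else "1"
    residual_source = f"lambda r: {body}"   # e.g. "lambda r: r * r * r"
    return eval(residual_source)

pi_s = alpha(3)
# pi_s(r_1, ..., r_n) = pi_p(s_1, ..., s_m, r_1, ..., r_n)
assert pi_s(2) == pi_p(3, 2) == 8
```

The loop over the static input disappears entirely from the residual program; only the dynamic multiplications remain.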

Clearly this has applications in compiler engineering, where you want to use information available at compile-time to produce an optimized program

But Partial Evaluation isn't just for optimization

Let's assume we had an Interpreter process, Interp(p, d)

Then we could apply α like so:

α(Interp, p) ==> Executable; we get a new process, Executable, where

Executable(d) = result of interpretation

This means that α(Interp, p) is a compiled version of p

This is known as the first Futamura projection
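The first projection can be illustrated with a toy of my own devising (interp, alpha, and the tuple-encoded expression language are all hypothetical, not from the talk): specializing the interpreter to a fixed p performs all the interpretive dispatch now, leaving an Executable that takes only the dynamic data d.

```python
# Hypothetical toy language: programs are nested tuples over one dynamic input d.
#   ("lit", n) | ("arg",) | ("add", e1, e2) | ("mul", e1, e2)

def interp(p, d):
    """The interpreter process Interp(p, d)."""
    tag = p[0]
    if tag == "lit": return p[1]
    if tag == "arg": return d
    if tag == "add": return interp(p[1], d) + interp(p[2], d)
    if tag == "mul": return interp(p[1], d) * interp(p[2], d)
    raise ValueError(tag)

def alpha(interp_, p):
    """Specialize the interpreter to p: do every dispatch on p's syntax at
    specialization time, emitting residual code over d alone."""
    def emit(p):
        tag = p[0]
        if tag == "lit": return str(p[1])
        if tag == "arg": return "d"
        if tag == "add": return f"({emit(p[1])} + {emit(p[2])})"
        if tag == "mul": return f"({emit(p[1])} * {emit(p[2])})"
        raise ValueError(tag)
    return eval(f"lambda d: {emit(p)}")   # Executable(d)

p = ("add", ("mul", ("arg",), ("arg",)), ("lit", 1))   # d*d + 1
executable = alpha(interp, p)
assert executable(3) == interp(p, 3) == 10
```

All the tag tests on p have vanished from the residual Executable; only arithmetic on d survives, which is exactly what a compiler for this toy language would emit.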

Where can we go from here? Well

α(α, Interp) = α'(Interp, _), and α'(Interp, _) is A COMPILER

This is known as the second Futamura projection

This sounds exciting. In particular, it means that we can generate compilers from interpreters with the same ease that we generate parsers from grammars; we would just use α in place of a parser generator

    Syntax
       |
       v
    Parser        <--  ParserGen  <--  Syntax
       |
       v
    Compiler      <--  α          <--  Interp
       |
       |  (meta lang terms)
       v
    MetaCompiler
       |
       v
    Object Code

This is great! Except we don't have α yet...

Finally, Futamura poses some questions

- What are the criteria for compiler generation (What kinds of interpreters can we put in)?
- How much easier is it to describe an interpreter than a compiler?
- Can we make the generated compilers efficient?
- What meta language should we be using?

- Summarized history of Partial Evaluation; Futamura was first to publish, but others had discussed the idea earlier
- Provides a working Partial Evaluator
- Describes α(α, α) = Cogen = α'(α,_)
This is known as the third Futamura projection

α'(α, _)(Interp) = α''(Interp, _), a compiler that can then be applied to p
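The shapes of all three projections can be checked mechanically with functools.partial standing in for α. This is a hypothetical illustration of my own: partial merely defers arguments instead of specializing, so it demonstrates the types of the projections, not their performance benefit.

```python
from functools import partial

def interp(p, d):
    """Toy 'interpreter': here p is just a Python function and d its input."""
    return p(d)

def alpha(process, *static):
    """Stand-in partial evaluator: residuates the static arguments
    instead of actually specializing the process to them."""
    return partial(process, *static)

square = lambda d: d * d

# 1st projection: alpha(Interp, p) is a compiled version of p
executable = alpha(interp, square)
assert executable(4) == 16

# 2nd projection: alpha(alpha, Interp) is a compiler
compiler = alpha(alpha, interp)
assert compiler(square)(4) == 16

# 3rd projection: alpha(alpha, alpha) is a compiler generator (cogen)
cogen = alpha(alpha, alpha)
assert cogen(interp)(square)(4) == 16
```

Note that the second and third projections require α to be applicable to itself, which is the hard part in practice; partial sidesteps that entirely, which is why it can only show the plumbing.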

Classifies two approaches

- Functional
- Operational

p = f(x,y)

p_{x}= g(y)

p in C = all computation steps

Q (from Audience) : What do you mean by computation steps? Are p traces, or individual operations? |

A: Each p is an atomic operation. |

μ in C' union C''

- C' = permissible operations
- C'' = suspensible operations

Elements of C shift between C' and C'' as you discharge operations

We're finding a fixed point of C''
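My reading of this operational picture can be sketched as a worklist fixed point (the operation names, the deps table, and the discharge rule are all my own hypothetical example, not Ershov's formulation): every operation starts suspended in C'', and an operation is discharged into C' once everything it depends on has been discharged; iterate until nothing changes.

```python
# Hypothetical sketch: discharge operations from C'' (suspended) into
# C' (permissible) until a fixed point is reached.

deps = {            # op -> operations whose results it needs
    "load_s": set(),
    "add":    {"load_s"},
    "load_r": {"read_input"},   # depends on a dynamic source, never discharged
    "mul":    {"add", "load_r"},
}

permissible = set()             # C'
suspended = set(deps)           # C''
changed = True
while changed:                  # the fixed point of C''
    changed = False
    for op in list(suspended):
        if deps[op] <= permissible:     # all prerequisites discharged
            suspended.discard(op)
            permissible.add(op)
            changed = True

assert permissible == {"load_s", "add"}
assert "mul" in suspended       # blocked transitively by the dynamic load_r
```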

Reuses the S-m-n theorem (the theorem is not Ershov's invention)

assume we have a universal function U of n arguments where

U^{n}(x_{1}, ..., x_{n}) : interpreter for its 1st argument

then there exists S such that

U^{m+n}(N, x_{1}, ..., x_{m+n}) = U^{n}(S^{n}_{m}(N, x_{1}, ..., x_{m}), x_{m+1}, ..., x_{m+n})
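The S-m-n theorem is constructive, and its construction is what closures give you for free in any functional language. A hypothetical sketch (treating "programs" as Python functions, U as application, and S as closure formation; the names are mine):

```python
# Hypothetical sketch of S-m-n with programs-as-functions.

def U(N, *args):
    """Universal function: run program N on the given arguments."""
    return N(*args)

def S(N, *fixed):
    """S^n_m: from program N and its first m arguments, produce a new
    'program index' expecting only the remaining n arguments."""
    return lambda *rest: N(*fixed, *rest)

def add3(a, b, c):
    return a + b + c

# U^{m+n}(N, x_1, ..., x_{m+n}) = U^{n}(S(N, x_1, ..., x_m), x_{m+1}, ..., x_{m+n})
assert U(add3, 1, 2, 3) == U(S(add3, 1), 2, 3) == 6
```

The connection to the talk: S is exactly a (trivial, non-optimizing) partial evaluator, which is why the theorem keeps reappearing in this literature.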

- L = (D, P, V)
- L_{1} = (D_{1}, P_{1}, V_{1})
- V : P x D -> D
- L_{1} is main if the diagram below commutes (i.e., κ o V o (T x τ) = V_{1}):

                        V
           P x D -----------> D
             ^                |
       T x τ |                | κ
             |                v
       P_{1} x D_{1} ------> D_{1}
                     V_{1}

where

- T : P_{1} -> P
- τ : D_{1} -> D
- κ : D -> D_{1}

Q (from Audience) : But that lattice doesn't commute. |

A: Sure it does. |

Q (from Audience) : No it doesn't. |

A: Yes it does. |

Q (from Audience) : What direction are those top and bottom arrows going in? |

A: To the right. |

Q (from Audience) : It doesn't commute. |

A: Yes it does. You're thinking of the picture that category theorists draw, but that's just an example of commutativity in general. Here, we're saying that you should be able to compose T x τ with V and κ to get V_{1}. |
- then we can define
- ρ : D -> D x D (and it has to be a partitioning)
- G : P x D -> P
- V >= V o (G x e_{D}) o (e_{P} x ρ)

where

- (e_{P} x ρ)(p, d) = (p, ρ_{1}(d), ρ_{2}(d))
- (G x e_{D})(p, ρ_{1}(d), ρ_{2}(d)) = (G(p, ρ_{1}(d)), ρ_{2}(d))

Apparently this means that we can partially evaluate any language of interest into some basic core language.

References MIX '85 (by Jones)

Defines Similix-2 (an attempt at an auto-projectable meta-language)

Q (from Audience) : What is autoprojection? |

A: It's the 3rd Futamura projection, α(α, α) |

Features of the meta-language

- User-defined operations (statically defined procedures)
- Side-effects
- Implementation also guarantees that the residual program will never duplicate or discard computations.

Performs Binding Time Analysis to determine which calls are static vs dynamic

We can classify functions according to a two element lattice: S (static) <= D (dynamic)
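A minimal sketch of such an analysis, assuming a made-up tuple-encoded expression language and my own function names (the two-point lattice S <= D is from the talk; everything else is illustrative): an expression is static only if everything it depends on is static.

```python
# Hypothetical binding-time analysis over the lattice S <= D.

S, D = "S", "D"

def join(a, b):
    """Least upper bound in the two-point lattice S <= D."""
    return D if D in (a, b) else S

def bt(expr, env):
    """Binding time of a tiny expression language: int literals,
    variable names, and binary operations ("op", e1, e2)."""
    if isinstance(expr, int):
        return S                  # literals are statically known
    if isinstance(expr, str):
        return env[expr]          # variables carry their declared binding time
    _, e1, e2 = expr
    return join(bt(e1, env), bt(e2, env))

env = {"s": S, "r": D}
assert bt(("op", "s", 2), env) == S     # purely static: execute at P.E. time
assert bt(("op", "s", "r"), env) == D   # touches a dynamic input: residualize
```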

Higher order functions are put into D, which eliminates P.E. benefits

Closure analysis: generate the set of all possible functions at a call site

- Classify primitives { executable, suspending }
- Classify procedures { executable, residual, dynamic }

Closure analysis on Higher Order Functions

Maintain two mappings: μ : Labels -> CLSets, and ρ : Variables -> CLSets

This somehow defines a lattice with elements D, S, Cl, and bot, where S <= D, Cl <= D, bot is less than everything, and S and Cl are incomparable.
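The four-point lattice just described can be encoded directly; a hypothetical sketch (the upper-set encoding and function names are mine), where joining the incomparable S and Cl forces D:

```python
# Hypothetical encoding of the lattice  bot <= S <= D,  bot <= Cl <= D,
# with S and Cl incomparable. Each element maps to its set of upper bounds.

UPPER = {
    "bot": {"bot", "S", "Cl", "D"},
    "S":   {"S", "D"},
    "Cl":  {"Cl", "D"},
    "D":   {"D"},
}

def leq(a, b):
    """a <= b iff b is among a's upper bounds."""
    return b in UPPER[a]

def join(a, b):
    """Least upper bound: the lowest common upper bound of a and b
    (lower elements have strictly more upper bounds in this lattice)."""
    common = UPPER[a] & UPPER[b]
    return max(common, key=lambda x: len(UPPER[x]))

assert join("S", "Cl") == "D"   # might be base value or closure: dynamic
assert join("bot", "Cl") == "Cl"
assert leq("S", "D") and not leq("S", "Cl")
```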

An insight: the way you write the interpreter transforms the __output of the compiler__

Interpreter | Output
---|---
Direct Style | Direct Style
CPS Style | CPS Style
Call-by-Name | Call-by-Name
Call-by-Value | Call-by-Value

Q (from Audience) : The latter two points aren't really stylistic decisions; they're part of the definition of the language you're interpreting. |

A: Yep. |

Q (from Audience) : So why should we be surprised by the result for the last two cases? It would be more surprising if the output didn't behave using the same semantics for procedure calls. |

A: Fair enough |

This is just a survey paper, summarizing work up until that point.

New things have come up in the world of Partial Evaluation, but those results are in post-2000 papers.