On this page:
Welcome
Classical Denotational Semantics
Classical Operational Semantics
Defining Macros
Hygienic Macros
Higher-Order Contracts
The Operational Semantics of Gradual Typing
Automatic Differentiation
Meetings
Hindley-Milner Type Inference
System F Type Systems: From Theory to Practice
Ideas: From Conception to Programming PER Semantics of Types
The PI Calculus
Session Types
Functional Reactive Programming
Tracing JIT Compilation Compiler Generation
Meetings
Programming Languages and Operating Systems
Array Programming Languages
Kanren Relational Programming
How (Not) to Benchmark
Image-Based Programming Systems
Logical Relations
Intermediate Compiler Forms for Functional Languages
Designing Multi-Language Systems
Refinement Types
The Evolution of Dependent Types
The Implementation of Dependent-Type Proof Assistants
Visual Teaching Languages
Bibliography

Lectures

This page simultaneously serves as a repository for lecture abstracts and a presentation schedule. It will remain a permanent record of this instance of HoPL.

Welcome

19 Jan 2021

I will give a brief presentation on the dawn of programming language history and how researchers began to tease out essential concepts—syntax, scope, types, semantics, pragmatics—and the first ideas that showed up within these categories.

We will also discuss why/how the lambda calculus quickly appeared as the central idea to study these concepts, culminating in the thoroughly under-appreciated dissertation of Morris [1968].

At the end, I will explain how this course works, its focus, and its organization.

Classical Denotational Semantics

Matthias Felleisen

As the Welcome explained and as you may recall from your undergraduate course on PPL, formulating a semantics for a programming language emerged as the first hard problem. People imagined that, like syntax, semantics could be specified at a rather high level and, like parsers, code generators could be generated from a semantics.

Scott’s denotational semantics [Stoy 1977] of the (typed and untyped) lambda calculus emerged as the first dominant approach. Because denotational semantics seems to leverage ordinary mathematical ideas (algebra, geometry, topology, etc.), researchers hoped that tools from these areas could be used to analyze program behavior and that these analyses could then play a role in code generation and optimization. Sadly, this research direction ran into a rather unsolvable problem, now often called the “full abstraction” problem [Milner 1977; Plotkin 1977b].

While (classical) denotational semantics has disappeared (in the US), modern operational semantics directly draws on its ideas and will continue to do so in the future.

Classical Operational Semantics

Matthias Felleisen

The very first efforts in operational semantics focused on two different directions: interpreters [McCarthy 1960] and abstract machines [Landin 1964].

Compared to denotational semantics, people recognized early on that both approaches to operational semantics feel ad hoc and seem to make it difficult to study the meaning of programs systematically. Reynolds [1972]’s devastating critique of operational semantics—he uses the phrase meta-interpreter, but the critique applies equally to related work—looked like a death sentence.

In parallel, though, Plotkin [1977a; 1981] developed two more frameworks for operational semantics, both of which play a significant role nowadays: lambda calculus as a semantics and so-called structural operational semantics. Clement et al. [1985]’s development of the closely related natural semantics demonstrated that this idea could be useful for compiler-generation frameworks.

Note These ideas are often covered to varying degrees in iPPL (7400). This presentation is from my perspective, recalling my own days as a second-year PhD student.

Defining Macros

Jared Gentner

Some language families do not respect the idea of fixed syntax, scope, semantics, and so on. The Lisp family is one of them.

From almost the very beginning, Lisp languages have offered programmers mechanisms for changing almost every aspect of the language. The key mechanism is a macro—a function from (concrete) syntax tree to syntax tree—which a programmer can add to the front end of the compiler. Programmers can use macros to eliminate syntactic patterns and even to build finely integrated domain-specific languages.
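
The phrase “function from syntax tree to syntax tree” can be made concrete with a small sketch. The following is illustrative Python rather than Lisp: syntax trees are nested lists standing in for S-expressions, and the names `defmacro`, `expand`, and `my-when` are invented for this example, not any particular system’s API.

```python
# A macro is a function from syntax tree to syntax tree. Here syntax
# trees are nested Python lists standing in for Lisp S-expressions.

MACROS = {}

def defmacro(name):
    """Register a syntax-to-syntax function under `name`."""
    def register(transformer):
        MACROS[name] = transformer
        return transformer
    return register

@defmacro("my-when")
def expand_when(form):
    # (my-when test e1 e2 ...)  =>  (if test (begin e1 e2 ...) #f)
    _, test, *body = form
    return ["if", test, ["begin", *body], "#f"]

def expand(form):
    """Expand macros everywhere in the tree, outermost first."""
    while isinstance(form, list) and form \
            and isinstance(form[0], str) and form[0] in MACROS:
        form = MACROS[form[0]](form)
    if isinstance(form, list):
        return [expand(sub) for sub in form]
    return form
```

The expander rewrites uses of `my-when` into core forms before the rest of the compiler ever sees them, which is exactly the sense in which a macro extends the compiler’s front end.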

The challenge is to write these syntax-to-syntax functions effectively. Beginning in the 1980s, researchers paid attention to this problem and took four significant steps:

Hygienic Macros

Michael Ballantyne

Parallel to the development of macro-definition systems, researchers also worked on understanding how syntax-to-syntax functions “mess” with the lexical scope of a program. Concretely, using a macro may inadvertently capture identifiers in a way that violates a natural understanding of the surface syntax and its binding rules. The resulting bugs are rather insidious.

Kohlbecker et al. [1986] pointed out the problem and offered a first, simplistic solution, drawing on Barendregt’s tradition of treating the lambda calculus as a system of terms “modded out” under alpha equivalence (“the hygiene condition”). In this context, alpha renaming—known to “younglings” as identifier-rename refactoring—means picking a different concrete representative for the same equivalence class. Kohlbecker et al. describe an algorithm that properly preserves the apparent scope uniformly for all programs: the hygienic macro expansion algorithm.
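
A minimal sketch of the capture problem, in Python for illustration (the `or`-style macro and the tiny evaluator are hypothetical stand-ins for a real expander): a naive expansion of `(or a b)` into `(let ((t a)) (if t t b))` captures any user variable named `t`, while picking a fresh representative of the binder’s alpha-equivalence class preserves the apparent scope.

```python
import itertools

fresh = (f"t~{i}" for i in itertools.count())  # gensym-style fresh names

def expand_or_naive(a, b):
    # (or a b) => (let ((t a)) (if t t b)); the binder is always "t"
    return ["let", [["t", a]], ["if", "t", "t", b]]

def expand_or_hygienic(a, b):
    # Choose a fresh concrete representative for the binder, as
    # alpha renaming allows.
    t = next(fresh)
    return ["let", [[t, a]], ["if", t, t, b]]

def evaluate(form, env):
    """Tiny evaluator for variables, literals, let, and if."""
    if isinstance(form, str):
        return env[form]
    if isinstance(form, (int, bool)):
        return form
    if form[0] == "let":
        (name, rhs), = form[1]
        return evaluate(form[2], {**env, name: evaluate(rhs, env)})
    if form[0] == "if":
        return evaluate(form[2] if evaluate(form[1], env) else form[3], env)
    raise ValueError(form)
```

With the user’s own `t` bound to 42, `evaluate(expand_or_naive(False, "t"), {"t": 42})` yields `False`: the macro’s binder has captured the user’s variable. The hygienic expansion yields `42`.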

Following this tradition, Dybvig et al. [1993] generalize this expansion algorithm and improve its performance in significant ways. Their algorithm was the gold standard for nearly three decades.

In 2016, Flatt [2016] presented an entirely different algorithm, which he implemented for Racket. The algorithm was quickly used to implement macros for JavaScript.

Higher-Order Contracts

Cameron Moy

In the late 1960s, software engineering researchers coined the phrase “software crisis.” With computational power accelerating at an exponential rate, everyone expected programmers’ ability to deliver complex software to improve substantially, but alas it didn’t. So researchers began to ponder how to deal with this problem.

McIlroy [1968] is the first to propose a component marketplace that could supply high-quality, generic—as in interchangeable— and configurable modules, for use in many different systems. Parnas [1972] recognizes that, to be practical, such components would need to be accompanied by a formal interface specification.

The Eiffel programming language represents the first large-scale attempt to realize these aspirations. Meyer [1992b]’s contract system for Eiffel allows programmers to express pre-conditions that a caller must satisfy in a method signature and post-conditions that the result is guaranteed to satisfy. (From a software engineering perspective, this system encourages programmers to replace defensive programming—over-burdening method definitions with extensive checks—with concise interface contracts, which is where such checks belong for the client programmer.) If a method’s pre-condition fails, it is the caller’s fault; if the post-condition fails, the contract system points to the callee as the potential source of error. In fact, in Meyer [1992a]’s eyes, the contract system is the underpinning of his entire software design philosophy: “design by contract.”

Some ten years later Findler and Felleisen [2001] point out that contracts in Eiffel violate the Liskov substitution principle and, as a result, occasionally point to the wrong component as the faulty one. To rectify this flaw, they propose a “monitoring” approach that goes beyond checking just the pre- and post-conditions.

By this time, programming languages also begin to include higher-order elements, such as Scheme’s lambda or Python’s functions on classes. Correctly handling these now-common programming language features with the simplistic contract system of Eiffel, however, proves impossible. In response, Findler and Felleisen [2002] extend the contract system to higher-order functions and develop a “blame assignment” system, setting off a flurry of research into the subject.
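
The key move of this extension can be sketched in a few lines. The Python rendering below is a simplification for illustration, not the paper’s formal system; the names `check` and `ContractError` are made up. Flat contracts are predicates checked immediately, while a function contract wraps the value in a proxy that swaps the blame parties for the domain check.

```python
class ContractError(Exception):
    def __init__(self, party):
        super().__init__(f"contract violation: blame {party}")
        self.party = party

def check(contract, value, pos, neg):
    """Attach a contract; `pos` is blamed for bad outputs,
    `neg` (the client) for bad inputs."""
    if callable(contract):                  # flat, first-order predicate
        if not contract(value):
            raise ContractError(pos)
        return value
    dom, rng = contract                     # higher-order: (domain, range)
    def proxy(arg):
        # The caller supplies the argument, so the blame parties are
        # swapped for the domain check; the range check blames `pos`.
        return check(rng, value(check(dom, arg, neg, pos)), pos, neg)
    return proxy
```

An `int -> int` contract on `lambda x: x + 1` then blames the client when the function is applied to a string, and blames the server when the function itself returns a non-integer.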

Subsequent work turns to many aspects and applications of higher-order contracts: semantics (what does it mean for a component to satisfy its contracts, what is “blame”), pragmatics (does blame assignment help programmers), implementation technology (how can contracts effectively and efficiently protect mutable objects), and more. The idea also finds its way into gradual typing via the system’s re-use in the construction of Typed Scheme. The most notable development investigated the correctness and completeness of blame assignment, resulting in the complete monitoring property: it demands that a contract system assign blame correctly and monitor all channels of communication between modules [Dimoulas et al. 2012].

The Operational Semantics of Gradual Typing

Olek Gierczak

Gradual Typing seeks to combine static with dynamic typing while maintaining as many benefits of each as possible, such as the program-analysis benefits of static typing and the legacy-code support of dynamic typing. The prehistory of Gradual Typing is work on interoperability between statically and dynamically typed languages, one notable example being interoperability between Java and Scheme [Gray et al. 2005]. Echoing interoperability’s emphasis on how values flow between language boundaries, research on gradually typed systems emphasizes how a value is checked at the boundary between the statically and the dynamically typed portions of a program.

The original core formalizations of gradually typed systems introduce the Natural semantics for checking values at boundaries [Siek and Taha 2006; Tobin-Hochstadt and Felleisen 2006]. Alternatives are due to Vitousek et al. [2014] (also see [Vitousek et al. 2017]), who propose “transient” checks, and to industrial systems such as TypeScript, which “erase” boundaries [Bierman et al. 2014].

For Natural, checks are inserted and propagated so that when a value must have a certain type, it is either checked fully against that type or wrapped in a delayed-checking monitor, similar to a higher-order contract. For Transient, the translation adds checks that ensure only first-order properties: for example, when a function type is expected, Transient checks merely that the value is a function of some type. For Erasure, as the name suggests, the translation adds no checks at all. The type is “erased,” meaning values flow without restriction between different regions of code.
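
The three boundary-checking regimes can be contrasted with a small sketch. In this illustrative Python rendering (not any published formalization), a type is either the string "int" or a (domain, range) pair standing for a function type.

```python
def natural(value, typ):
    """Check fully: base types now, function types via a delayed-checking wrapper."""
    if typ == "int":
        if not isinstance(value, int):
            raise TypeError("boundary: expected int")
        return value
    dom, rng = typ                          # function type (dom -> rng)
    if not callable(value):
        raise TypeError("boundary: expected a function")
    # Monitor: check the argument and result each time the function is used.
    return lambda arg: natural(value(natural(arg, dom)), rng)

def transient(value, typ):
    """Check only the top-level constructor; never look inside."""
    ok = isinstance(value, int) if typ == "int" else callable(value)
    if not ok:
        raise TypeError("boundary: wrong constructor")
    return value

def erasure(value, typ):
    """No check at all: the type is erased."""
    return value
```

For `bad = lambda x: "hi"` crossing a boundary at type ("int", "int"), Natural’s wrapper raises an error as soon as the result is claimed to be an int; Transient accepts the call, since `bad` is a function of some type; Erasure lets the string flow onward unchecked.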

Given these varying semantics, researchers sought to compare them and to justify their particular design choices with formal properties of the languages. In general, Erasure either fails to satisfy these properties or satisfies them trivially. The most familiar property is type soundness. In the case of Natural, values have exactly the expected type; in Transient, a value has only a matching top-level constructor, because first-order checks do not inspect values in depth [Greenman and Felleisen 2018].

More recently, Greenman et al. [2019] show that Natural satisfies interesting properties that Transient fails to satisfy. Technically, they present and prove a version of complete monitoring for gradually typed languages. Originally developed for the contract world, complete monitoring demands that whenever a value crosses a type boundary, it can be checked completely for all possible uses. Natural satisfies this property; Transient does not.

As an alternative approach to comparing these semantics of gradual typing, Chung et al. [2018] give three example programs in a common framework—KafKa—and use them to demonstrate differences between the three semantics. These litmus tests exercise the occurrence of unrelated types, incompatible types, and crossings of multiple boundaries. All three programs run under Erasure; the unrelated-types example signals a problem under Transient as well as Natural; and the multiple-boundaries example signals a violation under Natural only.

Automatic Differentiation

Daniel Melcer

Automatic differentiation allows a programmer to calculate the derivative of a function or pieces of the Jacobian matrix when a function has several inputs or outputs.

Wengert [1964] presents the first Western description of automatic differentiation using dual numbers, that is, structures that track the value of a function at a given input along with its derivative at that same input. (Before the “iron curtain” came down, applied mathematicians and theoretical computer scientists in the West were unaware of advances in the East, such as Beda et al. [1959]’s work on forward-mode automatic differentiation.) This approach is later refined to use structs instead of size-2 vectors [Rall 1984].
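
Dual numbers are easy to demonstrate. The following Python sketch (not Wengert’s FORTRAN formulation) pairs a value with its derivative and overloads arithmetic so the pair is propagated through a computation.

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """A function's value at a point, paired with its derivative there."""
    val: float
    der: float

    def __add__(self, other):
        return Dual(self.val + other.val, self.der + other.der)

    def __mul__(self, other):   # product rule
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

def derivative(f, x):
    """Seed the input with derivative 1 and read off the output's derivative."""
    return f(Dual(x, 1.0)).der
```

For instance, `derivative(lambda x: x * x * x, 2.0)` evaluates 3·x² at x = 2, i.e., 12.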

This dual-number approach is effective when there are many output variables but few input parameters. But it must be run O(n) times in the number of input variables, which becomes inefficient as the number of inputs grows. Speelpenning [1980] introduces (an implementation of) the “reverse mode,” which keeps a log of computations. Using this log, it is possible to back-propagate the derivative of all intermediate values with respect to a single output value, eventually finding the gradient with respect to the entire input. Speelpenning’s tool is a precompiler, transforming FORTRAN source code to calculate derivatives. Source-transformation AD tools continued to improve—covering more of the full language—resulting in ADIFOR, the most popular tool for a long time [Bischof et al. 1992].
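
Reverse mode can be sketched just as briefly. In this illustrative Python version, each operation logs its inputs and local partial derivatives, and `backward` pushes the output’s sensitivity back through that log. For simplicity the sketch assumes each intermediate result is consumed by one later computation; a full implementation replays the tape in reverse topological order.

```python
class Var:
    """A value that records its parents and local partial derivatives."""
    def __init__(self, val, parents=()):
        self.val, self.parents, self.grad = val, list(parents), 0.0

    def __add__(self, other):
        return Var(self.val + other.val, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.val * other.val,
                   [(self, other.val), (other, self.val)])

def backward(out):
    """Back-propagate d(out)/d(node) from the output to every input."""
    out.grad = 1.0
    stack = [out]
    while stack:
        node = stack.pop()
        for parent, local in node.parents:
            parent.grad += local * node.grad
            stack.append(parent)
```

For w = x·y + x with x = 3 and y = 4, a single backward pass yields ∂w/∂x = y + 1 = 5 and ∂w/∂y = x = 3, no matter how many inputs the computation has.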

In 2007, Siskind and Pearlmutter [2007b,a] reported on a new, reification-based technique for AD in the world of higher-order languages. But also see Manzyuk et al. [2019] for unsolved problems in this context.

Reverse mode has higher memory requirements than forward mode. Griewank [1992] partially mitigates this issue through check-pointing (while increasing the amount of required computation). These source transformation tools cannot handle every program, notably failing upon encountering a recursive function. Griewank et al. [1996] finally introduces a C++ runtime library that handles the full reverse mode, avoiding the many pitfalls of source transformations.

Meetings

16 Feb 2021

I will meet with everyone 1-1 on this day to discuss your progress for your first topic. Here is the schedule:

time      student
09:50am   Helena
10:00am   Josh
10:10am   Ryan
10:20am   Nathan
10:30am   Mitch
10:40am   Andrew
10:50am   Donald
11:00am   Emily
11:10am   Lucy
11:20am   Ankit
11:30am   Olek
11:40am   Jared

We will use the normal class link for these meetings.

Each meeting will discuss the following items in order:
  • your reading progress

  • your abstract

  • your note taking

  • a “dress rehearsal”

  • if you are a PhD student, your second topic; for others, an optional second topic.

MS/undergraduate students who have delivered a lecture are excused.

Hindley-Milner Type Inference

Josh Goldman

The goal of type inference is to reduce the programmer’s workload. Generally speaking, type inference is an algorithm that restores the type specifications that the programmer leaves out. Every such omitted type specification is a variable in a system of constraints, determined by the surrounding program context. As with ordinary mathematical equation systems, a type inference algorithm may find no solution, an exact solution for all variables, or an infinite number of solutions. In this last case, the algorithm assigns many types to a function, say, and the function can be used at any of these types.
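
The constraint-solving view can be made concrete with a toy unifier. In this sketch (illustrative Python, not Algorithm W itself), type variables are strings beginning with "?" and function types are ("->", domain, range) tuples.

```python
def resolve(t, subst):
    """Follow a chain of variable bindings to its end."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    """Extend `subst` so that t1 and t2 become equal, or fail."""
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str) and t1.startswith("?"):
        return {**subst, t1: t2}
    if isinstance(t2, str) and t2.startswith("?"):
        return {**subst, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            subst = unify(a, b, subst)
        return subst
    raise TypeError(f"cannot unify {t1} and {t2}")

def walk(t, subst):
    """Apply the final substitution throughout a type."""
    t = resolve(t, subst)
    return tuple(walk(x, subst) for x in t) if isinstance(t, tuple) else t
```

The application `(f 1)`, whose result is used as a bool, generates the constraints ?f = int -> ?r and ?r = bool; solving them reveals f : int -> bool.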

The idea was pre-discovered ten years earlier by Hindley, an office mate of Milner, which is why it is known as Hindley-Milner type inference. Milner [1978] was the first to recognize the practical value of such an algorithm in the context of programming languages. He simultaneously injected the idea of a “well-typed program,” as characterized by Semantic and Syntactic Soundness Theorems. With his student Luis Damas, Milner later constructed Algorithm W to find the most general type for any expression in a functional subset of ML [Damas and Milner 1982]. Similar algorithms had been in PL folklore for some years.

Hindley-Milner type inference turned out to be less ideal than expected, both in terms of performance and pragmatics. First, programmers initially thought of W as a linear-time, or at least polynomial-time, algorithm until Mairson [1989] proved typability in ML complete for deterministic exponential time. Second, people expected that the Hindley-Milner form of type inference could be extended to a broad range of language features. In 1993, Henglein [1993] showed that type inference for the Milner-Mycroft variant is equivalent to semi-unification, a problem known to be undecidable. Due to both limitations, researchers and programmers have questioned the status of type inference for the past two decades.

System F Type Systems: From Theory to Practice

Ryan Guo

In the simply typed lambda calculus (STLC), functions such as the identity function must be re-defined for arguments of different types, even though these functions all share the same behavior. This duplication becomes more severe as programs become more complex.

An extension of the STLC, due to Reynolds [1974], introduces abstraction over types, i.e., parametrically polymorphic functions. This solves the aforementioned duplication issue. Reynolds proves a Representation Theorem for this system, which shows that different representations of primitive types do not affect the behavior of a typed program in this extension. The extension is better known as System F, a name due to Girard, who independently discovered the same system in the context of proof theory in logic.
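
In today’s terms, the duplication problem and the type-abstraction solution look roughly like this. The Python `typing` rendering below is only an approximation of System F, since the annotations are checked only by an external type checker.

```python
from typing import TypeVar

# Without abstraction over types, one identity function per type:
def id_int(x: int) -> int:
    return x

def id_str(x: str) -> str:
    return x

# With a type parameter (System F's capital-lambda), one definition serves all:
A = TypeVar("A")

def identity(x: A) -> A:
    return x
```

A checker instantiates `A` at each use site, so `identity(3)` has type `int` and `identity("hi")` has type `str`, with no duplicated definitions.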

Cardelli and Wegner [1985] propose to use the “Fun” language for studying extensions of this system. It brings together the ideas of parametric polymorphism from System F, existential quantification, and bounded quantification. However, bounded quantification cannot model objects that are elements of recursively defined types. In response, Canning et al. [1989] introduce F-bounded quantification.

Modern developers now rely on some of these ideas in the context of mainstream programming languages, a development that took three steps. The first is a language dubbed Pizza, due to Odersky and Wadler [1997]. The language runs on the JVM and syntactically extends Java, making use of F-bounded quantification and existential types to implement type-sound generic classes and methods. Pizza comes with abstract data types and higher-order functions, demonstrating their compatibility with Java. But the soundness constraints on these generics disallow linking Pizza code with certain Java libraries—raising a serious compatibility issue. To fix this flaw, Bracha et al. [1998] propose a small change, dubbed the GJ language. It is a conservative extension of Java that adds only generics; GJ introduces raw types to handle backward compatibility and retro-fitting to handle forward compatibility.

Ideas: From Conception to Programming PER Semantics of Types

Nathaniel Yazdani

abstract and citations needed

The PI Calculus

Mitch Gamburg

The PI-calculus is the most well-known and most well-developed formal system—among dozens—for studying concurrent and communicating software. Its immediate predecessor, the Calculus of Communicating Systems (CCS), describes a software component, called an agent, in terms of its interactions with an external observer, which may itself be an agent [Milner 1980]. All interactions take place as message exchanges via dedicated channels. Extended CCS [Engberg and Nielsen 1986] allows agents to send channels over channels. As a result, agents may dynamically change their link-based connections. The PI-calculus itself is the result of removing all distinctions between channels and other values (such as numbers) [Milner et al. 1992].
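
Channel passing, the step that separates Extended CCS and the PI-calculus from CCS, can be simulated with ordinary queues and threads. In this Python sketch, the `server` agent and the `interact` driver are invented for the example: a freshly created private channel is sent over a public one, rewiring the agents’ connections at run time.

```python
import queue
import threading

def server(link):
    """An agent: receive a reply channel over `link`, then answer on it."""
    reply = link.get()          # a channel arrives as an ordinary value
    reply.put("hello")

def interact():
    link = queue.Queue()        # public channel
    t = threading.Thread(target=server, args=(link,))
    t.start()
    private = queue.Queue()     # a fresh channel, itself first-class
    link.put(private)           # send the channel over a channel
    msg = private.get()         # the link structure has changed dynamically
    t.join()
    return msg
```

The message arrives on a channel that did not exist when the server started, which is precisely the dynamic reconfiguration that label passing enables.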

While the simplification of the value universe makes the PI-calculus more expressive than CCS and simplifies it at the same time, it also injects a sense of chaos and confusion. As Milner [1991] argues, the best way to inject discipline into such a world is to add a form of type system, which results in the polyadic PI-calculus. To facilitate communication in this context, the calculus also comes with tuple values.

Among its many variants, Pierce and Sangiorgi [1994]’s interface-oriented PI-calculus stands out. It extends Milner’s sorting discipline with specification of a channel’s interface: a channel could be read-only, write-only, or both read and write. By imposing a linearity constraint on such interfaces, Kobayashi et al. [1999] lay the foundation for behavioral types—also known as session types—for the agents of the PI-calculus.

Session Types

Andrew Wagner

5 Mar 2021

The development of process calculi in the 1970s and 1980s triggered the search for type systems to quell deadlocks and races from process-oriented programs.

One promising approach, due to Honda [1993], identifies a core set of types for interactions between two processes. Central to ensuring communication safety in Honda’s theory are the properties of linearity and duality. Takeuchi et al. [1994] build upon this set of types and introduce the notion of a session, which is analogous to a type abstraction from ML. Rounding out the formative phase of the theory, Honda et al. [1998] add recursion and channel delegation, a feature inspired by concurrent object-oriented programming.
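
Duality, a linchpin of the binary theory, is a simple syntactic operation. In this illustrative Python encoding (not any paper’s formal syntax), a session type is either "end" or a tuple (op, payload, continuation), where op is "!" for send and "?" for receive.

```python
def dual(s):
    """Swap sends and receives throughout a binary session type."""
    if s == "end":
        return "end"
    op, payload, rest = s
    return ({"!": "?", "?": "!"}[op], payload, dual(rest))
```

A client that sends an int and then receives a bool must be matched by a server with the dual type: receive an int, then send a bool. Duality is its own inverse, which is what makes it a good notion of compatibility for exactly two parties.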

Generalizing from two to many parties poses a daunting challenge, because duality is a linchpin of the binary theory. To accomplish the generalization, Honda et al. [2008] replace duality with a new notion of consistency, called coherence. It is defined over global types that jointly describe the multiparty interaction. To prove coherence, one projects global types onto local types, which are much like the binary session types of old.

Session types abound in the literature, but there is hardly a canonical system. One approach to finding a canonical system equates session types with propositions [Caires and Pfenning 2010; Wadler 2012], developing a correspondence with logic in the spirit of Curry and Howard. Another approach encodes session-typed processes in a variant of the linear π-calculus from the previous lecture [Dardha et al. 2017].

Functional Reactive Programming

Donald Pinckney

9 Mar 2021

As computer hardware shrank in size, software-controlled reactive systems became a center of focus. In response, a flurry of research in the 1980s produced a new breed of languages, so-called “dataflow” languages [Benveniste et al. 1991; Berry and Gonthier 1991; Halbwachs et al. 1991; Wadge and Ashcroft 1985]. Behind their design was the idea of writing down equations that specify how input signal data flows to compute outputs—and of generating the program from these equations. (Esterel is alive and well: Airbus uses it to design some control software for its aircraft.)

In search of a GUI programming model for purely functional languages, Elliott and Hudak [1997] recognized the similarity of the problem to reactive control systems. Their adaptation of the idiom came to be known as Functional Reactive Programming (FRP). Since they were working with Haskell, FRP differed from the original dataflow languages in two ways: first, it exploited Haskell’s call-by-need semantics, and second, it added continuous time, which allowed for a natural expression of animations. In an implementation, however, continuous time rests on sampling, which poses a new semantic problem.

Wan and Hudak [2000] explored approximate sampling in FRP, specifically conditions under which a sampling-based semantics agrees with the continuous semantics. Their research uncovered pathological FRP terms, including terms that exhibit Zeno’s paradox. Wan et al. [2001] then identified a pathological class of terms needing unbounded space. The understanding of these classes of pathological terms provided much of the motivation for foundational FRP research going forward. Over the next two decades, a number of researchers [Krishnaswami et al. 2012; Ploeg and Claessen 2015] used a variety of techniques to eliminate these pathological classes of programs from FRP.

Independently of FRP, Hughes [1991] developed the notion of Arrows, a restricted computational model consisting of reified functions joined together by combinators. Nilsson et al. [2002] made use of Hughes’s Arrows to present an “arrowized” version of FRP. This revised semantics allowed for dynamically switching the dataflow graph structure without losing space bounds.

Early work on FRP rested on the implicit assumption that a call-by-need semantics was needed to program in this manner. Cooper and Krishnamurthi [2006] showed the independence of FRP from call-by-need and in the process exposed the fundamental implementation principles of graph-based re-computation in a higher-order world. This work inspired the implementations of Flapjax [Meyerovich et al. 2009] and Elm [Czaplicki and Chong 2013], practical FRP languages for building Web pages.
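
The graph-based re-computation at the heart of this line of work can be sketched directly in a call-by-value language. In this hypothetical miniature, each `Cell` holds a thunk and a list of dependents, and an update pushes through the graph; a real implementation additionally orders updates by node height to avoid glitches on diamond-shaped graphs.

```python
class Cell:
    """A node in the dataflow graph: a thunk plus its dependents."""
    def __init__(self, compute, inputs=()):
        self.compute = compute
        self.dependents = []
        for cell in inputs:
            cell.dependents.append(self)
        self.value = compute()

    def set(self, thunk):
        """Change this cell's definition and push the change downstream."""
        self.compute = thunk
        self._propagate()

    def _propagate(self):
        self.value = self.compute()
        for dep in self.dependents:
            dep._propagate()
```

After `a = Cell(lambda: 1)` and `b = Cell(lambda: a.value + 1, [a])`, the update `a.set(lambda: 10)` re-computes `b.value` to 11 without any explicit call by the programmer, which is the essential behavior of a dataflow GUI cell.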

Tracing JIT Compilation Compiler Generation

Emily Herbert

12 Mar 2021

abstract and citations needed

Meetings

16 Mar 2021

I will meet with every PhD student 1-1 on this day to discuss the reading list for your second topic. Here is the schedule:

time      student
09:50am   Cameron
10:00am   Olek
10:10am   Jared
10:20am   Nathan
10:40am   Andrew
10:50am   Donald
11:00am   Emily
11:10am   Lucy
11:20am   Ankit

The primary goal of this meeting is to go over your reading list for your topic, assess your understanding of the major steps, and to prune it if needed.

Programming Languages and Operating Systems

Lucy Amidon

19 Mar 2021

abstract and citations needed

Array Programming Languages

Ankit Kumar

23 Mar 2021

In 1957, Kenneth E. Iverson developed a mathematical notation to describe numerical processing for his courses at Harvard. The notation was implemented as the APL language during his time at IBM in 1965 [Iverson 1962].

APL, now mostly known for its distinct character set for mathematical operations, supports primitives for manipulating matrices of any rank, so-called rank polymorphism, also known as “implicit scaling.” APL operators deal with large multi-dimensional data as a unit, which allows the language to dispense with explicit control-flow structures such as iteration or recursion (for many classes of programs). They also eliminate the conventional von Neumann view of processing one word of data at a time.
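
Implicit scaling is the behavior that lets a scalar combine with an array of any rank without explicit loops. The following Python rendering over nested lists is illustrative only and far simpler than APL’s actual scaling rules.

```python
def scale(op, a, b):
    """Apply the scalar operation `op` pointwise, implicitly scaling a
    scalar argument up to the shape of an array argument."""
    if isinstance(a, list) and isinstance(b, list):
        return [scale(op, x, y) for x, y in zip(a, b)]
    if isinstance(a, list):
        return [scale(op, x, b) for x in a]
    if isinstance(b, list):
        return [scale(op, a, y) for y in b]
    return op(a, b)
```

For example, `scale(lambda x, y: x + y, [[1, 2], [3, 4]], 10)` adds 10 to every element of a rank-2 array; the user writes no iteration or recursion at all.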

Rank polymorphism relies on an ad-hoc semantics and a complex implementation. To eliminate some of this complexity, only APL’s built-in operators scale. But, with the introduction of nested arrays, it becomes difficult to control the grain of scaling. Two research projects address this problem in two different ways:
  • Thatte [1991] introduces a coercion-based semantics for implicit scaling, based on simple structural subtyping rules. This approach pragmatically eliminates ambiguities in a large class of programs.

  • Jay [1995] develops shape polymorphism and its semantics. This framework allows the parameterization of user-defined functions over “shapes,” but the shape of the result may depend only on the shape of the input(s) not their element type. While this framework supports the detection of shape errors at compile time, the class of programs benefitting from this technique is rather limited.

Modern Haskell accommodates various forms of shape polymorphism for arrays through the use of type classes and type families [Keller et al. 2010]. However, this implementation retains Jay’s restriction on shape polymorphism.

Remora [Slepak et al. 2014] is the first theoretical framework to overcome some of these limitations. In particular, it is expressive enough to describe array computations whose dimensions cannot be determined statically.

Kanren Relational Programming

Helena Dworak

26 Mar 2021

abstract and citations needed

How (Not) to Benchmark

Cameron Moy

30 Mar 2021

abstract and citations needed

Image-Based Programming Systems

Michael Ballantyne

2 Apr 2021

abstract and citations needed

Logical Relations

Olek Gierczak

6 Apr 2021

abstract and citations needed

Intermediate Compiler Forms for Functional Languages

Jared Gentner

9 Apr 2021

abstract and citations needed

Designing Multi-Language Systems

Andrew Wagner

13 Apr 2021

abstract and citations needed

Refinement Types

Donald Pinckney

16 Apr 2021

abstract and citations needed

The Evolution of Dependent Types

Lucy Amidon

20 Apr 2021

abstract and citations needed

The Implementation of Dependent-Type Proof Assistants

Ankit Kumar

23 Apr 2021

abstract and citations needed

Visual Teaching Languages

Emily Herbert

27 Apr 2021

abstract and citations needed

Bibliography

E. Beda, L. N. Korolev, N. V. Sukkikh, and T. S. Frolova. Programs For Automatic Differentiation For The Machine BESM. Institute for Precise Mechanics and Computation Techniques, Academy of Science, 1959.

Albert Benveniste, Paul Le Guernic, and Christian Jacquemot. Synchronous Programming with Events and Relations: the SIGNAL Language and Its Semantics. Science of Computer Programming 16(2), pp. 103–149, 1991. https://www.sciencedirect.com/science/article/pii/016764239190001E

Gérard Berry and Georges Gonthier. The Esterel Synchronous Programming Language: Design, Semantics, and Implementation. Science of Computer Programming 19(1), pp. 87–152, 1991. https://www.sciencedirect.com/science/article/pii/016764239290005V

Gavin M. Bierman, Martín Abadi, and Mads Torgersen. Understanding TypeScript. In Proc. European Symposium on Programming, pp. 257–281, 2014. https://dl.acm.org/doi/10.1007/978-3-662-44202-9_11

Christian Bischof, Alan Carle, George Corliss, Andreas Griewank, and Paul Hovland. ADIFOR–Generating Derivative Codes from Fortran Programs. Scientific Programming 1(1), pp. 11–29, 1992. https://www.hindawi.com/journals/sp/1992/717832/

Gilad Bracha, Martin Odersky, David Stoutamire, and Philip Wadler. Making the Future Safe for the Past: Adding Genericity to the Java Programming Language. In Proc. ACM Conference on Object-Oriented Programming, Systems, Languages and Applications, pp. 183–200, 1998. https://doi.org/10.1145/286936.286957

L. Caires and F. Pfenning. Session Types as Intuitionistic Linear Propositions. In Proc. International Conference on Concurrency Theory, pp. 222–236, 2010. https://www.cs.cmu.edu/~fp/papers/concur10.pdf

Peter Canning, William Cook, Walter Hill, Walter Olthoff, and John C. Mitchell. F-bounded Polymorphism for Object-Oriented Programming. In Proc. ACM International Conference on Functional Programming Languages and Computer Architecture, pp. 273–280, 1989. https://doi.org/10.1145/99370.99392

Luca Cardelli and Peter Wegner. On understanding types, data abstraction, and polymorphism. ACM Computing Surveys 17(4), pp. 471–523, 1985. https://doi.org/10.1145/6041.6042

Benjamin Chung, Paley Li, Francesco Zappa Nardelli, and Jan Vitek. KafKa: Gradual Typing for Objects. In Proc. European Conference on Object-Oriented Programming, 2018. https://hal.inria.fr/hal-01882148/document

Dominique Clement, Joelle Despeyroux, Thierry Despeyroux, Laurent Hascoet, and Gilles Kahn. Natural semantics on the computer. INRIA Sophia-Antipolis, RR-0416 00076140, 1985. https://hal.inria.fr/inria-00076140/document

William D. Clinger, Jonathan A. Rees, H. Abelson, R. Kent Dybvig, Christopher Thomas Haynes, Guillermo Juan Rozas, Norman I. Adams, Daniel Paul Friedman, Eugene E. Kohlbecker, Guy L. Steele Jr., David H. Bartley, Robert H. Halstead, D. Oxley, Gerald Jay Sussman, G. Brooks, Chris Hanson, Kent M. Pitman, and Mitch Wand. The revised4 report on SCHEME: a dialect of LISP. ACM SIGPLAN Lisp Pointers IV(3), pp. 1–55, 1991. https://dl.acm.org/doi/10.1145/382130.382133

Gregory H. Cooper and Shriram Krishnamurthi. Embedding Dynamic Dataflow in a Call-by-Value Language. In Proc. European Symposium on Programming, pp. 294–308, 2006. https://link.springer.com/chapter/10.1007/11693024_20

Ryan Culpepper and Matthias Felleisen. Fortifying macros. In Proc. ACM International Conference on Functional Programming, pp. 235–246, 2010. https://www2.ccs.neu.edu/racket/pubs/#c-jfp12

Evan Czaplicki and Stephen Chong. Asynchronous Functional Reactive Programming for Guis. In Proc. ACM Conference on Programming Language Design and Implementation, pp. 1–12, 2013. https://dl.acm.org/doi/10.1145/2499370.2462161

Luis Damas and Robin Milner. Principal Type-Schemes for Functional Programs. In Proc. ACM Symposium on Principles of Programming Languages, pp. 207–212, 1982. https://dl.acm.org/doi/10.1145/582153.582176

Ornela Dardha, Elena Giachino, and Davide Sangiorgi. Session Types Revisited. Information and Computation 256, 2017. https://www.sciencedirect.com/science/article/pii/S0890540117300962

Christos Dimoulas, Sam Tobin-Hochstadt, and Matthias Felleisen. Complete Monitors for Behavioral Contracts. In Proc. European Symposium on Programming, pp. 214–233, 2012.

R. Kent Dybvig. Writing hygienic macros in Scheme with syntax-case. Indiana University, Department of Computer Science, TR356, 1992. https://help.luddy.indiana.edu/techreports/TRNNN.cgi?trnum=TR356

R. Kent Dybvig, Robert Hieb, and Carl Bruggeman. Syntactic abstraction in Scheme. LISP and Symbolic Computation 5, pp. 295–326, 1993. https://link.springer.com/article/10.1007/BF01806308

Conal Elliott and Paul Hudak. Functional Reactive Animation. In Proc. ACM International Conference on Functional Programming, pp. 263–273, 1997. https://dl.acm.org/doi/10.1145/258948.258973

Uffe Engberg and Mogens Nielsen. A Calculus of Communicating Systems with Label-Passing. In Proof, Language, and Interaction: Essays in Honour of Robin Milner. MIT Press, pp. 599–622, 1986. https://pdfs.semanticscholar.org/802f/a6e557acc721d9dc04a2fdded5bf75240e8c.pdf also available as DAIMI Tech. Rpt. 208, University of Aarhus

Robert B. Findler and Matthias Felleisen. Contracts for Higher-Order Functions. In Proc. ACM International Conference on Functional Programming, pp. 48–59, 2002.

Robert Bruce Findler and Matthias Felleisen. Contract Soundness for Object-Oriented Languages. In Proc. ACM Conference on Object-Oriented Programming, Systems, Languages and Applications, pp. 1–15, 2001. https://doi.org/10.1145/504282.504283

Matthew Flatt. Bindings as sets of scopes. In Proc. ACM Symposium on Principles of Programming Languages, pp. 705–717, 2016. https://dl.acm.org/doi/10.1145/2837614.2837620

Kathryn E. Gray, Robert B. Findler, and Matthew Flatt. Fine-Grained Interoperability through Contracts and Mirrors. In Proc. ACM Conference on Object-Oriented Programming, Systems, Languages and Applications, pp. 231–245, 2005.

Ben Greenman and Matthias Felleisen. A Spectrum of Type Soundness and Performance. Proceedings of the ACM on Programming Languages (ICFP) 2(71), pp. 1–32, 2018.

Ben Greenman, Matthias Felleisen, and Christos Dimoulas. Complete Monitors for Gradual Types. In Proc. ACM Conference on Object-Oriented Programming, Systems, Languages and Applications, 2019.

Andreas Griewank. Achieving logarithmic growth of temporal and spatial complexity in reverse automatic differentiation. Optimization Methods and Software 1(1), pp. 35–54, 1992. https://doi.org/10.1080/10556789208805505

Andreas Griewank, David Juedes, and Jean Utke. Algorithm 755: ADOL-C: a package for the automatic differentiation of algorithms written in C/C++. Transactions on Mathematical Software 22(2), pp. 131–167, 1996. https://doi.org/10.1145/229473.229474

N. Halbwachs, P. Caspi, P. Raymond, and D. Pilaud. The Synchronous Data Flow Programming Language LUSTRE. Proceedings of the IEEE 79(9), pp. 1305–1320, 1991. https://ieeexplore.ieee.org/document/97300

Fritz Henglein. Type Inference with Polymorphic Recursion. ACM Transactions on Programming Languages and Systems 15(2), pp. 253–289, 1993. https://dl.acm.org/doi/10.1145/169701.169692

Kohei Honda. Types for Dyadic Interaction. In Proc. International Conference on Concurrency Theory, Lecture Notes in Computer Science 715, pp. 509–523, 1993. https://link.springer.com/chapter/10.1007/3-540-57208-2_35

Kohei Honda, Nobuko Yoshida, and Marco Carbone. Multiparty Asynchronous Session Types. In Proc. ACM Symposium on Principles of Programming Languages, pp. 273–284, 2008. https://www.doc.ic.ac.uk/~yoshida/multiparty/multiparty.pdf

Kohei Honda, Vasco T. Vasconcelos, and Makoto Kubo. Language Primitives and Type Discipline for Structured Communication-Based Programming. In Proc. European Symposium on Programming, pp. 122–138, 1998. https://link.springer.com/chapter/10.1007/BFb0053567

John Hughes. Generalising Monads to Arrows. Science of Computer Programming 37(1–3), pp. 67–111, 2000. https://www.sciencedirect.com/science/article/pii/S0167642399000234

Kenneth E. Iverson. A Programming Language. John Wiley & Sons, 1962. https://www.jsoftware.com/papers/APL.htm

C. Barry Jay. A Semantics for Shape. Science of Computer Programming 25(2), pp. 251–283, 1995. https://www.sciencedirect.com/science/article/pii/0167642395000151/

G. Keller, M. Chakravarty, R. Leshchinskiy, S. Peyton Jones, and B. Lippmeier. Regular, Shape-Polymorphic, Parallel Arrays in Haskell. In Proc. ACM International Conference on Functional Programming, pp. 261–272, 2010. https://dl.acm.org/doi/abs/10.1145/1863543.1863582

Naoki Kobayashi, Benjamin Pierce, and David Turner. Linearity and the PI-Calculus. ACM Transactions on Programming Languages and Systems 21(5), pp. 914–947, 1999. https://dl.acm.org/doi/pdf/10.1145/330249.330251

Eugene E. Kohlbecker, Daniel P. Friedman, Matthias Felleisen, and Bruce D. Duba. Hygienic macro expansion. In Proc. LISP and Functional Programming, pp. 151–161, 1986. https://dl.acm.org/doi/10.1145/319838.319859

Eugene E. Kohlbecker and Mitch Wand. Macros by example. In Proc. ACM Symposium on Principles of Programming Languages, pp. 77–84, 1987. https://dl.acm.org/doi/10.1145/41625.41632

Neelakantan R. Krishnaswami, Nick Benton, and Jan Hoffmann. Higher-Order Functional Reactive Programming in Bounded Space. In Proc. ACM Symposium on Principles of Programming Languages, pp. 45–58, 2012. https://dl.acm.org/doi/10.1145/2103656.2103665

Peter J. Landin. The mechanical evaluation of expressions. The Computer Journal 6(4), pp. 308–320, 1964. https://academic.oup.com/comjnl/article/6/4/308/375725

Harry Mairson. Deciding ML Typability is Complete for Deterministic Exponential Time. In Proc. ACM Symposium on Principles of Programming Languages, pp. 382–401, 1990. https://dl.acm.org/doi/10.1145/96709.96748

Oleksandr Manzyuk, Barak A. Pearlmutter, Alexey Andreyevich Radul, David R. Rush, and Jeffrey Mark Siskind. Perturbation Confusion in Forward Automatic Differentiation of Higher-Order Functions. Journal of Functional Programming 29, 2019. https://doi.org/10.1017/S095679681900008X

John McCarthy. Recursive functions of symbolic expressions and their computation by machine, Part I. Communications of the ACM 3(4), pp. 184–195, 1960. https://dl.acm.org/doi/10.1145/367177.367199

M. D. McIlroy. Mass Produced Software Components. In Proc. NATO Software Engineering Conference. NATO Scientific Affairs Division, pp. 79–87, 1968. https://www.scrummanager.net/files/nato1968e.pdf

Bertrand Meyer. Applying Design by Contract. IEEE Computer 25(10), pp. 45–51, 1992a.

Bertrand Meyer. Eiffel: The Language. Prentice Hall, 1992b.

Leo A. Meyerovich, Arjun Guha, Jacob Baskin, Gregory H. Cooper, Michael Greenberg, Aleks Bromfield, and Shriram Krishnamurthi. Flapjax: A Programming Language for Ajax Applications. In Proc. ACM Conference on Object-Oriented Programming, Systems, Languages and Applications, pp. 1–12, 2009. https://dl.acm.org/doi/10.1145/1639949.1640091

Robin Milner. Fully abstract models of typed λ-calculi. Theoretical Computer Science 4(1), pp. 1–22, 1977. https://www.sciencedirect.com/science/article/pii/0304397577900536

Robin Milner. A Theory of Type Polymorphism in Programming. Journal of Computer and System Sciences 17, pp. 348–375, 1978. https://homepages.inf.ed.ac.uk/wadler/papers/papers-we-love/milner-type-polymorphism.pdf

Robin Milner. A Calculus of Communicating Systems. Lecture Notes in Computer Science 92, Springer, 1980. https://link.springer.com/book/10.1007/3-540-10235-3

Robin Milner. The Polyadic PI-Calculus: A Tutorial. Logic and Algebra of Specification, pp. 203–246, 1991. https://link.springer.com/chapter/10.1007/978-3-642-58041-3_6

Robin Milner, Joachim Parrow, and David Walker. A Calculus of Mobile Processes. Information and Computation 100(1), pp. 1–40, 1992. https://reader.elsevier.com/reader/sd/pii/0890540192900084?token=63ED320EDFC5CD8F1E42C5F3129711055A0075BA0E46092EFCF9849E83802AB105792B4D135985A0EB8AF7EF9949E610

James Hiram Morris. Lambda-Calculus Models of Programming Languages. Ph.D. dissertation, Massachusetts Institute of Technology, 1968.

Henrik Nilsson, Antony Courtney, and John Peterson. Functional Reactive Programming, Continued. In Proc. ACM International Conference on Functional Programming, pp. 51–64, 2002. https://dl.acm.org/doi/10.1145/581690.581695

Martin Odersky and Philip Wadler. Pizza into Java: Translating Theory into Practice. In Proc. ACM Symposium on Principles of Programming Languages, pp. 146–159, 1997. https://doi.org/10.1145/263699.263715

D. L. Parnas. A Technique for Software Module Specification with Examples. Communications of the ACM 15(5), pp. 330–336, 1972. https://doi.org/10.1145/355602.361309

Benjamin Pierce and Davide Sangiorgi. Typing and Sub-typing of Mobile Processes. In Proc. Logic in Computer Science, pp. 376–385, 1994. https://ieeexplore.ieee.org/document/287570

Atze van der Ploeg and Koen Claessen. Practical Principled FRP: Forget the Past, Change the Future, FRPNow! In Proc. ACM International Conference on Functional Programming, pp. 302–314, 2015. https://dl.acm.org/doi/10.1145/2784731.2784752

Gordon D. Plotkin. Call-by-name, call-by-value, and the lambda calculus. Theoretical Computer Science 1(2), pp. 125–159, 1977a. https://homepages.inf.ed.ac.uk/gdp/publications/cbn_cbv_lambda.pdf

Gordon D. Plotkin. LCF considered as a programming language. Theoretical Computer Science 5(3), pp. 223–255, 1977b. https://homepages.inf.ed.ac.uk/gdp/publications/LCF.pdf

Gordon D. Plotkin. A structural approach to operational semantics. University of Aarhus, DAIMI FN-19, 1981. https://homepages.inf.ed.ac.uk/gdp/publications/SOS.ps

Louis B. Rall. Differentiation in PASCAL-SC. Transactions on Mathematical Software 10(2), pp. 161–184, 1984. https://doi.org/10.1145/399.418

John C. Reynolds. Definitional interpreters for higher-order programming languages. In Proc. ACM Annual Conference, pp. 717–740, 1972. https://dl.acm.org/doi/10.1145/800194.805852

John C. Reynolds. Towards a Theory of Type Structure. In Proc. Programming Symposium, pp. 408–425, 1974. https://doi.org/10.1007/3-540-06859-7_148

Jeremy G. Siek and Walid Taha. Gradual Typing for Functional Languages. In Proc. Workshop on Scheme and Functional Programming, pp. 81–92, 2006.

Jeffrey Mark Siskind and Barak A. Pearlmutter. First-Class Nonstandard Interpretations by Opening Closures. In Proc. ACM Symposium on Principles of Programming Languages, pp. 71–76, 2007a. https://doi.org/10.1145/1190216.1190230

Jeffrey Mark Siskind and Barak A. Pearlmutter. Lazy Multivariate Higher-Order Forward-Mode AD. In Proc. ACM Symposium on Principles of Programming Languages, pp. 155–160, 2007b. https://doi.org/10.1145/1190216.1190242

Justin Slepak, Olin Shivers, and Panagiotis Manolios. An Array-Oriented Language with Static Rank Polymorphism. In Proc. European Symposium on Programming, pp. 27–46, 2014. https://link.springer.com/chapter/10.1007/978-3-642-54833-8_3

B. Speelpenning. Compiling Fast Partial Derivatives of Functions Given by Algorithms. Ph.D. dissertation, University of Illinois, Urbana-Champaign, 1980. https://doi.org/10.2172/5254402

Joe Stoy. Denotational Semantics: The Scott-Strachey Approach to Programming Language Theory. MIT Press, 1977. https://dl.acm.org/doi/book/10.5555/540155

Kaku Takeuchi, Kohei Honda, and Makoto Kubo. An Interaction-based Language and its Typing System. In Proc. International Conference on Parallel Architectures and Languages Europe, pp. 398–413, 1994. https://link.springer.com/chapter/10.1007/3-540-58184-7_118

Satish Thatte. A Type System for Implicit Scaling. Science of Computer Programming 17(1), pp. 217–245, 1991. https://www.sciencedirect.com/science/article/pii/0167642391900405

Sam Tobin-Hochstadt and Matthias Felleisen. Interlanguage Migration: from Scripts to Programs. In Proc. Dynamic Languages Symposium, pp. 964–974, 2006.

Michael M. Vitousek, Andrew Kent, Jeremy G. Siek, and Jim Baker. Design and Evaluation of Gradual Typing for Python. In Proc. Dynamic Languages Symposium, pp. 45–56, 2014. https://wphomes.soic.indiana.edu/jsiek/files/2014/03/retic-python.pdf

Michael M. Vitousek, Cameron Swords, and Jeremy G. Siek. Big Types in Little Runtime: Open-World Soundness and Collaborative Blame for Gradual Type Systems. In Proc. ACM Symposium on Principles of Programming Languages, pp. 762–774, 2017. https://dl.acm.org/doi/10.1145/3009837.3009849

William W. Wadge and Edward A. Ashcroft. Lucid, the Dataflow Programming Language. Academic Press, 1985. http://worrydream.com/refs/Wadge%20-%20Lucid,%20the%20Dataflow%20Programming%20Language.pdf

Philip Wadler. Propositions as Sessions. In Proc. ACM International Conference on Functional Programming, pp. 273–286, 2012. https://dl.acm.org/doi/10.1145/2364527.2364568

Zhanyong Wan and Paul Hudak. Functional Reactive Programming from First Principles. In Proc. ACM Conference on Programming Language Design and Implementation, pp. 242–252, 2000. https://dl.acm.org/doi/10.1145/349299.349331

Zhanyong Wan, Walid Taha, and Paul Hudak. Real-time FRP. In Proc. ACM International Conference on Functional Programming, pp. 45–58, 2001. https://dl.acm.org/doi/10.1145/507669.507654

R. E. Wengert. A Simple Automatic Derivative Evaluation Program. Communications of the ACM 7, pp. 463–464, 1964. https://doi.org/10.1145/355586.364791