λ. ESOP 2018 Program Committee Workshop

Friday, 15 December, 2017
Inria Paris, Salle Jacques-Louis Lions 1+2
2 Rue Simone IFF
75012 Paris


Time Event
08:30–09:30 Breakfast
09:30–10:00 Suresh Jagannathan: Compositional Reasoning and Inference for Weak Isolation
10:00–10:30 Éric Tanter: Designing Gradual Languages
10:30–11:00 Niki Vazou: Gradual Liquid Types
11:00–11:30 Philipp Haller: Deterministic Concurrency and Large-Scale Static Analysis
11:30–12:00 Andreas Rossberg: WebAssembly: neither Web nor Assembly
12:00–14:00 Lunch Break
14:00–14:15 Coffee
14:15–14:30 Limin Jia: Stateful Network Verification without SMT
14:30–15:00 Jan Hoffmann: Resource Analysis for Probabilistic Programs
15:00–15:30 Cătălin Hriţcu: Formally Secure Compilation
15:30–16:00 Dominique Devriese: Parametricity vs. the Universal Type
16:00–16:30 Coffee Break
16:30–17:00 Marco Gaboardi: Relational Symbolic Execution
17:00–17:30 Naoki Kobayashi: Pumping Lemma for Higher-Order Languages


Compositional Reasoning and Inference for Weak Isolation

Suresh Jagannathan

Serializability is a well-understood correctness criterion that simplifies reasoning about the behaviour of concurrent transactions by ensuring they are isolated from each other while they execute. Unfortunately, databases often encourage developers to implement transactions using weaker alternatives. These alternatives break the strong isolation guarantees offered by serializability to permit greater concurrency. The semantics of weak isolation, however, is usually explained only informally in terms of low-level implementation artifacts; verifying the correctness of database applications becomes very challenging as a result.

To address this issue, we present a novel program logic that enables compositional reasoning about the behaviour of concurrently executing weakly-isolated transactions. We also describe an inference procedure based on this foundation that ascertains the weakest isolation level that still guarantees the safety of high-level consistency invariants associated with such transactions. The key to effective inference is the observation that weakly-isolated transactions can be viewed as monadic computations over an abstract database state, allowing us to treat their operations as state transformers over the database. This interpretation enables automated verification using off-the-shelf SMT solvers. Our results allow us to place weakly-isolated transactions on the same formal footing as their serializable counterparts.
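The state-transformer view of weakly-isolated transactions can be illustrated with a toy model. This is a minimal sketch, not the paper's formalism: all names (`read`, `write`, `interleavings`) are illustrative, transactions are lists of state transformers over an abstract database state, and op-level interleaving stands in for a weak isolation level. The classic "lost update" anomaly then falls out of the enumeration.

```python
# Each operation is a state transformer: (db_state, local_env) -> (db_state, local_env).
# Hypothetical names, chosen only to illustrate the monadic/state-transformer view.

def read(key, var):
    def step(s, env):
        env = dict(env)
        env[var] = s[key]          # copy a database value into the transaction's local env
        return s, env
    return step

def write(key, f):
    def step(s, env):
        s = dict(s)
        s[key] = f(env)            # write a value computed from the local env
        return s, env
    return step

def interleavings(t1, t2):
    # All op-level interleavings of two transactions (order preserved within each):
    # a crude model of non-serializable, weakly-isolated execution.
    if not t1:
        yield list(t2); return
    if not t2:
        yield list(t1); return
    for rest in interleavings(t1[1:], t2):
        yield [t1[0]] + rest
    for rest in interleavings(t1, t2[1:]):
        yield [t2[0]] + rest

def run(sched, s):
    envs = {}                      # per-transaction local environments
    for tid, op in sched:
        s, envs[tid] = op(s, envs.get(tid, {}))
    return s

# Two concurrent "deposit 100" transactions, each reading then writing a balance.
t1 = [(1, read("bal", "x")), (1, write("bal", lambda e: e["x"] + 100))]
t2 = [(2, read("bal", "x")), (2, write("bal", lambda e: e["x"] + 100))]

results = {run(sched, {"bal": 0})["bal"] for sched in interleavings(t1, t2)}
print(sorted(results))  # [100, 200] -- 100 is the lost update under weak isolation
```

Checking a high-level invariant (here, "the final balance reflects both deposits") against every interleaving is exactly the kind of question the paper's inference procedure discharges with an SMT solver rather than by enumeration.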

Designing Gradual Languages

Éric Tanter

Gradualizing type systems often involves a lot of guesswork. Based on an understanding of gradual types as abstractions of static types, we have developed systematic foundations for gradual typing based on abstract interpretation. This talk reports on recent work in applying this approach to a number of typing disciplines. In particular, we highlight the value of Galois connections in gradual language design, and identify open research questions.
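The abstraction in question can be sketched concretely. Following the standard "abstracting gradual typing" formulation (details vary per typing discipline), a gradual type is related to the set of static types it may stand for via a concretization γ and an abstraction α forming a Galois connection:

```latex
% Concretization: the unknown type ? denotes every static type,
% while a fully static type denotes only itself.
\gamma(\mathord{?}) = \mathrm{Type}
\qquad
\gamma(T) = \{\,T\,\} \ \text{for static } T

% The pair (alpha, gamma) is a Galois connection:
% soundness (no static type is lost) and optimality (alpha is the best abstraction).
S \subseteq \gamma(\alpha(S))
\qquad
\alpha(\gamma(G)) = G
```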

Gradual Liquid Types

Niki Vazou

We present gradual liquid type inference, a novel combination of refinement types with gradual refinements that range over a finite set of SMT-decidable predicates. This finiteness restriction allows for an algorithmic inference procedure in which all possibly valid interpretations of a gradual refinement are exhaustively checked. Thanks to this exhaustive search, we can detect the safe concretizations, i.e., the concrete refinements that justify why a program with gradual refinements is well-typed. We make the novel observation that gradual liquid type inference can be used for static liquid type error explanation, since the safe concretizations exhibit all the potential inconsistencies that lead to type errors. Based on Liquid Haskell, we implement gradual liquid type inference in GuiLT, a tool that interactively presents all the safe concretizations of gradual liquid types, and we demonstrate its utility for the user-guided migration of three commonly used Haskell list-manipulation libraries.
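The exhaustive search over a finite predicate set can be sketched in a few lines. This is an illustrative stand-in, not GuiLT: the candidate predicates and the `implies` check (which replaces a real SMT validity query with evaluation over a small finite domain) are hypothetical.

```python
# Finite set of candidate concretizations for a gradual refinement "?" on
# an integer (e.g. the length of a list passed to `head`). All names are
# illustrative; a real implementation would query an SMT solver instead.

CANDIDATES = {
    "true":   lambda n: True,
    "n > 0":  lambda n: n > 0,
    "n >= 0": lambda n: n >= 0,
    "n == 0": lambda n: n == 0,
}

def required(n):
    # Precondition demanded by the use site: `head` needs a non-empty list.
    return n > 0

def implies(p, q, domain=range(-50, 51)):
    # Stand-in for an SMT validity check: does p(n) entail q(n)?
    return all(q(n) for n in domain if p(n))

# Safe concretizations: the concrete refinements under which the program
# remains well-typed. The rejected candidates explain potential type errors.
safe = sorted(name for name, p in CANDIDATES.items() if implies(p, required))
print(safe)  # ['n > 0']
```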

Deterministic Concurrency and Large-Scale Static Analysis

Philipp Haller

Concurrent programming is infamous for its difficulty. An important source of difficulty is non-determinism, stemming from unpredictable interleavings of concurrent activities. Futures and promises are widely-used abstractions that help programmers create deterministic concurrent programs; however, deterministic execution cannot be guaranteed statically in mainstream programming languages. Deterministic-by-construction concurrent programming models avoid this issue, but they typically restrict expressiveness in important ways.

This talk presents a concurrent programming model, called Reactive Async, which is deterministic by construction. In this programming model, concurrent computations communicate using so-called cells: shared locations that generalize futures as well as recent deterministic abstractions such as LVars. Compared to previously proposed programming models, Reactive Async provides (a) a fallback mechanism for the case where no computation ever computes the value of a given cell, and (b) explicit and optimized handling of cyclic dependencies. The talk also reports on an implementation of the programming model as a Scala extension, and on our experience implementing large-scale, concurrent static analyses for JVM bytecode.
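The cell abstraction and its fallback mechanism can be modelled in a tiny sequential sketch. This is not the Scala API: the class and method names are invented for illustration, and real Reactive Async handles true concurrency, monotonic updates, and cyclic dependencies.

```python
# A toy, sequential model of "cells": a write-once location that notifies
# dependents, plus a pool-level quiescence step that applies fallbacks to
# cells no computation ever completed. All names are illustrative.

class Cell:
    def __init__(self, fallback=None):
        self.value = None
        self.complete = False
        self.fallback = fallback
        self.callbacks = []

    def on_complete(self, cb):
        if self.complete:
            cb(self.value)
        else:
            self.callbacks.append(cb)

    def put_final(self, v):
        if self.complete:
            assert self.value == v, "conflicting final writes"
            return
        self.value, self.complete = v, True
        for cb in self.callbacks:
            cb(v)

def quiesce(cells):
    # Fallback mechanism: deterministically resolve never-completed cells.
    for c in cells:
        if not c.complete and c.fallback is not None:
            c.put_final(c.fallback)

a = Cell()
b = Cell(fallback=0)   # nobody ever completes b; the fallback kicks in
total = Cell()

def try_sum(_):
    if a.complete and b.complete:
        total.put_final(a.value + b.value)

a.on_complete(try_sum)
b.on_complete(try_sum)
a.put_final(41)
quiesce([a, b, total])
print(total.value)  # 41
```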

Toward Implicit Incremental Computing with IODyn

Matthew Hammer

This talk will cover our group’s work in progress toward building higher-level, implicit abstractions for incremental computation. Our recent work has focused on IODyn, a general-purpose functional language for implicit incremental computing over dynamic collections (sets, maps, sequences, etc.). We are presently proving a metatheory for IODyn as a core calculus, and implementing IODyn as a DSL embedded in Rust. Using this prototype, we are implementing benchmarks for classic and novel incremental algorithms. The design and implementation of IODyn involve compilation, run-time, and algorithmic challenges, presenting novel problems in PL and algorithm design.

Stateful Network Verification without SMT

Limin Jia

Enterprise networks often use middleboxes such as firewalls, NATs, and load balancers to enforce reachability, reliability, and security policies. Formally verifying that desired policies are correctly implemented is challenging in the presence of stateful middleboxes. For instance, a stateful firewall may decide what to do with packets between two hosts based on prior connection state between them. Existing tools reduce the verification problem to SMT constraint solving. We approach this problem from a different angle; by removing the dependence on SMT solvers, we are able to develop an efficient verification tool for stateful networks using symbolic model checking. Our approach is connected to the query containment problem. Our initial evaluation shows that our tool can handle more expressive policies and is orders of magnitude faster than an SMT-based tool.
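The kind of stateful policy involved can be made concrete with a toy explicit-state model. This is not the authors' tool: the firewall model, hosts, and property below are invented for illustration, and brute-force enumeration of event orders stands in for symbolic model checking.

```python
# A toy stateful firewall: an inbound packet ("ext" -> "int") is delivered
# only if the internal host previously opened the connection. We check the
# policy over every event ordering by exhaustive enumeration (a stand-in
# for symbolic model checking). All names are illustrative.

from itertools import permutations

def firewall_run(events):
    established = set()   # reverse flows opened from inside
    delivered = []
    for (src, dst) in events:
        if src == "int":                   # outbound: always allowed
            established.add((dst, src))    # remember the reverse flow
            delivered.append((src, dst))
        elif (src, dst) in established:    # inbound on an open connection
            delivered.append((src, dst))
        # otherwise: dropped by the stateful firewall
    return delivered

events = [("int", "ext"), ("ext", "int"), ("ext", "int")]

# Policy: in every ordering, an inbound packet is delivered only after
# some outbound packet established the connection.
ok = True
for order in permutations(events):
    d = firewall_run(list(order))
    for i, p in enumerate(d):
        if p == ("ext", "int") and ("int", "ext") not in d[:i]:
            ok = False
print(ok)  # True: the policy holds in all orderings
```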

Resource Analysis for Probabilistic Programs

Jan Hoffmann

This talk presents a new static analysis for deriving upper bounds on the expected resource consumption of probabilistic programs. The analysis is fully automatic and derives symbolic bounds that are multivariate polynomials in the inputs. The new technique combines state-of-the-art manual reasoning techniques for probabilistic programs with an effective method for automatic resource-bound analysis of deterministic programs. It can be seen as both an extension of automatic amortized resource analysis (AARA) to probabilistic programs and an automation of weakest-precondition-based manual reasoning for probabilistic programs. An advantage of the technique is that it combines the clarity and compositionality of a weakest-precondition calculus with the efficient automation of AARA. As a result, bound inference can be reduced to off-the-shelf LP solving in many cases, and automatically derived bounds can be interactively extended with standard program logics when the automation fails. Building on existing work, the soundness of the analysis is proved with respect to an operational semantics based on Markov decision processes. The effectiveness of the technique is demonstrated with a prototype implementation that is used to automatically analyze 38 challenging probabilistic programs and randomized algorithms. Experimental results indicate that the derived constant factors in the bounds are very precise, and even optimal for many programs.
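As a toy instance of the kind of expected-cost reasoning involved (this simple recurrence is a textbook example, not one of the paper's benchmarks): consider a loop that consumes one resource unit per iteration and continues with probability 1/2. Its expected cost X satisfies a recurrence in the style of a weakest-preexpectation calculus:

```latex
% One unit is paid per iteration; with probability 1/2 the loop exits
% (remaining cost 0), with probability 1/2 it repeats (remaining cost X).
X = 1 + \tfrac{1}{2}\cdot 0 + \tfrac{1}{2}\cdot X
\;\Longrightarrow\;
X = 2
```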

WebAssembly: neither Web nor Assembly

Andreas Rossberg

WebAssembly is a new low-level byte code that offers compact representation, fast validation and compilation, and low overhead safe execution. Rather than committing to a specific programming model, WebAssembly is an abstraction over modern hardware, making it both language- and platform-independent. WebAssembly has been designed collaboratively as an open standard and has been defined with a complete formal semantics from the get-go.

I’ll present WebAssembly, its goals and features, and where we hope to take it.

Parametricity vs. the Universal Type

Dominique Devriese

There has long been speculation in the scientific literature on how to dynamically enforce parametricity, such as that enjoyed by System F. Almost 20 years ago, Sumii and Pierce proposed a formal compiler from System F into the cryptographic lambda calculus: an untyped lambda calculus extended with an idealised model of encryption. They conjectured that this compiler was fully abstract, i.e. that compiled terms are contextually equivalent if and only if the original terms are, a property that can be seen as a form of secure compilation. The conjecture has received attention in several other publications since then, but remains open to this day.

More recently, several researchers have been looking at gradually-typed languages that extend System F. In this setting it is natural to wonder whether embedding System F into these gradually-typed languages preserves contextual equivalence and thus parametricity.

In this paper, we answer both questions negatively. We provide a concrete counterexample: two System F terms whose contextual equivalence is preserved neither by the Sumii-Pierce compiler nor by the embedding into the polymorphic blame calculus. The counterexample relies on the absence in System F of what we call a “universal” type, i.e., a type into which all other types can be injected and from which they can be extracted. Because the languages into which System F is compiled do have a universal type, the compilation cannot be fully abstract; this paper explains why.

Relational Symbolic Execution

Marco Gaboardi

Symbolic execution is a classical program analysis technique, widely used for program testing and bug finding. In this work we generalize symbolic execution to support program analysis for relational properties, namely properties about two programs, or about two executions of a single program on different inputs. We design a relational symbolic execution engine, named RelSym, which supports testing and bug finding for relational properties of imperative programs with for-loops and arrays. RelSym also combines relational symbolic execution with program logics. This combination is useful not only to test but also to prove relational properties, by exploiting the finite structure of arrays and for-loops. We demonstrate the flexibility of RelSym by testing and proving relational properties for examples from different domains, such as information flow, program equivalence, sensitivity analysis, and relational cost analysis.
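A relational property of the kind RelSym targets can be illustrated with a minimal check. This is a sketch under stated assumptions, not RelSym itself: the program and function names are invented, and exhaustive evaluation over a small finite domain stands in for symbolic constraints over two executions.

```python
# Noninterference is a classic relational property: two runs of the same
# program that agree on the public input must agree on the public output,
# regardless of the secret input. Names here are purely illustrative.

def prog(pub, sec):
    x = pub * 2
    if sec > 0:
        y = x + 1 - 1   # secret-dependent branch with no public effect
    else:
        y = x
    return y            # public output

def noninterferent(prog, pubs, secs):
    # Relational check over pairs of executions: same public input,
    # arbitrary pair of secret inputs, equal public output.
    return all(prog(p, s1) == prog(p, s2)
               for p in pubs for s1 in secs for s2 in secs)

print(noninterferent(prog, range(-3, 4), range(-3, 4)))  # True
```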

Pumping Lemma for Higher-Order Languages

Naoki Kobayashi

We study a pumping lemma for the word/tree languages generated by higher-order grammars. Pumping lemmas are known up to order-2 word languages (i.e., for regular/context-free/indexed languages), and have been used to show that a given language does not belong to the classes of regular/context-free/indexed languages. We prove a pumping lemma for word/tree languages of arbitrary orders, modulo a conjecture that a higher-order version of Kruskal’s tree theorem holds. We also show that the conjecture indeed holds for the order-2 case, which yields a pumping lemma for order-2 tree languages and order-3 word languages. The proof makes use of various techniques for the simply-typed lambda-calculus, demonstrating the usefulness of programming language techniques for formal language theory. This is joint work with Kazuyuki Asada, presented at ICALP 2017.
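For reference, the base (order-0) case of the hierarchy is the classical pumping lemma for regular languages, which the abstract's result generalizes upward:

```latex
% For every regular language L there exists p >= 1 such that every
% w in L with |w| >= p factors as
w = xyz,
\quad |xy| \le p,
\quad |y| \ge 1,
\quad \text{and}\ \ x y^i z \in L \ \text{for all } i \ge 0.
```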

Formally Secure Compilation

Cătălin Hriţcu

Severe low-level vulnerabilities abound in today’s computer systems, allowing cyber-attackers to remotely gain full control. This happens in big part because our programming languages, compilers, and architectures were designed in an era of scarce hardware resources and too often trade off security for efficiency. The semantics of mainstream low-level languages like C is inherently insecure, and even for safer languages, establishing security with respect to a high-level semantics does not guarantee the absence of low-level attacks.

SECOMP is a new project aimed at leveraging emerging hardware capabilities for fine-grained protection to build the first efficient secure compilers for realistic low-level programming languages (the C language, and Low*, a safe subset of C embedded in F* for verification). These compilers will provide a more secure semantics for source programs and will ensure that high-level abstractions cannot be violated even when interacting with untrusted low-level code. To achieve this level of security without sacrificing efficiency, our secure compilers target a tagged architecture, which associates a metadata tag with each word and efficiently propagates and checks tags according to software-defined rules. We are using property-based testing and formal verification to provide high confidence that our compilers are indeed secure.

Formally, we are constructing machine-checked proofs in Coq of fully abstract compilation and of novel secure compilation criteria that ensure the preservation of large classes of hyperproperties even against an adversarial context. These strong properties complement compiler correctness and ensure that no machine-code attacker can do more harm to securely compiled components than a component already could with respect to a secure source-level semantics.
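The central criterion mentioned above can be stated compactly. Full abstraction requires the compiler ⟦·⟧ to both preserve and reflect contextual equivalence between source and target:

```latex
% A compiler [[ . ]] is fully abstract when, for all source terms e1, e2:
e_1 \simeq_{\mathrm{ctx}} e_2
\iff
\llbracket e_1 \rrbracket \simeq_{\mathrm{ctx}} \llbracket e_2 \rrbracket
```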


Lunch is not prearranged, but thanks to our Inria hosts, we have recommendations for places to eat. They fall into two categories: restaurants and quick eateries. Below, you’ll find a map of all the recommended places, and two categorized lists.


Quick Eateries