**Jan-Willem van de Meent**

Assistant Professor

Northeastern University

College of Computer and Information Science

440 Huntington Avenue

250 West Village H

Boston, MA 02115

+1 (617) 373 7696

**Group**

Eli Sennesh, Ph.D. Candidate

Babak Esmaeili, Ph.D. Candidate

Hao Wu, Ph.D. Candidate

I am an assistant professor in the College of Computer and Information Science at Northeastern University. I work on probabilistic programming frameworks: libraries that provide building blocks for model development in data science, machine learning, and artificial intelligence. I am one of the creators of Anglican, a probabilistic programming system that is closely integrated with Clojure. I am currently developing Probabilistic Torch, a library for deep generative models that extends PyTorch.

I was a postdoc in the group of Frank Wood at Oxford, and did my PhD in biophysics with Ray Goldstein (Cambridge) and Wim van Saarloos (Leiden). I have also worked on applications of machine learning in single-molecule biophysics at Columbia with Chris Wiggins and Ruben Gonzalez.
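
At its core, a probabilistic program pairs a generative model with a general-purpose inference backend. As a rough illustration of the idea in plain Python (not Anglican or Probabilistic Torch syntax), here is likelihood weighting for a coin with an unknown bias:

```python
import random

def likelihood_weighting(observations, num_samples=20000):
    """Estimate E[p | observations] for a coin with a Uniform(0, 1) prior
    on its bias p -- the basic importance-sampling building block behind
    many probabilistic-programming inference backends."""
    random.seed(0)
    total_w = total_wp = 0.0
    for _ in range(num_samples):
        p = random.random()              # sample a bias from the prior
        w = 1.0
        for obs in observations:         # weight by the Bernoulli likelihood
            w *= p if obs else (1.0 - p)
        total_w += w
        total_wp += w * p
    return total_wp / total_w

# 9 heads in 10 flips: the exact posterior mean is (9 + 1) / (10 + 2) = 0.833...
estimate = likelihood_weighting([1] * 9 + [0])
```

A framework automates exactly this bookkeeping: the user writes only the generative model, and the backend supplies the weighting, resampling, or gradient machinery.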

**MAY 2018 ∙** New pre-print by our students Babak, Hao, and Sarthak on learning structured disentangled representations [arxiv].

**DEC 2017 ∙** We have open-sourced *Probabilistic Torch* [github], a library for deep generative models that extends PyTorch. This release accompanies our paper at *NIPS* [paper].

**DEC 2017 ∙** Our extended abstract *“Inference Trees: Adaptive Inference with Exploration”* was accepted at the *NIPS workshop on Advances in Approximate Bayesian Inference* [website].

**NOV 2017 ∙** I am teaching *CS 7140 – Advanced Machine Learning* this spring [website].

**NOV 2017 ∙** I will speak at the *AAAI 2018 Workshop on Planning and Inference* in February [website].

**SEP 2017 ∙** Our paper *“Learning Disentangled Representations with Semi-Supervised Deep Generative Models”* has been accepted for publication at *NIPS* [arxiv].

Structured Disentangled Representations

Babak Esmaeili,
Hao Wu,
Sarthak Jain,
Alican Bozkurt,
N. Siddharth,
Brooks Paige,
Dana H. Brooks,
Jennifer Dy,
Jan-Willem van de Meent.

Deep latent-variable models learn representations of high-dimensional data in an unsupervised manner. A number of recent efforts have focused on learning representations that disentangle statistically independent axes of variation by introducing modifications to the standard objective function. These approaches generally assume a simple diagonal Gaussian prior and as a result are not able to reliably disentangle discrete factors of variation. We propose a two-level hierarchical objective to control the relative degree of statistical independence between blocks of variables and between individual variables within blocks.

[PDF]
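
The distinction the abstract draws, independence between blocks of variables versus independence among variables within a block, has a closed form for Gaussians, where total correlation (the KL divergence between a joint and the product of its marginals) can be computed analytically. A toy numerical illustration, not the paper's objective:

```python
import math

def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def total_correlation_2d(cov):
    """Total correlation (in nats) of a 2-dimensional Gaussian: the KL
    divergence between the joint and the product of its marginals."""
    return 0.5 * math.log(cov[0][0] * cov[1][1] / det2(cov))

# Two 2-dimensional latent blocks: variables are correlated *within*
# each block, but with a block-diagonal joint covariance the blocks
# themselves are statistically independent (between-block TC = 0).
block_a = [[1.0, 0.8], [0.8, 1.0]]
block_b = [[1.0, 0.5], [0.5, 1.0]]

within_a = total_correlation_2d(block_a)   # > 0: dependence inside block A
within_b = total_correlation_2d(block_b)   # > 0: weaker dependence in block B
```

A hierarchical objective can penalize these two kinds of dependence with different weights, which is the knob the abstract refers to.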

Inference Trees: Adaptive Inference with Exploration

Tom Rainforth,
Yuan Zhou,
Xiaoyu Lu,
Yee Whye Teh,
Frank Wood,
Hongseok Yang,
Jan-Willem van de Meent.

We introduce inference trees (ITs), a new adaptive Monte Carlo inference method building on ideas from Monte Carlo tree search. Unlike most existing methods which are implicitly based on pure exploitation, ITs explicitly aim to balance exploration and exploitation in the inference process, alleviating common pathologies and ensuring consistency. More specifically, ITs use bandit strategies to adaptively sample from hierarchical partitions of the parameter space, while simultaneously learning these partitions in an online manner.

[PDF]
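
The exploration-exploitation trade-off at the heart of this idea can be sketched with a classic UCB1 bandit choosing which region of a (fixed, pre-specified) partition to sample from next. This is only a loose illustration of the bandit component, not the inference trees algorithm itself:

```python
import math
import random

def ucb_sample_allocation(regions, num_rounds=2000, c=1.0, seed=0):
    """Allocate samples across partitions of a parameter space with UCB1.

    `regions` maps a region name to a sampler that returns a noisy score
    (e.g. an importance weight). UCB1 balances exploiting high-scoring
    regions with exploring rarely-visited ones -- the bandit idea that
    inference trees apply recursively to partitions learned online."""
    random.seed(seed)
    counts = {r: 0 for r in regions}
    means = {r: 0.0 for r in regions}
    for t in range(1, num_rounds + 1):
        def ucb(r):
            if counts[r] == 0:
                return float("inf")      # sample every region at least once
            return means[r] + c * math.sqrt(2.0 * math.log(t) / counts[r])
        region = max(regions, key=ucb)   # highest upper confidence bound
        score = regions[region]()        # draw one sample from that region
        counts[region] += 1
        means[region] += (score - means[region]) / counts[region]
    return counts

# A region holding most of the posterior mass vs. a nearly empty one:
counts = ucb_sample_allocation({
    "high-mass": lambda: random.gauss(1.0, 0.1),
    "low-mass": lambda: random.gauss(0.1, 0.1),
})
```

Pure exploitation would starve the low-scoring region entirely; the confidence bonus keeps revisiting it occasionally, which is what ensures consistency.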

Learning Disentangled Representations with Semi-Supervised Deep Generative Models

N. Siddharth,
Brooks Paige,
Jan-Willem van de Meent,
Alban Desmaison,
Noah D. Goodman,
Pushmeet Kohli,
Frank Wood,
Philip H.S. Torr

We propose to learn disentangled representations using model architectures that generalise from standard VAEs, employing a general graphical model structure in the encoder and decoder. This allows us to train partially-specified models that make relatively strong assumptions about a subset of interpretable variables and rely on the flexibility of neural networks to learn representations for the remaining variables.

Bayesian Optimization for Probabilistic Programs

We present the first general purpose framework for marginal maximum a posteriori estimation of probabilistic program variables. By using a series of code transformations, the evidence of any probabilistic program, and therefore of any graphical model, can be optimized with respect to an arbitrary subset of its sampled variables. To carry out this optimization, we develop the first Bayesian optimization package to directly exploit the source code of its target, leading to innovations in problem-independent hyperpriors, unbounded optimization, and implicit constraint satisfaction.

Design and Implementation of Probabilistic Programming Language Anglican


Black-Box Policy Search with Probabilistic Programs

In this work we show how to represent policies as programs: that is, as stochastic simulators with tunable parameters. To learn the parameters of such policies we develop connections between black box variational inference and existing policy search approaches. We then explain how such learning can be implemented in a probabilistic programming system. We demonstrate both conciseness of policy representation and automatic policy parameter learning for a set of canonical reinforcement learning problems.
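
The connection between black-box variational inference and policy search rests on the score-function (likelihood-ratio) gradient estimator, which needs only samples from the policy, never gradients of the simulator. A minimal plain-Python sketch for a one-parameter Bernoulli policy (an illustration of the estimator, not the paper's system):

```python
import math
import random

def reinforce_step(theta, num_episodes=5000, lr=0.5, seed=0):
    """One gradient step for a one-parameter stochastic policy.

    The policy takes action 1 with probability sigmoid(theta); action 1
    yields reward 1 and action 0 yields reward 0. The score-function
    estimator -- the same estimator used in black-box variational
    inference -- gives a gradient without differentiating through the
    simulator."""
    random.seed(seed)
    p = 1.0 / (1.0 + math.exp(-theta))          # sigmoid(theta)
    grad = 0.0
    for _ in range(num_episodes):
        action = 1 if random.random() < p else 0
        reward = float(action)
        score = action - p                      # d/dtheta log pi(action)
        grad += score * reward / num_episodes
    return theta + lr * grad

theta = 0.0
for _ in range(20):
    theta = reinforce_step(theta)
# theta increases, pushing the policy toward the rewarding action
```

Because the estimator treats the reward as a black box, the same update works when the policy is an arbitrary stochastic program with tunable parameters.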

Particle Gibbs with Ancestor Sampling for Probabilistic Programs

Particle Markov chain Monte Carlo techniques rank among current state-of-the-art methods for probabilistic program inference. A drawback of these techniques is that they rely on importance resampling, which results in degenerate particle trajectories and a low effective sample size for variables sampled early in a program. We here develop a formalism to adapt ancestor resampling, a technique that mitigates particle degeneracy, to the probabilistic programming setting.

[PDF]
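
The path-degeneracy problem the abstract mentions is easy to demonstrate: repeated resampling makes the particle set coalesce onto a handful of early ancestors. A small plain-Python sketch of the symptom (not of the ancestor-sampling fix):

```python
import random

def surviving_ancestors(num_particles=100, num_steps=50, seed=0):
    """Count how many distinct time-0 ancestors survive repeated resampling.

    Each step performs multinomial resampling with random weights. The
    set of distinct time-0 ancestors collapses quickly -- the path
    degeneracy that ancestor resampling is designed to mitigate."""
    random.seed(seed)
    ancestors = list(range(num_particles))   # each particle's time-0 ancestor
    for _ in range(num_steps):
        weights = [random.random() for _ in range(num_particles)]
        ancestors = random.choices(ancestors, weights=weights, k=num_particles)
    return len(set(ancestors))

# Far fewer than 100 distinct ancestors remain after 50 resampling steps,
# so variables sampled early in a program are represented by few samples.
remaining = surviving_ancestors()
```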

A New Approach to Probabilistic Programming Inference

We demonstrate a new approach to inference in expressive probabilistic programming languages based on particle Markov chain Monte Carlo. It applies to Turing-complete probabilistic programming languages and supports accurate inference in models that make use of complex control flow, including stochastic recursion. It also includes primitives from Bayesian nonparametric statistics. Our experiments show that this approach can be more efficient than previously introduced single-site Metropolis-Hastings methods.
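
The ingredient particle MCMC builds on is an unbiased marginal-likelihood estimate from a particle filter. A bootstrap filter for a toy Gaussian random-walk state-space model shows the shape of that estimate (a plain-Python sketch under stated model assumptions, not the paper's system):

```python
import math
import random

def bootstrap_log_marginal(observations, num_particles=500, seed=0):
    """Bootstrap particle filter estimate of log p(y_1:T) for a Gaussian
    random walk: x_t = x_{t-1} + N(0,1) with x_0 = 0, and y_t = x_t + N(0,1).

    Particle Markov chain Monte Carlo methods build an MCMC kernel around
    exactly this kind of unbiased marginal-likelihood estimate."""
    random.seed(seed)
    particles = [0.0] * num_particles
    log_z = 0.0
    for y in observations:
        particles = [x + random.gauss(0.0, 1.0) for x in particles]   # propagate
        weights = [math.exp(-0.5 * (y - x) ** 2) for x in particles]  # likelihood
        # the mean weight (with its Gaussian normalizer) estimates p(y_t | y_1:t-1)
        log_z += math.log(sum(weights) / num_particles / math.sqrt(2.0 * math.pi))
        particles = random.choices(particles, weights=weights, k=num_particles)
    return log_z

# The exact (Kalman-filter) value for these observations is about -4.04.
log_z = bootstrap_log_marginal([0.0, 0.0, 0.0])
```

In a probabilistic programming language, the propagate step is replaced by running the program forward between observations, which is what lets the method handle arbitrary control flow.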

Empirical Bayes Methods Enable Advanced Population-Level Analyses of Single-Molecule FRET Experiments
