indii.org
http://www.indii.org/
Lawrence Murray: software, research, photography.
Last updated: Tue, 20 Nov 2018 22:45:54 +0800

The Kimberley
http://www.indii.org/archives/the-kimberley/index.html
Tue, 20 Nov 2018 00:00:00 +0800 · Lawrence Murray
Gorges and waterholes in the remote north-west of Australia.

Automated learning with a probabilistic programming language: Birch
http://www.indii.org/research/automated-learning-with-a-probabilistic-programming-language-birch/index.html
Sat, 13 Oct 2018 00:00:00 +0800 · Lawrence Murray
This work offers a broad perspective on probabilistic modeling and inference in light of recent advances in probabilistic programming, in which models are formally expressed in Turing-complete programming languages. We consider a typical workflow and how probabilistic programming languages can help to automate it, especially in the matching of models with inference methods. We focus on two properties of a model that are critical in this matching: its structure (the conditional dependencies between random variables) and its form (the precise mathematical definition of those dependencies). While the structure and form of a probabilistic model are often fixed a priori, it is a curiosity of probabilistic programming that they need not be, and may instead vary according to random choices made during program execution. We introduce a formal description of models expressed as programs, and discuss some of the ways in which probabilistic programming languages can reveal their structure and form in order to tailor inference methods. We demonstrate the ideas in a new probabilistic programming language called Birch, using a multiple object tracking example.
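The idea that a model's structure need not be fixed a priori, and may instead vary with random choices made during execution, can be illustrated with a minimal sketch (plain Python rather than Birch; the model is purely illustrative):

```python
import random

def model():
    # The number of components is itself a random choice, so the set of
    # random variables instantiated (the model's structure) differs
    # between executions of the same program.
    n = random.randint(1, 3)
    components = [random.gauss(0.0, 1.0) for _ in range(n)]
    return n, sum(components)

# Each call may instantiate a different number of random variables.
draws = [model() for _ in range(5)]
```

An inference method matched to this program cannot assume a static dependency graph; it must discover structure as the program runs.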
Saltoluokta
http://www.indii.org/archives/saltoluokta/index.html
Sat, 13 Oct 2018 00:00:00 +0800 · Lawrence Murray
Summer north of the Arctic Circle.

Birch
http://www.birch-lang.org
Sat, 24 Mar 2018 00:00:00 +0800 · Lawrence Murray
An object-oriented, universal probabilistic programming language.

Sahara
http://www.indii.org/archives/sahara/index.html
Sun, 11 Mar 2018 00:00:00 +0800 · Lawrence Murray
Back to Morocco.

Improving the particle filter for high-dimensional problems using artificial process noise
http://www.indii.org/research/improving-the-particle-filter-for-high-dimensional-problems/index.html
Thu, 01 Mar 2018 00:00:00 +0800 · Lawrence Murray
The particle filter is one of the most successful methods for state inference and identification in general non-linear and non-Gaussian models. However, standard particle filters suffer from degeneracy of the particle weights in high-dimensional problems. We propose a method for improving the performance of the particle filter on certain challenging state space models, with implications for high-dimensional inference. First, we approximate the model by adding artificial process noise in an additional state update; then, we design a proposal that combines the standard and the locally optimal proposal. This results in a bias-variance trade-off, where adding more noise reduces the variance of the estimate but increases the model bias. The performance of the proposed method is evaluated on a linear Gaussian state space model and on the non-linear Lorenz '96 model. For both models we observe a significant improvement in performance over the standard particle filter.
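The artificial-noise idea can be sketched with a toy bootstrap particle filter on a scalar linear-Gaussian model. This is a minimal illustration only: the parameter `sigma_art` and the model constants are assumptions, and the paper's combined standard/locally-optimal proposal is not reproduced here.

```python
import numpy as np

def particle_filter(y, n_particles=500, a=0.9, sigma_proc=0.5,
                    sigma_obs=1.0, sigma_art=0.2, seed=0):
    """Bootstrap particle filter for x_t = a*x_{t-1} + v_t, y_t = x_t + e_t,
    with an extra artificial-noise update after the standard propagation."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)
    means = []
    for yt in y:
        # Standard state update.
        x = a * x + rng.normal(0.0, sigma_proc, n_particles)
        # Additional artificial process-noise update: reduces weight
        # degeneracy at the cost of extra model bias.
        x = x + rng.normal(0.0, sigma_art, n_particles)
        # Weight by the observation likelihood and normalise.
        logw = -0.5 * ((yt - x) / sigma_obs) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.dot(w, x)))
        # Multinomial resampling.
        x = x[rng.choice(n_particles, n_particles, p=w)]
    return means

est = particle_filter([0.3, -0.1, 0.8, 0.2])
```

Increasing `sigma_art` spreads the particles more widely before weighting, which flattens the weights (lower variance) while moving the filter further from the original model (higher bias).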
Under the Dhow Sail
http://www.indii.org/archives/under-the-dhow-sail/index.html
Mon, 01 Jan 2018 00:00:00 +0800 · Lawrence Murray
Sailing in Zanzibar.

Delayed Sampling and Automatic Rao-Blackwellization of Probabilistic Programs
http://www.indii.org/research/delayed-sampling-and-automatic-rao-blackwellization-of-probabilistic-programs/index.html
Fri, 22 Dec 2017 00:00:00 +0800 · Lawrence Murray
We introduce a dynamic mechanism for exploiting analytically-tractable substructure in probabilistic programs, to reduce variance in Monte Carlo estimators. For inference with Sequential Monte Carlo, it yields improvements such as locally-optimal proposals and Rao-Blackwellization, with little modification to model code. A directed graph is maintained alongside the running program, evolving dynamically as the program triggers operations upon it. Nodes of the graph represent random variables, and edges represent the analytically-tractable relationships between them (e.g. conjugate priors and affine transformations). Each random variable is held in the graph for as long as possible, and sampled only when used by the program in a context that cannot be resolved analytically. This allows it to be conditioned analytically on as many observations as possible before sampling. We have implemented the approach in both Anglican and a new probabilistic programming language named Birch. We demonstrate it on a number of small examples, and on a larger mixed linear-nonlinear state-space model.
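The delayed-sampling idea — hold a random variable symbolically, condition it analytically on observations, and sample only when a concrete value is actually needed — can be sketched for the Normal-Normal conjugate pair. This is a toy illustration, not the Anglican or Birch implementation:

```python
import random

class DelayedGaussian:
    """A Gaussian random variable kept in closed form until its value
    is demanded. Observations with known noise variance update the
    posterior analytically (Normal-Normal conjugacy)."""
    def __init__(self, mean, var):
        self.mean, self.var = mean, var
        self.realised = None

    def observe(self, y, obs_var):
        # Conjugate update: posterior precision is the sum of precisions.
        prec = 1.0 / self.var + 1.0 / obs_var
        self.mean = (self.mean / self.var + y / obs_var) / prec
        self.var = 1.0 / prec

    def value(self):
        # Sample lazily, only when the program needs a concrete value,
        # and cache the realisation thereafter.
        if self.realised is None:
            self.realised = random.gauss(self.mean, self.var ** 0.5)
        return self.realised

mu = DelayedGaussian(0.0, 10.0)
mu.observe(1.2, 1.0)   # conditioned analytically, no sampling yet
mu.observe(0.8, 1.0)
x = mu.value()         # sampled from the posterior only here
```

In the paper's mechanism such nodes live in a directed graph alongside the program, so chains of conjugate relationships (and affine transformations) can be resolved analytically before any sampling occurs.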
Riddarholmen
http://www.indii.org/archives/riddarholmen/index.html
Wed, 01 Nov 2017 00:00:00 +0800 · Lawrence Murray
The island of Riddarholmen in central Stockholm, Sweden.

Better together? Statistical learning in models made of modules
http://www.indii.org/research/better-together-statistical-learning-in-models-made-of-modules/index.html
Tue, 29 Aug 2017 00:00:00 +0800 · Lawrence Murray
In modern applications, statisticians are faced with integrating heterogeneous data modalities relevant for an inference, prediction, or decision problem. In such circumstances, it is convenient to use a graphical model to represent the statistical dependencies, via a set of connected “modules”, each relating to a specific data modality, and drawing on specific domain expertise in their development. In principle, given data, the conventional statistical update then allows for coherent uncertainty quantification and information propagation through and across the modules. However, misspecification of any module can contaminate the estimate and update of others, often in unpredictable ways. In various settings, particularly when certain modules are trusted more than others, practitioners have preferred to avoid learning with the full model in favor of approaches that restrict the information propagation between modules, for example by restricting propagation to only particular directions along the edges of the graph. In this article, we investigate why these modular approaches might be preferable to the full model in misspecified settings. We propose principled criteria to choose between modular and full-model approaches. The question arises in many applied settings, including large stochastic dynamical systems, meta-analysis, epidemiological models, air pollution models, pharmacokinetics-pharmacodynamics, and causal inference with propensity scores.
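The restriction of information propagation between modules can be sketched with a two-module toy in which the first module estimates a parameter from its own data and the second module receives that estimate as a plug-in, with no feedback in the other direction. The model and function names are illustrative assumptions, not from the paper:

```python
import statistics

def cut_two_stage(data1, data2):
    # Module 1: estimate theta from its own data only.
    theta = statistics.mean(data1)
    # Module 2: estimate eta given the plugged-in theta. Because data2
    # cannot feed back into theta, misspecification of module 2 cannot
    # contaminate the module-1 estimate, unlike in the full joint model.
    eta = statistics.mean(d - theta for d in data2)
    return theta, eta

theta, eta = cut_two_stage([1.0, 2.0, 3.0], [5.0, 6.0])
```

A full-model analysis would pool both datasets when learning theta, which is more efficient when both modules are well specified but propagates any misspecification of module 2 into theta.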