
Lund University Publications


Sequential Neural Posterior and Likelihood Approximation

Wiqvist, Samuel; Picchini, Umberto and Frellsen, Jes (2021)
Abstract
We introduce the sequential neural posterior and likelihood approximation (SNPLA) algorithm. SNPLA is a normalizing-flows-based algorithm for inference in implicit models, and is therefore a simulation-based inference method that only requires simulations from a generative model. SNPLA avoids the Markov chain Monte Carlo sampling and the correction steps for the parameter proposal function that are introduced in similar methods, but that can be numerically unstable or restrictive. By utilizing the reverse KL divergence, SNPLA learns both the likelihood and the posterior in a sequential manner. Over four experiments, we show that SNPLA performs competitively when utilizing the same number of model simulations as other methods, even though the inference problem for SNPLA is more complex due to the joint learning of the posterior and likelihood functions. Because it uses normalizing flows, SNPLA generates posterior draws much faster (by four orders of magnitude) than MCMC-based methods.
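The abstract's key mechanism — fitting a posterior approximation by minimizing the reverse KL divergence over reparameterized draws, rather than via MCMC or proposal corrections — can be sketched on a toy problem. The following is not the authors' implementation: SNPLA trains normalizing flows against a jointly learned likelihood, whereas this sketch shrinks everything to a one-dimensional Gaussian approximation and a known conjugate model, so that convergence to the exact posterior is checkable. All function and variable names here are illustrative choices, not from the paper.

```python
import math
import random

# Toy model (assumed for illustration only):
#   prior:      theta ~ N(0, 1)
#   likelihood: y | theta ~ N(theta, 1)
# Exact posterior for a single observation y: N(y / 2, 1 / 2).
# We fit q = N(mu, sigma^2) by stochastic gradient descent on the
# reverse KL, KL(q || p(theta | y)), estimated with reparameterized
# draws theta = mu + sigma * eps -- the same divergence the abstract
# credits for avoiding MCMC and proposal-correction steps.

def fit_reverse_kl(y, steps=4000, n_mc=64, lr=0.01, seed=0):
    rng = random.Random(seed)
    mu, log_sigma = 0.0, 0.0                 # variational parameters
    for _ in range(steps):
        sigma = math.exp(log_sigma)
        g_mu, g_ls = 0.0, 0.0
        for _ in range(n_mc):
            eps = rng.gauss(0.0, 1.0)
            theta = mu + sigma * eps         # reparameterized draw from q
            # d/dtheta [log p(y|theta) + log p(theta)] = (y - theta) - theta
            score = (y - theta) - theta
            g_mu += -score                   # Monte Carlo grad of KL w.r.t. mu
            g_ls += -score * sigma * eps     # ... and w.r.t. log sigma
        g_mu /= n_mc
        g_ls = g_ls / n_mc - 1.0             # -1 from the entropy of q
        mu -= lr * g_mu
        log_sigma -= lr * g_ls
    return mu, math.exp(log_sigma)

mu, sigma = fit_reverse_kl(y=2.0)
# should approach the exact posterior: mean 1.0, std sqrt(0.5)
```

In SNPLA the closed-form likelihood above is replaced by a second normalizing flow trained from model simulations, and q itself is a flow rather than a Gaussian, but the reverse-KL objective and reparameterized sampling play the same role.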
author
Wiqvist, Samuel; Picchini, Umberto and Frellsen, Jes
organization
publishing date
2021
type
Working paper/Preprint
publication status
published
subject
pages
28 pages
publisher
arXiv.org
language
English
LU publication?
yes
id
3e1f3fd5-7ad0-4a4c-9e32-7a3c63627265
alternative location
https://arxiv.org/abs/2102.06522v2
date added to LUP
2021-08-27 13:17:32
date last changed
2022-02-10 15:14:49
@misc{3e1f3fd5-7ad0-4a4c-9e32-7a3c63627265,
  abstract     = {{We introduce the sequential neural posterior and likelihood approximation (SNPLA) algorithm. SNPLA is a normalizing flows-based algorithm for inference in implicit models, and therefore is a simulation-based inference method that only requires simulations from a generative model. SNPLA avoids Markov chain Monte Carlo sampling and correction-steps of the parameter proposal function that are introduced in similar methods, but that can be numerically unstable or restrictive. By utilizing the reverse KL divergence, SNPLA manages to learn both the likelihood and the posterior in a sequential manner. Over four experiments, we show that SNPLA performs competitively when utilizing the same number of model simulations as used in other methods, even though the inference problem for SNPLA is more complex due to the joint learning of posterior and likelihood function. Due to utilizing normalizing flows SNPLA generates posterior draws much faster (4 orders of magnitude) than MCMC-based methods.}},
  author       = {{Wiqvist, Samuel and Picchini, Umberto and Frellsen, Jes}},
  language     = {{eng}},
  month        = {{06}},
  note         = {{Preprint}},
  publisher    = {{arXiv.org}},
  title        = {{Sequential Neural Posterior and Likelihood Approximation}},
  url          = {{https://arxiv.org/abs/2102.06522v2}},
  year         = {{2021}},
}