Rao-Blackwellisation of particle Markov chain Monte Carlo methods using forward filtering backward sampling
(2011) In IEEE Transactions on Signal Processing 59(10), pp. 4606–4619
- Abstract
Smoothing in state-space models amounts to computing the conditional distribution of the latent state trajectory, given observations, or expectations of functionals of the state trajectory with respect to this distribution. In recent years there has been an increased interest in Monte Carlo-based methods, often involving particle filters, for approximate smoothing in nonlinear and/or non-Gaussian state-space models. One such method is to approximate filter distributions using a particle filter and then to simulate, using backward kernels, a state trajectory backwards on the set of particles. We show that by simulating multiple realizations of the particle filter and adding a Metropolis-Hastings step, one obtains a Markov chain Monte Carlo scheme whose stationary distribution is the exact smoothing distribution. This procedure expands upon a similar one recently proposed by Andrieu, Doucet, Holenstein, and Whiteley. We also show that simulating multiple trajectories from each realization of the particle filter can be beneficial from a perspective of variance versus computation time, and illustrate this idea using two examples.
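The forward filtering backward sampling idea in the abstract — run a particle filter forward, then draw a trajectory backwards over the stored particles using backward kernels — can be sketched as follows. This is a minimal illustration on an assumed linear-Gaussian model with a bootstrap particle filter; the model, parameters, and variable names are illustrative choices, not the authors' setup, and the Metropolis-Hastings and multiple-trajectory extensions from the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed linear-Gaussian state-space model (for illustration only):
#   x_t = a * x_{t-1} + v_t,  v_t ~ N(0, q)
#   y_t = x_t + w_t,          w_t ~ N(0, r)
a, q, r = 0.9, 1.0, 0.5
T, N = 50, 200  # time steps, number of particles

# Simulate synthetic data from the model
x = np.zeros(T)
y = np.zeros(T)
x[0] = rng.normal(0.0, 1.0)
y[0] = x[0] + rng.normal(0.0, np.sqrt(r))
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0.0, np.sqrt(q))
    y[t] = x[t] + rng.normal(0.0, np.sqrt(r))

def log_norm(z, mean, var):
    """Log-density of N(mean, var) evaluated at z."""
    return -0.5 * (np.log(2 * np.pi * var) + (z - mean) ** 2 / var)

# --- Forward pass: bootstrap particle filter, storing particles and weights ---
parts = np.zeros((T, N))
logw = np.zeros((T, N))
parts[0] = rng.normal(0.0, 1.0, N)
logw[0] = log_norm(y[0], parts[0], r)
for t in range(1, T):
    w = np.exp(logw[t - 1] - logw[t - 1].max())
    idx = rng.choice(N, N, p=w / w.sum())  # multinomial resampling
    parts[t] = a * parts[t - 1][idx] + rng.normal(0.0, np.sqrt(q), N)
    logw[t] = log_norm(y[t], parts[t], r)

# --- Backward pass: sample one trajectory over the stored particles ---
traj = np.zeros(T)
wT = np.exp(logw[-1] - logw[-1].max())
traj[-1] = parts[-1, rng.choice(N, p=wT / wT.sum())]
for t in range(T - 2, -1, -1):
    # Backward-kernel weights: filter weight times transition density
    # to the already-sampled next state.
    lb = logw[t] + log_norm(traj[t + 1], a * parts[t], q)
    wb = np.exp(lb - lb.max())
    traj[t] = parts[t, rng.choice(N, p=wb / wb.sum())]
```

A single backward draw like `traj` is one sample from the particle approximation of the smoothing distribution; the paper's contribution is to embed repeated draws of this kind in a Metropolis-Hastings scheme whose stationary distribution is the exact smoothing distribution.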
Please use this url to cite or link to this publication:
https://lup.lub.lu.se/record/2224264
- author
- Olsson, Jimmy and Rydén, Tobias
- organization
- publishing date
- 2011
- type
- Contribution to journal
- publication status
- published
- subject
- keywords
- Hidden Markov models, Trajectory, Smoothing methods, Signal processing algorithms, Markov processes, Kernel, Joints
- in
- IEEE Transactions on Signal Processing
- volume
- 59
- issue
- 10
- pages
- 4606 - 4619
- publisher
- IEEE - Institute of Electrical and Electronics Engineers Inc.
- external identifiers
- wos:000297111500009
- scopus:80052890727
- ISSN
- 1053-587X
- DOI
- 10.1109/TSP.2011.2161296
- language
- English
- LU publication?
- yes
- id
- f2bba21a-64b6-476b-ad55-a1fc139ef777 (old id 2224264)
- date added to LUP
- 2016-04-01 10:26:23
- date last changed
- 2022-01-25 23:09:31
@article{f2bba21a-64b6-476b-ad55-a1fc139ef777,
  abstract  = {{Smoothing in state-space models amounts to computing the conditional distribution of the latent state trajectory, given observations, or expectations of functionals of the state trajectory with respect to this distribution. In recent years there has been an increased interest in Monte Carlo-based methods, often involving particle filters, for approximate smoothing in nonlinear and/or non-Gaussian state-space models. One such method is to approximate filter distributions using a particle filter and then to simulate, using backward kernels, a state trajectory backwards on the set of particles. We show that by simulating multiple realizations of the particle filter and adding a Metropolis-Hastings step, one obtains a Markov chain Monte Carlo scheme whose stationary distribution is the exact smoothing distribution. This procedure expands upon a similar one recently proposed by Andrieu, Doucet, Holenstein, and Whiteley. We also show that simulating multiple trajectories from each realization of the particle filter can be beneficial from a perspective of variance versus computation time, and illustrate this idea using two examples.}},
  author    = {{Olsson, Jimmy and Rydén, Tobias}},
  issn      = {{1053-587X}},
  keywords  = {{Hidden Markov models; Trajectory; Smoothing methods; Signal processing algorithms; Markov processes; Kernel; Joints}},
  language  = {{eng}},
  number    = {{10}},
  pages     = {{4606--4619}},
  publisher = {{IEEE - Institute of Electrical and Electronics Engineers Inc.}},
  series    = {{IEEE Transactions on Signal Processing}},
  title     = {{Rao-Blackwellisation of particle Markov chain Monte Carlo methods using forward filtering backward sampling}},
  url       = {{http://dx.doi.org/10.1109/TSP.2011.2161296}},
  doi       = {{10.1109/TSP.2011.2161296}},
  volume    = {{59}},
  year      = {{2011}},
}