
Lund University Publications


Learning Continuous Normalizing Flows For Faster Convergence To Target Distribution via Ascent Regularizations

Chen, Shuangshuang; Ding, Sihao; Karayiannidis, Yiannis and Björkman, Mårten (2023)
Abstract
Normalizing flows (NFs) have been shown to be advantageous in modeling complex distributions and improving sampling efficiency for unbiased sampling. In this work, we propose a new class of continuous NFs, ascent continuous normalizing flows (ACNFs), which make a base distribution converge faster to a target distribution. As solving such a flow exactly is non-trivial and rarely feasible, we propose a practical implementation that learns flexibly parametrized ACNFs via ascent regularization, and apply it in two learning cases: maximum likelihood learning for density estimation and reverse KL divergence minimization for unbiased sampling and variational inference. The learned ACNFs demonstrate faster convergence towards the target distributions, thereby achieving better density estimation, unbiased sampling and variational approximation at lower computational cost. Furthermore, the flows are shown to stabilize themselves, mitigating performance deterioration, and are less sensitive to the choice of the training flow length T.
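For intuition, the following is a minimal, hypothetical sketch of the second learning case described in the abstract: a continuous flow discretized with Euler steps in plain PyTorch, trained by reverse KL minimization plus an ascent-style regularizer that pulls the learned vector field toward the gradient (ascent direction) of the target log-density. Everything here is an illustrative assumption — the toy target log_p_target, the network field, the regularizer form and its weight lam are not the ACNF objective as defined in the paper.

# Hypothetical sketch only -- not the authors' implementation. A continuous
# normalizing flow is discretized with Euler steps; training minimizes the
# reverse KL to a toy target plus an assumed "ascent" regularizer pulling
# the vector field toward the gradient of the target log-density.
import torch
import torch.nn as nn

dim, T, n_steps = 2, 1.0, 20                 # flow length T, Euler discretization
dt = T / n_steps

def log_p_target(x):                         # toy target: N(2, I), up to a constant
    return -0.5 * ((x - 2.0) ** 2).sum(-1)

field = nn.Sequential(nn.Linear(dim + 1, 64), nn.Tanh(), nn.Linear(64, dim))

def flow(x0):
    """Integrate dx/dt = f(x, t); track the log-density change via the divergence."""
    x = x0.clone().requires_grad_(True)
    logdet = torch.zeros(x0.shape[0])
    for k in range(n_steps):
        t = torch.full((x.shape[0], 1), k * dt)
        v = field(torch.cat([x, t], dim=-1))
        div = sum(torch.autograd.grad(v[:, i].sum(), x, create_graph=True)[0][:, i]
                  for i in range(dim))        # exact divergence (cheap for small dim)
        x = x + dt * v
        logdet = logdet + dt * div
    return x, logdet

base = torch.distributions.Normal(torch.zeros(dim), torch.ones(dim))
opt = torch.optim.Adam(field.parameters(), lr=1e-3)
lam = 0.1                                    # regularization weight (assumed)

for step in range(500):
    x0 = base.sample((256,))
    xT, logdet = flow(x0)
    log_q = base.log_prob(x0).sum(-1) - logdet         # pushed-forward log-density
    rev_kl = (log_q - log_p_target(xT)).mean()         # reverse KL up to a constant
    # Assumed regularizer: match the terminal vector field to the ascent
    # direction of log p_target at the flowed samples.
    xd = xT.detach().requires_grad_(True)
    grad_logp = torch.autograd.grad(log_p_target(xd).sum(), xd)[0]
    vT = field(torch.cat([xT.detach(), torch.full((xT.shape[0], 1), T)], dim=-1))
    reg = ((vT - grad_logp) ** 2).sum(-1).mean()
    loss = rev_kl + lam * reg
    opt.zero_grad(); loss.backward(); opt.step()

In the paper's other learning case, the reverse KL term would be replaced by a maximum likelihood objective on observed data; the ascent regularization idea carries over analogously.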
Please use this url to cite or link to this publication:
author
Chen, Shuangshuang; Ding, Sihao; Karayiannidis, Yiannis and Björkman, Mårten
organization
publishing date
2023-02
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
host publication
The Eleventh International Conference on Learning Representations, ICLR
project
RobotLab LTH
language
English
LU publication?
yes
id
354f3a68-01ec-4e61-80c6-932d9114f540
alternative location
https://openreview.net/forum?id=6iEoTr-jeB7
date added to LUP
2024-09-27 20:20:45
date last changed
2025-04-04 15:08:03
@inproceedings{354f3a68-01ec-4e61-80c6-932d9114f540,
  abstract     = {{Normalizing flows (NFs) have been shown to be advantageous in modeling complex distributions and improving sampling efficiency for unbiased sampling. In this work, we propose a new class of continuous NFs, ascent continuous normalizing flows (ACNFs), which make a base distribution converge faster to a target distribution. As solving such a flow exactly is non-trivial and rarely feasible, we propose a practical implementation that learns flexibly parametrized ACNFs via ascent regularization, and apply it in two learning cases: maximum likelihood learning for density estimation and reverse KL divergence minimization for unbiased sampling and variational inference. The learned ACNFs demonstrate faster convergence towards the target distributions, thereby achieving better density estimation, unbiased sampling and variational approximation at lower computational cost. Furthermore, the flows are shown to stabilize themselves, mitigating performance deterioration, and are less sensitive to the choice of the training flow length T.}},
  author       = {{Chen, Shuangshuang and Ding, Sihao and Karayiannidis, Yiannis and Björkman, Mårten}},
  booktitle    = {{The Eleventh International Conference on Learning Representations, ICLR}},
  language     = {{eng}},
  month        = {{02}},
  title        = {{Learning Continuous Normalizing Flows For Faster Convergence To Target Distribution via Ascent Regularizations}},
  url          = {{https://openreview.net/forum?id=6iEoTr-jeB7}},
  year         = {{2023}},
}