Lund University Publications

A brief note on understanding neural networks as Gaussian processes

Guo, Mengwu (2021)
Abstract
As a generalization of the work in [Lee et al., 2017], this note briefly discusses when the prior of a neural network output follows a Gaussian process, and how a neural-network-induced Gaussian process is formulated. The posterior mean functions of such a Gaussian process regression lie in the reproducing kernel Hilbert space defined by the neural-network-induced kernel. In the case of two-layer neural networks, the induced Gaussian processes provide an interpretation of the reproducing kernel Hilbert spaces whose union forms a Barron space.
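
To make the correspondence concrete, here is a minimal sketch of the two-layer case discussed in the abstract. It is not taken from the note itself: it assumes the standard result that an infinitely wide two-layer ReLU network with i.i.d. Gaussian weights and biases induces the first-order arc-cosine kernel (Cho & Saul, 2009), and that the posterior mean of Gaussian process regression with this kernel is a kernel expansion, hence an element of the induced reproducing kernel Hilbert space. The function names, hyperparameters (sigma_w2, sigma_b2, noise_var), and the toy data are illustrative choices, not from the paper.

import numpy as np

def relu_nngp_kernel(X1, X2, sigma_w2=1.0, sigma_b2=0.1):
    # Pre-activation covariances of the hidden layer under i.i.d.
    # N(0, sigma_w2 / d) weights and N(0, sigma_b2) biases.
    d = X1.shape[1]
    K12 = sigma_b2 + sigma_w2 * (X1 @ X2.T) / d
    K11 = sigma_b2 + sigma_w2 * np.sum(X1 ** 2, axis=1) / d
    K22 = sigma_b2 + sigma_w2 * np.sum(X2 ** 2, axis=1) / d
    norms = np.sqrt(np.outer(K11, K22))
    theta = np.arccos(np.clip(K12 / norms, -1.0, 1.0))
    # E[relu(u) relu(u')] for jointly Gaussian (u, u'): the
    # first-order arc-cosine kernel (Cho & Saul, 2009).
    return norms * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2.0 * np.pi)

def gp_posterior_mean(X_train, y_train, X_test, noise_var=1e-2):
    # Standard GP regression. The posterior mean is a finite kernel
    # expansion sum_i alpha_i k(., x_i) and therefore lies in the
    # RKHS defined by the neural-network-induced kernel.
    K = relu_nngp_kernel(X_train, X_train)
    alpha = np.linalg.solve(K + noise_var * np.eye(len(X_train)), y_train)
    return relu_nngp_kernel(X_test, X_train) @ alpha

# Toy 1-D regression with the induced kernel (illustrative data).
X = np.linspace(-1.0, 1.0, 20).reshape(-1, 1)
y = np.sin(np.pi * X).ravel()
print(gp_posterior_mean(X, y, np.array([[0.3]])))

Deeper architectures compose such layer-wise expectations recursively, as in [Lee et al., 2017]; the two-layer case is singled out above because its RKHSs are the ones whose union the abstract relates to a Barron space.
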
author: Guo, Mengwu
publishing date: 2021
type: Working paper/Preprint
publication status: published
subject:
publisher: arXiv.org
language: English
LU publication?: no
id: 7dc2740e-1df7-493a-abe9-3a9a5f3041a8
alternative location: https://arxiv.org/abs/2107.11892
date added to LUP: 2024-03-23 22:08:21
date last changed: 2024-04-17 14:45:13

@misc{7dc2740e-1df7-493a-abe9-3a9a5f3041a8,
  abstract     = {{As a generalization of the work in [Lee et al., 2017], this note briefly discusses when the prior of a neural network output follows a Gaussian process, and how a neural-network-induced Gaussian process is formulated. The posterior mean functions of such a Gaussian process regression lie in the reproducing kernel Hilbert space defined by the neural-network-induced kernel. In the case of two-layer neural networks, the induced Gaussian processes provide an interpretation of the reproducing kernel Hilbert spaces whose union forms a Barron space.}},
  author       = {{Guo, Mengwu}},
  language     = {{eng}},
  note         = {{Preprint}},
  publisher    = {{arXiv.org}},
  title        = {{A brief note on understanding neural networks as Gaussian processes}},
  url          = {{https://arxiv.org/abs/2107.11892}},
  year         = {{2021}},
}