
Lund University Publications


πBO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization

Hvarfner, Carl (LU); Stoll, Danny; Souza, Artur; Lindauer, Marius; Hutter, Frank and Nardi, Luigi (LU) (2022) Tenth International Conference on Learning Representations, ICLR 2022
Abstract
Bayesian optimization (BO) has become an established framework and popular tool for hyperparameter optimization (HPO) of machine learning (ML) algorithms. While known for its sample efficiency, vanilla BO cannot utilize readily available prior beliefs the practitioner has about the potential location of the optimum. Thus, BO disregards a valuable source of information, reducing its appeal to ML practitioners. To address this issue, we propose πBO, an acquisition function generalization which incorporates prior beliefs about the location of the optimum in the form of a probability distribution provided by the user. In contrast to previous approaches, πBO is conceptually simple and can easily be integrated with existing libraries and many acquisition functions. We provide regret bounds when πBO is applied to the common Expected Improvement acquisition function and prove convergence at regular rates independently of the prior. Further, our experiments show that πBO outperforms competing approaches across a wide suite of benchmarks and prior characteristics. We also demonstrate that πBO improves on the state-of-the-art performance for a popular deep learning task, with a 12.5× time-to-accuracy speedup over prominent BO approaches.
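The abstract describes πBO as a generalization that weights an arbitrary acquisition function by the user's prior over the optimum, with the prior's influence decaying as observations accumulate. A minimal sketch of that idea, assuming a decay exponent of the form beta / n; the function names and the `beta` hyperparameter are illustrative, not an existing library API:

```python
import numpy as np

def pibo_acquisition(acq, prior_pdf, x, n, beta=10.0):
    """Prior-weighted acquisition values at points x for iteration n >= 1.

    The raw acquisition is multiplied by the user's prior raised to a power
    beta / n, so the prior dominates early and vanishes as data accumulates.
    """
    return acq(x) * prior_pdf(x) ** (beta / n)

# Toy illustration: a flat stand-in acquisition (e.g. in place of EI) and an
# unnormalized Gaussian belief that the optimum lies near x = 0.7.
x = np.linspace(0.0, 1.0, 101)
acq = lambda x: np.ones_like(x)
prior = lambda x: np.exp(-0.5 * ((x - 0.7) / 0.1) ** 2)

early = pibo_acquisition(acq, prior, x, n=1)     # prior dominates the choice
late = pibo_acquisition(acq, prior, x, n=1000)   # nearly the vanilla acquisition
```

Early on, the weighted acquisition peaks where the user's belief is concentrated; for large n the exponent beta / n approaches zero and the weighting flattens toward 1, recovering the unmodified acquisition function, which is why convergence rates can be kept independent of the prior.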
author
Hvarfner, Carl; Stoll, Danny; Souza, Artur; Lindauer, Marius; Hutter, Frank and Nardi, Luigi
organization
publishing date
2022-04
type
Contribution to conference
publication status
published
subject
conference name
Tenth International Conference on Learning Representations, ICLR 2022
conference location
Virtual
conference dates
2022-04-25 - 2022-04-29
language
English
LU publication?
yes
id
b7c069d0-3447-4e49-a087-1fd5e932e729
date added to LUP
2022-09-19 13:33:58
date last changed
2022-10-10 14:06:56
@misc{b7c069d0-3447-4e49-a087-1fd5e932e729,
  abstract     = {{Bayesian optimization (BO) has become an established framework and popular tool for hyperparameter optimization (HPO) of machine learning (ML) algorithms. While known for its sample efficiency, vanilla BO cannot utilize readily available prior beliefs the practitioner has about the potential location of the optimum. Thus, BO disregards a valuable source of information, reducing its appeal to ML practitioners. To address this issue, we propose πBO, an acquisition function generalization which incorporates prior beliefs about the location of the optimum in the form of a probability distribution provided by the user. In contrast to previous approaches, πBO is conceptually simple and can easily be integrated with existing libraries and many acquisition functions. We provide regret bounds when πBO is applied to the common Expected Improvement acquisition function and prove convergence at regular rates independently of the prior. Further, our experiments show that πBO outperforms competing approaches across a wide suite of benchmarks and prior characteristics. We also demonstrate that πBO improves on the state-of-the-art performance for a popular deep learning task, with a 12.5× time-to-accuracy speedup over prominent BO approaches.}},
  author       = {{Hvarfner, Carl and Stoll, Danny and Souza, Artur and Lindauer, Marius and Hutter, Frank and Nardi, Luigi}},
  language     = {{eng}},
  month        = {{04}},
  title        = {{πBO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization}},
  year         = {{2022}},
}