
Lund University Publications


Neos: End-to-End-Optimised Summary Statistics for High Energy Physics

Simpson, Nathan and Heinrich, Lukas (2023). 20th International Workshop on Advanced Computing and Analysis Techniques in Physics Research, ACAT 2021. Journal of Physics: Conference Series 2438, 012105.
Abstract


The advent of deep learning has yielded powerful tools to automatically compute gradients of computations. This is because training a neural network equates to iteratively updating its parameters using gradient descent to find the minimum of a loss function. Deep learning is thus a subset of a broader paradigm: a workflow with free parameters that is end-to-end optimisable, provided one can keep track of the gradients all the way through. This work introduces neos, an example implementation of this paradigm: a fully differentiable high-energy physics workflow capable of optimising a learnable summary statistic with respect to the expected sensitivity of an analysis. This results in an optimisation process that is aware of the modelling and treatment of systematic uncertainties.
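
The abstract describes propagating gradients through an entire analysis chain: a neural network, histogramming, and statistical inference. A minimal sketch of that idea in JAX (which the neos project builds on) is given below. Everything here is illustrative: the function names, the sigmoid-smoothed histogram, and the simplified s/sqrt(b) objective are assumptions made for this example, not the neos API; the actual package differentiates through a full pyhf-based profile-likelihood fit, which is how systematic uncertainties enter the optimisation.

import jax
import jax.numpy as jnp

# Hypothetical sketch; none of these names belong to the neos package.

def init_params(key, sizes=(2, 32, 1)):
    # Small fully connected network producing a 1-D observable.
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (n_in, n_out)) * 0.1,
                       jnp.zeros(n_out)))
    return params

def network(params, x):
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return jax.nn.sigmoid(x @ w + b).squeeze(-1)  # observable in (0, 1)

def soft_hist(obs, edges, width=0.02):
    # Differentiable histogram: sigmoid-smoothed bin membership instead
    # of hard counts, so gradients flow through the binning step.
    lo = jax.nn.sigmoid((obs[:, None] - edges[:-1]) / width)
    hi = jax.nn.sigmoid((obs[:, None] - edges[1:]) / width)
    return (lo - hi).sum(axis=0)

def loss(params, sig, bkg, edges):
    s = soft_hist(network(params, sig), edges)
    b = soft_hist(network(params, bkg), edges)
    # Toy objective: maximise per-bin s / sqrt(b). The real neos instead
    # optimises an expected-sensitivity figure obtained from a
    # profile-likelihood fit, including systematic uncertainties.
    return -jnp.sum(s / jnp.sqrt(b + 1e-3))

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
sig = jax.random.normal(k1, (1000, 2)) + 1.0   # toy signal events
bkg = jax.random.normal(k2, (1000, 2)) - 1.0   # toy background events
edges = jnp.linspace(0.0, 1.0, 6)

params = init_params(k3)
grads = jax.grad(loss)(params, sig, bkg, edges)  # end-to-end gradients
params = jax.tree_util.tree_map(lambda p, g: p - 0.01 * g, params, grads)

The last two lines are the whole point of the paradigm: jax.grad differentiates through the network, the histogramming, and the objective in one pass, so a single gradient-descent update moves the learned summary statistic directly towards better analysis sensitivity.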

author: Simpson, Nathan; Heinrich, Lukas
publishing date: 2023
type: Chapter in Book/Report/Conference proceeding
publication status: published
host publication: Journal of Physics: Conference Series
volume: 2438
article number: 012105
conference name: 20th International Workshop on Advanced Computing and Analysis Techniques in Physics Research, ACAT 2021
conference location: Daejeon, Republic of Korea (virtual)
conference dates: 2021-11-29 to 2021-12-03
external identifiers: scopus:85149727518
DOI: 10.1088/1742-6596/2438/1/012105
language: English
LU publication?: yes
id: 2388ed46-b5af-4df5-bcdf-b6ba9c08e765
date added to LUP: 2023-04-06 13:23:34
date last changed: 2023-04-06 13:23:59
@inproceedings{2388ed46-b5af-4df5-bcdf-b6ba9c08e765,
  abstract     = {{The advent of deep learning has yielded powerful tools to automatically compute gradients of computations. This is because training a neural network equates to iteratively updating its parameters using gradient descent to find the minimum of a loss function. Deep learning is thus a subset of a broader paradigm: a workflow with free parameters that is end-to-end optimisable, provided one can keep track of the gradients all the way through. This work introduces neos, an example implementation of this paradigm: a fully differentiable high-energy physics workflow capable of optimising a learnable summary statistic with respect to the expected sensitivity of an analysis. This results in an optimisation process that is aware of the modelling and treatment of systematic uncertainties.}},
  author       = {{Simpson, Nathan and Heinrich, Lukas}},
  booktitle    = {{Journal of Physics: Conference Series}},
  language     = {{eng}},
  title        = {{Neos: End-to-End-Optimised Summary Statistics for High Energy Physics}},
  url          = {{http://dx.doi.org/10.1088/1742-6596/2438/1/012105}},
  doi          = {{10.1088/1742-6596/2438/1/012105}},
  volume       = {{2438}},
  year         = {{2023}},
}