
LUP Student Papers

LUND UNIVERSITY LIBRARIES

Self adaptive numerical methods with reinforcement learning

Andersson, Mathias LU (2019) In Bachelor's Theses in Mathematical Sciences NUMK11 20191
Centre for Mathematical Sciences
Abstract
To find approximate solutions to initial value problems (IVPs), one can use a wide range of numerical methods. A special group of IVPs, referred to as stiff, can be solved numerically by a chain of methods: an implicit method, followed by Newton's method, and finally the Generalized Minimal Residual method (GMRES). For GMRES to be time-efficient, a preconditioner must be applied to the system of equations. However, computing a preconditioner is itself time-consuming, and since the Jacobian matrix does not change much between Newton iterations, the same preconditioner can be reused for several of them. There is currently no method to determine exactly when it is most time-efficient to compute a new preconditioner and when to continue with the current one. This paper explores two methods that aim to approximate the point at which a new preconditioner should be computed, in order to save time when solving the IVP. Both methods are based on estimating the future time cost: the first estimates it from previous Newton and GMRES iterations, while the second estimates it by approximating the Lipschitz constant. In the test done in this paper, both methods are shown to slightly decrease the computation time, and arguments are given for why they should work in other cases as well.
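
The abstract describes a concrete pipeline (implicit Euler, Newton's method, preconditioned GMRES) and a decision about when to rebuild the preconditioner. The Python sketch below is not taken from the thesis; it only illustrates that structure under simple assumptions. The names implicit_euler, f, jac and gmres_budget are hypothetical, and the rule "rebuild the preconditioner when GMRES needs too many iterations" is a crude stand-in for the cost-estimation strategies the thesis actually investigates.

# A minimal sketch, NOT the thesis code: implicit Euler steps solved by
# Newton's method, with each linear system solved by ILU-preconditioned
# GMRES. The preconditioner is reused across Newton iterations and time
# steps and only rebuilt when the GMRES iteration count exceeds a
# hypothetical budget.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def implicit_euler(f, jac, y0, t0, t_end, h, gmres_budget=20):
    """f(t, y) is the right-hand side, jac(t, y) its (dense) Jacobian."""
    y, t = np.asarray(y0, dtype=float), float(t0)
    n = y.size
    M = None                                   # current preconditioner
    while t < t_end:
        y_new = y.copy()                       # Newton initial guess
        for _ in range(10):                    # Newton iterations
            res = y_new - y - h * f(t + h, y_new)   # implicit Euler residual
            if np.linalg.norm(res) < 1e-10:
                break
            A = sp.csc_matrix(np.eye(n) - h * np.asarray(jac(t + h, y_new)))
            if M is None:                      # (re)compute ILU preconditioner
                ilu = spla.spilu(A)
                M = spla.LinearOperator((n, n), matvec=ilu.solve)
            iters = [0]
            def count(_):                      # count GMRES iterations
                iters[0] += 1
            dy, _ = spla.gmres(A, -res, M=M, atol=1e-10, callback=count)
            y_new = y_new + dy
            if iters[0] > gmres_budget:        # preconditioner looks stale,
                M = None                       # force a rebuild next solve
        y, t = y_new, t + h
    return y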
author: Andersson, Mathias (LU)
supervisor:
organization:
course: NUMK11 20191
year: 2019
type: M2 - Bachelor Degree
subject:
keywords: Self adaptive, numerical methods, GMRES, Newton's method, The implicit Euler method, preconditioning, Burgers' equation
publication/series: Bachelor's Theses in Mathematical Sciences
report number: LUNFNA-4028-2016
ISSN: 1654-6229
other publication id: 2016:K25
language: English
id: 8994826
date added to LUP: 2024-10-03 16:44:58
date last changed: 2024-10-03 16:44:58
@misc{8994826,
  abstract     = {{To find approximate solutions to initial value problems (IVPs), one can use a wide range of numerical methods. A special group of IVPs, referred to as stiff, can be solved numerically by a chain of methods: an implicit method, followed by Newton's method, and finally the Generalized Minimal Residual method (GMRES). For GMRES to be time-efficient, a preconditioner must be applied to the system of equations. However, computing a preconditioner is itself time-consuming, and since the Jacobian matrix does not change much between Newton iterations, the same preconditioner can be reused for several of them. There is currently no method to determine exactly when it is most time-efficient to compute a new preconditioner and when to continue with the current one. This paper explores two methods that aim to approximate the point at which a new preconditioner should be computed, in order to save time when solving the IVP. Both methods are based on estimating the future time cost: the first estimates it from previous Newton and GMRES iterations, while the second estimates it by approximating the Lipschitz constant. In the test done in this paper, both methods are shown to slightly decrease the computation time, and arguments are given for why they should work in other cases as well.}},
  author       = {{Andersson, Mathias}},
  issn         = {{1654-6229}},
  language     = {{eng}},
  note         = {{Student Paper}},
  series       = {{Bachelor's Theses in Mathematical Sciences}},
  title        = {{Self adaptive numerical methods with reinforcement learning}},
  year         = {{2019}},
}