
Lund University Publications


Generalized Accelerated Gradient Methods for Distributed MPC Based on Dual Decomposition

Giselsson, Pontus and Rantzer, Anders (2014) p. 309-325
Abstract
We consider distributed model predictive control (DMPC) where a sparse centralized optimization problem without a terminal cost or a terminal constraint set is solved in a distributed fashion. Distribution of the optimization algorithm is enabled by dual decomposition. Gradient methods are usually used to solve the dual problem resulting from dual decomposition. However, gradient methods are known for their slow convergence rate, especially for ill-conditioned problems. This is not desirable in DMPC, where the amount of communication should be kept as low as possible. In this chapter, we present a distributed optimization algorithm for the optimization problems arising in DMPC that has a significantly better convergence rate than the classical gradient method. This improved convergence rate is achieved by using accelerated gradient methods instead of standard gradient methods and by incorporating, in a well-defined manner, Hessian information into the gradient iterations. We also present a stopping condition for the distributed optimization algorithm that ensures feasibility, stability, and closed-loop performance of the DMPC scheme without using a stabilizing terminal cost or terminal constraint set.
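
To make the setting concrete, the following is a minimal sketch, not taken from the chapter, of a Nesterov-type accelerated gradient ascent on the dual of an equality-coupled quadratic program. The names H, g, A, b and accelerated_dual_ascent are illustrative assumptions; the chapter's generalized method additionally builds Hessian information into the dual step and adds a stopping condition, both of which this sketch omits.

# Minimal sketch (not the chapter's exact algorithm): Nesterov-accelerated
# dual ascent for an equality-coupled QP
#   minimize    0.5 x'Hx + g'x
#   subject to  Ax = b          (coupling constraints that are dualized)
# The dual gradient at lambda is A x*(lambda) - b, where
#   x*(lambda) = argmin_x 0.5 x'Hx + g'x + lambda'(Ax - b) = -H^{-1}(g + A'lambda).
import numpy as np

def accelerated_dual_ascent(H, g, A, b, iters=200):
    """Return (x, lam) after `iters` accelerated dual gradient steps."""
    Hinv = np.linalg.inv(H)                 # in DMPC, H is block diagonal, so this is a set of local solves
    L = np.linalg.norm(A @ Hinv @ A.T, 2)   # Lipschitz constant of the dual gradient
    lam = np.zeros(A.shape[0])              # dual iterate
    y = lam.copy()                          # extrapolated (momentum) point
    t = 1.0
    for _ in range(iters):
        x = -Hinv @ (g + A.T @ y)           # primal minimizer at the extrapolated duals
        grad = A @ x - b                    # dual gradient (ascent direction)
        lam_next = y + grad / L             # gradient step on the concave dual
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t**2))
        y = lam_next + ((t - 1.0) / t_next) * (lam_next - lam)  # Nesterov momentum
        lam, t = lam_next, t_next
    x = -Hinv @ (g + A.T @ lam)             # primal iterate at the final dual variables
    return x, lam

Because each dual iteration corresponds to one round of communication between coupled subsystems, the O(1/k^2) convergence of the accelerated scheme translates directly into fewer communication rounds than the O(1/k) rate of the plain gradient method, which is the motivation stated in the abstract above.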
author: Giselsson, Pontus and Rantzer, Anders
organization:
publishing date: 2014
type: Chapter in Book/Report/Conference proceeding
publication status: published
subject:
host publication: Distributed Model Predictive Control Made Easy
editor: Maestre, José M. and Negenborn, Rudy R.
pages: 309-325
publisher: Springer
external identifiers: scopus:84896528510
ISBN: 978-94-007-7005-8
DOI: 10.1007/978-94-007-7006-5_19
project: LCCC
language: English
LU publication?: yes
id: d01f4a03-8493-47eb-8ae1-a25e6ef17e61 (old id 4926692)
date added to LUP: 2016-04-04 12:02:07
date last changed: 2023-10-18 16:18:07
@inbook{d01f4a03-8493-47eb-8ae1-a25e6ef17e61,
  abstract     = {{We consider distributed model predictive control (DMPC) where a sparse centralized optimization problem without a terminal cost or a terminal constraint set is solved in a distributed fashion. Distribution of the optimization algorithm is enabled by dual decomposition. Gradient methods are usually used to solve the dual problem resulting from dual decomposition. However, gradient methods are known for their slow convergence rate, especially for ill-conditioned problems. This is not desirable in DMPC, where the amount of communication should be kept as low as possible. In this chapter, we present a distributed optimization algorithm for the optimization problems arising in DMPC that has a significantly better convergence rate than the classical gradient method. This improved convergence rate is achieved by using accelerated gradient methods instead of standard gradient methods and by incorporating, in a well-defined manner, Hessian information into the gradient iterations. We also present a stopping condition for the distributed optimization algorithm that ensures feasibility, stability, and closed-loop performance of the DMPC scheme without using a stabilizing terminal cost or terminal constraint set.}},
  author       = {{Giselsson, Pontus and Rantzer, Anders}},
  booktitle    = {{Distributed Model Predictive Control Made Easy}},
  editor       = {{Maestre, José M. and Negenborn, Rudy R.}},
  isbn         = {{978-94-007-7005-8}},
  language     = {{eng}},
  pages        = {{309--325}},
  publisher    = {{Springer}},
  title        = {{Generalized Accelerated Gradient Methods for Distributed MPC Based on Dual Decomposition}},
  url          = {{http://dx.doi.org/10.1007/978-94-007-7006-5_19}},
  doi          = {{10.1007/978-94-007-7006-5_19}},
  year         = {{2014}},
}