
Lund University Publications


Graph-based design of hierarchical reinforcement learning agents

Tateo, Davide; Erdenlig, Idil Su and Bonarini, Andrea (2019) 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019. In IEEE International Conference on Intelligent Robots and Systems, p. 1003-1009
Abstract

There is increasing interest in using Reinforcement Learning to solve new and more challenging problems, such as those emerging in robotics and unmanned autonomous vehicles. To face these complex systems, a hierarchical and multi-scale representation is crucial. This has drawn attention to Hierarchical Deep Reinforcement Learning systems. Despite their successful applications, Deep Reinforcement Learning systems suffer from a variety of drawbacks: they are data hungry, they lack interpretability, and it is difficult to derive theoretical properties about their behavior. Classical Hierarchical Reinforcement Learning approaches, while not suffering from these drawbacks, are often suited only for finite action and state spaces. Furthermore, in most works there is no systematic way to represent domain knowledge, which is often embedded only in the reward function. We present a novel Hierarchical Reinforcement Learning framework based on the hierarchical design approach typical of control theory. We developed our framework by extending the block diagram representation of control systems to fit the needs of a Hierarchical Reinforcement Learning scenario, thus making it possible to integrate domain knowledge into an effective hierarchical architecture.

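To make the block-diagram idea mentioned in the abstract concrete, here is a minimal illustrative sketch of a two-level hierarchy expressed as connected "blocks", where a high-level block produces a subgoal and a low-level block tracks it. All names and interfaces below (Block, subgoal_policy, tracking_controller) are hypothetical and do not reflect the paper's actual framework or API.

```python
# Illustrative sketch only: a toy control graph in the spirit of a
# block-diagram design for hierarchical agents. Hypothetical interface,
# not the paper's implementation.
from dataclasses import dataclass, field
from typing import Callable, List
import random


@dataclass
class Block:
    """A node in the control graph: maps an input signal to an output signal."""
    name: str
    step: Callable[[list], list]
    inputs: List["Block"] = field(default_factory=list)

    def __call__(self, signal):
        # Feed the signal through all upstream blocks first, then this one.
        for upstream in self.inputs:
            signal = upstream(signal)
        return self.step(signal)


# High-level block: picks a coarse subgoal from the raw observation.
high_level = Block("subgoal_policy", step=lambda obs: [round(x) for x in obs])

# Low-level block: turns the subgoal into a primitive action (random noise
# around the subgoal stands in for a learned tracking controller).
low_level = Block(
    "tracking_controller",
    step=lambda subgoal: [g + random.uniform(-0.1, 0.1) for g in subgoal],
    inputs=[high_level],
)

if __name__ == "__main__":
    observation = [0.7, -1.2]
    action = low_level(observation)  # observation -> subgoal -> action
    print("action:", action)
```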
author
Tateo, Davide; Erdenlig, Idil Su and Bonarini, Andrea
publishing date
2019
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
host publication
2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019
series title
IEEE International Conference on Intelligent Robots and Systems
article number
8968252
pages
7 pages
publisher
IEEE - Institute of Electrical and Electronics Engineers Inc.
conference name
2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019
conference location
Macau, China
conference dates
2019-11-03 - 2019-11-08
external identifiers
  • scopus:85081160847
ISSN
2153-0866
2153-0858
ISBN
9781728140049
DOI
10.1109/IROS40897.2019.8968252
language
English
LU publication?
no
id
64c4a250-9a1f-46da-a297-65aad1a98d0f
date added to LUP
2025-10-16 14:40:31
date last changed
2026-01-09 12:14:25
@inproceedings{64c4a250-9a1f-46da-a297-65aad1a98d0f,
  abstract     = {{There is increasing interest in using Reinforcement Learning to solve new and more challenging problems, such as those emerging in robotics and unmanned autonomous vehicles. To face these complex systems, a hierarchical and multi-scale representation is crucial. This has drawn attention to Hierarchical Deep Reinforcement Learning systems. Despite their successful applications, Deep Reinforcement Learning systems suffer from a variety of drawbacks: they are data hungry, they lack interpretability, and it is difficult to derive theoretical properties about their behavior. Classical Hierarchical Reinforcement Learning approaches, while not suffering from these drawbacks, are often suited only for finite action and state spaces. Furthermore, in most works there is no systematic way to represent domain knowledge, which is often embedded only in the reward function. We present a novel Hierarchical Reinforcement Learning framework based on the hierarchical design approach typical of control theory. We developed our framework by extending the block diagram representation of control systems to fit the needs of a Hierarchical Reinforcement Learning scenario, thus making it possible to integrate domain knowledge into an effective hierarchical architecture.}},
  author       = {{Tateo, Davide and Erdenlig, Idil Su and Bonarini, Andrea}},
  booktitle    = {{2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019}},
  isbn         = {{9781728140049}},
  issn         = {{2153-0866}},
  language     = {{eng}},
  pages        = {{1003--1009}},
  publisher    = {{IEEE - Institute of Electrical and Electronics Engineers Inc.}},
  series       = {{IEEE International Conference on Intelligent Robots and Systems}},
  title        = {{Graph-based design of hierarchical reinforcement learning agents}},
  url          = {{http://dx.doi.org/10.1109/IROS40897.2019.8968252}},
  doi          = {{10.1109/IROS40897.2019.8968252}},
  year         = {{2019}},
}