
Lund University Publications

LUND UNIVERSITY LIBRARIES

Sharing knowledge in multi-task deep reinforcement learning

D'Eramo, Carlo; Tateo, Davide; Bonarini, Andrea; Restelli, Marcello and Peters, Jan (2020) 8th International Conference on Learning Representations, ICLR 2020
Abstract

We study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning. We leverage the assumption that learning from different tasks that share common properties helps generalize knowledge across them, resulting in more effective feature extraction than learning a single task alone. Intuitively, the resulting set of features offers performance benefits when used by Reinforcement Learning algorithms. We prove this by providing theoretical guarantees that highlight the conditions under which it is convenient to share representations among tasks, extending the well-known finite-time bounds of Approximate Value-Iteration to the multi-task setting. In addition, we complement our analysis by proposing multi-task extensions of three Reinforcement Learning algorithms, which we empirically evaluate on widely used Reinforcement Learning benchmarks, showing significant improvements over their single-task counterparts in terms of sample efficiency and performance.
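
For context, the "finite-time bounds of Approximate Value-Iteration" mentioned above build on the classical error-propagation guarantee for approximate value iteration. A standard asymptotic form of that guarantee is sketched below for reference; the paper's multi-task extension and its finite-time constants are not reproduced here.

% If every iteration of approximate value iteration incurs at most \varepsilon
% sup-norm error with respect to the optimal Bellman operator T^*, and \pi_k is
% greedy with respect to the k-th approximation Q_k, then
\[
  \|Q_{k+1} - T^{*} Q_{k}\|_{\infty} \le \varepsilon \quad \text{for all } k
  \;\;\Longrightarrow\;\;
  \limsup_{k \to \infty} \big\| Q^{*} - Q^{\pi_k} \big\|_{\infty}
  \le \frac{2\gamma\,\varepsilon}{(1-\gamma)^{2}},
\]
% where \gamma \in (0,1) is the discount factor; finite-time statements add a
% term that decays as \gamma^{K} with the number of iterations K.

Per the abstract, the paper extends bounds of this type to the setting where the function approximator is shared across tasks, highlighting when sharing is convenient.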

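The abstract does not spell out an architecture, but the shared-representation idea can be illustrated with a minimal sketch: a Q-network whose feature-extraction torso is shared by all tasks while each task keeps its own output head, so that every task's TD updates shape the common features. The class and function names (SharedMultiTaskQNet, multi_task_td_loss), layer sizes, and the DQN-style loss are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class SharedMultiTaskQNet(nn.Module):
    """Q-network with a torso shared across tasks and one head per task (illustrative)."""

    def __init__(self, state_dim: int, n_actions: int, n_tasks: int, hidden: int = 128):
        super().__init__()
        # Shared torso: features are learned jointly from every task's transitions.
        self.torso = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Task-specific heads: each task maps the shared features to its own Q-values.
        self.heads = nn.ModuleList(nn.Linear(hidden, n_actions) for _ in range(n_tasks))

    def forward(self, state: torch.Tensor, task_id: int) -> torch.Tensor:
        return self.heads[task_id](self.torso(state))


def multi_task_td_loss(net, target_net, batches, gamma=0.99):
    """Sum of per-task one-step TD losses; every task's gradient updates the shared torso."""
    loss = torch.zeros(())
    for task_id, (s, a, r, s_next, done) in enumerate(batches):
        q = net(s, task_id).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            q_next = target_net(s_next, task_id).max(dim=1).values
            target = r + gamma * (1.0 - done) * q_next
        loss = loss + nn.functional.mse_loss(q, target)
    return loss

Compared with training one such network per task, the torso gradients here accumulate across all tasks, which is the mechanism the abstract credits for improved sample efficiency.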
Please use this url to cite or link to this publication:
author
D'Eramo, Carlo; Tateo, Davide; Bonarini, Andrea; Restelli, Marcello and Peters, Jan
publishing date
2020
type
Contribution to conference
publication status
published
subject
conference name
8th International Conference on Learning Representations, ICLR 2020
conference location
Addis Ababa, Ethiopia
conference dates
2020-04-30
external identifiers
  • scopus:85134072457
language
English
LU publication?
no
id
9bebffc1-3495-4629-9a83-fdbb88bfeb06
date added to LUP
2025-10-16 14:39:10
date last changed
2025-10-22 13:50:44
@misc{9bebffc1-3495-4629-9a83-fdbb88bfeb06,
  abstract     = {{We study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning. We leverage the assumption that learning from different tasks, sharing common properties, is helpful to generalize the knowledge of them resulting in a more effective feature extraction compared to learning a single task. Intuitively, the resulting set of features offers performance benefits when used by Reinforcement Learning algorithms. We prove this by providing theoretical guarantees that highlight the conditions for which is convenient to share representations among tasks, extending the well-known finite-time bounds of Approximate Value-Iteration to the multi-task setting. In addition, we complement our analysis by proposing multi-task extensions of three Reinforcement Learning algorithms that we empirically evaluate on widely used Reinforcement Learning benchmarks showing significant improvements over the single-task counterparts in terms of sample efficiency and performance.}},
  author       = {{D'Eramo, Carlo and Tateo, Davide and Bonarini, Andrea and Restelli, Marcello and Peters, Jan}},
  language     = {{eng}},
  title        = {{Sharing knowledge in multi-task deep reinforcement learning}},
  year         = {{2020}},
}