
Lund University Publications

LUND UNIVERSITY LIBRARIES

Dimensionality reduction and prioritized exploration for policy search

Memmel, Marius; Liu, Puze; Tateo, Davide and Peters, Jan (2022) 25th International Conference on Artificial Intelligence and Statistics, AISTATS 2022. In Proceedings of Machine Learning Research 151, pp. 2134-2157
Abstract

Black-box policy optimization is a class of reinforcement learning algorithms that explores and updates the policies at the parameter level. This class of algorithms is widely applied in robotics with movement primitives or non-differentiable policies. Furthermore, these approaches are particularly relevant where exploration at the action level could cause actuator damage or other safety issues. However, black-box optimization does not scale well with increasing policy dimensionality, leading to a high demand for samples, which are expensive to obtain in real-world systems. In many practical applications, policy parameters do not contribute equally to the return. Identifying the most relevant parameters allows us to narrow down the exploration and speed up learning. Furthermore, updating only the effective parameters requires fewer samples, improving the scalability of the method. We present a novel method to prioritize the exploration of effective parameters and cope with full covariance matrix updates. Our algorithm learns faster than recent approaches and requires fewer samples to achieve state-of-the-art results. To select the effective parameters, we consider both the Pearson correlation coefficient and the Mutual Information. We showcase the capabilities of our approach on the Relative Entropy Policy Search algorithm in several simulated environments, including robotics simulations. Code is available at git.ias.informatik.tu-darmstadt.de/ias_code/aistats2022/dr-creps.

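The abstract describes selecting "effective" policy parameters via the Pearson correlation coefficient or the Mutual Information before prioritizing exploration. Below is a minimal, hypothetical sketch of that selection step only; it is not the authors' DR-CREPS implementation, and the names (select_effective_parameters, theta_samples, k) are invented for illustration. It ranks parameter dimensions by the absolute Pearson correlation between each dimension and the episodic return and keeps the top-k dimensions for exploration.

import numpy as np

def select_effective_parameters(theta_samples, returns, k):
    """Rank policy parameters by |Pearson correlation| with the return.

    theta_samples : (n_episodes, n_params) array of sampled policy parameters
    returns       : (n_episodes,) array of episodic returns
    k             : number of "effective" parameters to keep for exploration
    """
    theta_c = theta_samples - theta_samples.mean(axis=0)
    ret_c = returns - returns.mean()
    # Per-dimension covariance with the return, normalized to a Pearson correlation.
    cov = theta_c.T @ ret_c / len(returns)
    corr = cov / (theta_samples.std(axis=0) * returns.std() + 1e-12)
    # For nonlinear dependencies, a Mutual Information estimator
    # (e.g. sklearn.feature_selection.mutual_info_regression) could replace corr.
    return np.argsort(-np.abs(corr))[:k]

# Toy usage: 100 rollouts of a 25-dimensional parameter vector where only
# dimensions 3 and 7 actually influence the return.
rng = np.random.default_rng(0)
theta = rng.normal(size=(100, 25))
R = 2.0 * theta[:, 3] - 1.5 * theta[:, 7] + 0.1 * rng.normal(size=100)
print(select_effective_parameters(theta, R, k=2))  # expected to pick indices 3 and 7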
author
Memmel, Marius; Liu, Puze; Tateo, Davide and Peters, Jan
publishing date
2022
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
host publication
25th International Conference on Artificial Intelligence and Statistics, AISTATS 2022
series title
Proceedings of Machine Learning Research
volume
151
pages
24 pages
conference name
25th International Conference on Artificial Intelligence and Statistics, AISTATS 2022
conference location
Virtual, Online, Spain
conference dates
2022-03-28 - 2022-03-30
external identifiers
  • scopus:85163092866
ISSN
2640-3498
language
English
LU publication?
no
id
7f2bb0d6-c18b-4ca0-9e04-a23c388e3392
alternative location
https://proceedings.mlr.press/v151/memmel22a.html
date added to LUP
2025-10-16 14:32:20
date last changed
2025-10-21 08:17:30
@inproceedings{7f2bb0d6-c18b-4ca0-9e04-a23c388e3392,
  abstract     = {{<p>Black-box policy optimization is a class of reinforcement learning algorithms that explores and updates the policies at the parameter level. This class of algorithms is widely applied in robotics with movement primitives or non-differentiable policies. Furthermore, these approaches are particularly relevant where exploration at the action level could cause actuator damage or other safety issues. However, black-box optimization does not scale well with increasing policy dimensionality, leading to a high demand for samples, which are expensive to obtain in real-world systems. In many practical applications, policy parameters do not contribute equally to the return. Identifying the most relevant parameters allows us to narrow down the exploration and speed up learning. Furthermore, updating only the effective parameters requires fewer samples, improving the scalability of the method. We present a novel method to prioritize the exploration of effective parameters and cope with full covariance matrix updates. Our algorithm learns faster than recent approaches and requires fewer samples to achieve state-of-the-art results. To select the effective parameters, we consider both the Pearson correlation coefficient and the Mutual Information. We showcase the capabilities of our approach on the Relative Entropy Policy Search algorithm in several simulated environments, including robotics simulations. Code is available at git.ias.informatik.tu-darmstadt.de/ias_code/aistats2022/dr-creps.</p>}},
  author       = {{Memmel, Marius and Liu, Puze and Tateo, Davide and Peters, Jan}},
  booktitle    = {{25th International Conference on Artificial Intelligence and Statistics, AISTATS 2022}},
  issn         = {{2640-3498}},
  language     = {{eng}},
  pages        = {{2134--2157}},
  series       = {{Proceedings of Machine Learning Research}},
  title        = {{Dimensionality reduction and prioritized exploration for policy search}},
  url          = {{https://proceedings.mlr.press/v151/memmel22a.html}},
  volume       = {{151}},
  year         = {{2022}},
}