LS-IQ: implicit reward regularization for inverse reinforcement learning
(2023) 11th International Conference on Learning Representations, ICLR 2023
- abstract
Recent methods for imitation learning directly learn a Q-function using an implicit reward formulation rather than an explicit reward function. However, these methods generally require implicit reward regularization to improve stability and often mistreat absorbing states. Previous works show that a squared norm regularization on the implicit reward function is effective, but do not provide a theoretical analysis of the resulting properties of the algorithms. In this work, we show that using this regularizer under a mixture distribution of the policy and the expert provides a particularly illuminating perspective: the original objective can be understood as squared Bellman error minimization, and the corresponding optimization problem minimizes a bounded χ²-divergence between the expert and the mixture distribution. This perspective allows us to address instabilities and properly treat absorbing states. We show that our method, Least Squares Inverse Q-Learning (LS-IQ), outperforms state-of-the-art algorithms, particularly in environments with absorbing states. Finally, we propose to use an inverse dynamics model to learn from observations only. Using this approach, we retain performance in settings where no expert actions are available.
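The abstract describes the core construction only verbally. Below is a minimal, illustrative sketch (not the authors' code) of how an implicit reward r(s, a) = Q(s, a) - gamma * V(s') and a squared-norm regularizer evaluated under an equal policy/expert mixture could be written in PyTorch; the names q_net and value_fn, the 50/50 mixture, and the handling of terminal flags are assumptions made here for exposition.

import torch

def implicit_reward(q_net, value_fn, state, action, next_state, done, gamma=0.99):
    # Implicit reward r(s, a) = Q(s, a) - gamma * V(s'); terminal transitions
    # (done = 1) drop the bootstrapped value term.
    q = q_net(state, action)
    v_next = value_fn(next_state)
    return q - gamma * (1.0 - done) * v_next

def squared_reward_regularizer(r_expert, r_policy):
    # Squared-norm penalty on the implicit reward, taken over an equal
    # mixture of expert and policy samples (illustrative weighting).
    r_mix = torch.cat([r_expert, r_policy], dim=0)
    return 0.5 * (r_mix ** 2).mean()

Minimizing the expert/policy reward gap together with a penalty of this form is what the abstract frames as squared Bellman error minimization.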
- author
- Al-Hafez, Firas
; Tateo, Davide
; Arenz, Oleg
; Zhao, Guoping
and Peters, Jan
- publishing date
- 2023
- type
- Contribution to conference
- publication status
- published
- subject
- conference name
- 11th International Conference on Learning Representations, ICLR 2023
- conference location
- Kigali, Rwanda
- conference dates
- 2023-05-01 - 2023-05-05
- external identifiers
-
- scopus:85165187144
- language
- English
- LU publication?
- no
- id
- db4e055a-a9ee-4f1b-a455-646a3caaea77
- date added to LUP
- 2025-10-16 14:19:19
- date last changed
- 2025-10-21 08:03:36
@misc{db4e055a-a9ee-4f1b-a455-646a3caaea77,
abstract = {{Recent methods for imitation learning directly learn a Q-function using an implicit reward formulation rather than an explicit reward function. However, these methods generally require implicit reward regularization to improve stability and often mistreat absorbing states. Previous works show that a squared norm regularization on the implicit reward function is effective, but do not provide a theoretical analysis of the resulting properties of the algorithms. In this work, we show that using this regularizer under a mixture distribution of the policy and the expert provides a particularly illuminating perspective: the original objective can be understood as squared Bellman error minimization, and the corresponding optimization problem minimizes a bounded χ²-divergence between the expert and the mixture distribution. This perspective allows us to address instabilities and properly treat absorbing states. We show that our method, Least Squares Inverse Q-Learning (LS-IQ), outperforms state-of-the-art algorithms, particularly in environments with absorbing states. Finally, we propose to use an inverse dynamics model to learn from observations only. Using this approach, we retain performance in settings where no expert actions are available.}},
author = {{Al-Hafez, Firas and Tateo, Davide and Arenz, Oleg and Zhao, Guoping and Peters, Jan}},
language = {{eng}},
title = {{LS-IQ: implicit reward regularization for inverse reinforcement learning}},
year = {{2023}},
}