MushroomRL: Simplifying Reinforcement Learning Research
(2021) In Journal of Machine Learning Research 22.
- Abstract
MushroomRL is an open-source Python library developed to simplify the process of implementing and running Reinforcement Learning (RL) experiments. Compared to other available libraries, MushroomRL has been created with the purpose of providing a comprehensive and flexible framework to minimize the effort in implementing and testing novel RL methodologies. The architecture of MushroomRL is built in such a way that every component of a typical RL experiment is already provided, and most of the time users can only focus on the implementation of their own algorithms. MushroomRL is accompanied by a benchmarking suite collecting experimental results of state-of-the-art deep RL algorithms, and allowing to benchmark new ones. The result is a library from which RL researchers can significantly benefit in the critical phase of the empirical analysis of their works. MushroomRL stable code, tutorials, and documentation can be found at https://github.com/MushroomRL/mushroom-rl.
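To make the abstract's claim concrete, a minimal usage sketch in the spirit of the library's tutorials follows; the class and module names (GridWorld, EpsGreedy, QLearning, Core, Parameter) follow the MushroomRL 1.x documentation, but exact signatures and module paths are assumptions and may differ across library versions.

# Minimal sketch (assumed MushroomRL 1.x API; names and paths may differ across versions).
from mushroom_rl.algorithms.value import QLearning
from mushroom_rl.core import Core
from mushroom_rl.environments import GridWorld
from mushroom_rl.policy import EpsGreedy
from mushroom_rl.utils.parameters import Parameter

# Environment: a small tabular MDP shipped with the library.
mdp = GridWorld(width=3, height=3, goal=(2, 2), start=(0, 0))

# Exploration policy and learning rate are wrapped in Parameter objects.
policy = EpsGreedy(epsilon=Parameter(value=0.1))
agent = QLearning(mdp.info, policy, learning_rate=Parameter(value=0.6))

# The Core object runs the interaction loop between agent and environment.
core = Core(agent, mdp)
core.learn(n_steps=10000, n_steps_per_fit=1)

# Collect evaluation episodes with the learned policy.
dataset = core.evaluate(n_episodes=10)
print(len(dataset), "transitions collected during evaluation")

The point of the sketch is the division of labor described in the abstract: environment, policy, and experiment loop come from the library, so only the algorithm itself would need to be replaced by a user's own implementation.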
- author
- D'Eramo, Carlo
; Tateo, Davide
; Bonarini, Andrea
; Restelli, Marcello
and Peters, Jan
- publishing date
- 2021-06-01
- type
- Contribution to journal
- publication status
- published
- subject
- keywords
- Benchmarking, Open-source, Python, Reinforcement learning
- in
- Journal of Machine Learning Research
- volume
- 22
- article number
- A2
- publisher
- Microtome Publishing
- external identifiers
- scopus:85112427754
- ISSN
- 1532-4435
- language
- English
- LU publication?
- no
- additional info
- Publisher Copyright: © 2021 Carlo D'Eramo, Davide Tateo, Andrea Bonarini, Marcello Restelli and Jan Peters.
- id
- 0bb352ef-da49-4eb5-8a63-a3dfa918758d
- date added to LUP
- 2025-10-16 14:38:42
- date last changed
- 2025-10-22 10:21:03
@article{0bb352ef-da49-4eb5-8a63-a3dfa918758d,
abstract = {{MushroomRL is an open-source Python library developed to simplify the process of implementing and running Reinforcement Learning (RL) experiments. Compared to other available libraries, MushroomRL has been created with the purpose of providing a comprehensive and flexible framework to minimize the effort in implementing and testing novel RL methodologies. The architecture of MushroomRL is built in such a way that every component of a typical RL experiment is already provided, and most of the time users can only focus on the implementation of their own algorithms. MushroomRL is accompanied by a benchmarking suite collecting experimental results of state-of-the-art deep RL algorithms, and allowing to benchmark new ones. The result is a library from which RL researchers can significantly benefit in the critical phase of the empirical analysis of their works. MushroomRL stable code, tutorials, and documentation can be found at https://github.com/MushroomRL/mushroom-rl.}},
author = {{D'Eramo, Carlo and Tateo, Davide and Bonarini, Andrea and Restelli, Marcello and Peters, Jan}},
issn = {{1532-4435}},
keywords = {{Benchmarking; Open-source; Python; Reinforcement learning}},
language = {{eng}},
month = {{06}},
publisher = {{Microtome Publishing}},
journal = {{Journal of Machine Learning Research}},
title = {{MushroomRL: Simplifying Reinforcement Learning Research}},
volume = {{22}},
year = {{2021}},
}