
Lund University Publications


Fine-tuning Myoelectric Control through Reinforcement Learning in a Game Environment

Freitag, Kilian ; Karayiannidis, Yiannis ; Zbinden, Jan and Laezza, Rita (2025) In IEEE Transactions on Biomedical Engineering
Abstract


Objective: Enhancing the reliability of myoelectric controllers that decode motor intent is a pressing challenge in the field of bionic prosthetics. State-of-the-art research has mostly focused on Supervised Learning (SL) techniques to tackle this problem. However, obtaining high-quality labeled data that accurately represents muscle activity during daily usage remains difficult. We investigate the potential of Reinforcement Learning (RL) to further improve the decoding of human motion intent by incorporating usage-based data. Methods: The starting point of our method is an SL control policy, pretrained on a static recording of electromyographic (EMG) ground truth data. We then apply RL to fine-tune the pretrained classifier with dynamic EMG data obtained during interaction with a game environment developed for this work. We conducted real-time experiments to evaluate our approach and achieved significant improvements in human-in-the-loop performance. Results: The method effectively predicts simultaneous finger movements, leading to a two-fold increase in decoding accuracy during gameplay and a 39% improvement in a separate motion test. Conclusion: By employing RL and incorporating usage-based EMG data during fine-tuning, our method achieves significant improvements in accuracy and robustness. Significance: These results showcase the potential of RL for enhancing the reliability of myoelectric controllers, which is of particular importance for advanced bionic limbs. See our project page for visual demonstrations: https://sites.google.com/view/bionic-limb-rl.
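The two-stage pipeline in the abstract (supervised pretraining of a classifier, then policy-gradient fine-tuning from interaction rewards) can be illustrated with a minimal, self-contained sketch. Everything here is hypothetical: the toy linear policy, the synthetic "intent" data, and the 0/1 reward are stand-ins for the paper's actual EMG features, network architecture, and game feedback, which are not reproduced.

```python
# Minimal sketch of SL pretraining followed by REINFORCE-style RL fine-tuning.
# All data and dimensions are illustrative, not taken from the paper.
import math
import random

random.seed(0)
N_FEATURES, N_CLASSES = 4, 3  # stand-ins for EMG channels -> movement classes

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def logits(W, x):
    return [sum(W[i][k] * x[i] for i in range(N_FEATURES))
            for k in range(N_CLASSES)]

# Hidden ground-truth mapping that generates "intended" movements.
W_true = [[random.gauss(0, 1) for _ in range(N_CLASSES)]
          for _ in range(N_FEATURES)]

def sample():
    x = [random.gauss(0, 1) for _ in range(N_FEATURES)]
    y = max(range(N_CLASSES), key=lambda k: logits(W_true, x)[k])
    return x, y

# --- Stage 1: supervised pretraining (cross-entropy gradient descent) ---
W = [[0.0] * N_CLASSES for _ in range(N_FEATURES)]
for _ in range(300):
    x, y = sample()
    p = softmax(logits(W, x))
    for i in range(N_FEATURES):
        for k in range(N_CLASSES):
            W[i][k] -= 0.1 * x[i] * (p[k] - (1.0 if k == y else 0.0))

# --- Stage 2: RL fine-tuning from interaction rewards (REINFORCE) ---
for _ in range(300):
    x, y = sample()
    p = softmax(logits(W, x))
    a = random.choices(range(N_CLASSES), weights=p)[0]  # sample an action
    r = 1.0 if a == y else 0.0  # stand-in for game feedback
    # grad of log pi(a|x) for a linear softmax policy
    for i in range(N_FEATURES):
        for k in range(N_CLASSES):
            W[i][k] += 0.05 * r * x[i] * ((1.0 if k == a else 0.0) - p[k])

# Evaluate greedy decoding accuracy on fresh samples.
correct = 0
for _ in range(200):
    x, y = sample()
    p = softmax(logits(W, x))
    correct += max(range(N_CLASSES), key=lambda k: p[k]) == y
print(f"accuracy: {correct / 200:.2f}")
```

Note the structural point the sketch makes: stage 2 needs no labels at gradient time, only a scalar reward, which is why usage-based interaction data (here, the 0/1 feedback) can refine a classifier that was pretrained on static labeled recordings.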

author
Freitag, Kilian ; Karayiannidis, Yiannis ; Zbinden, Jan and Laezza, Rita
organization
publishing date
2025
type
Contribution to journal
publication status
in press
subject
keywords
Deep Learning, Electromyography, Human computer interaction, Prosthetic limbs, Reinforcement learning
in
IEEE Transactions on Biomedical Engineering
publisher
IEEE - Institute of Electrical and Electronics Engineers Inc.
external identifiers
  • pmid:40498601
  • scopus:105008094821
ISSN
0018-9294
DOI
10.1109/TBME.2025.3578855
project
ELLIIT B14: Autonomous Force-Aware Swift Motion Control
language
English
LU publication?
yes
additional info
Publisher Copyright: © 2025 IEEE.
id
375a6a11-2dab-4601-9be1-6dddc7f5f19e
date added to LUP
2025-10-19 18:31:10
date last changed
2025-10-24 13:42:26
@article{375a6a11-2dab-4601-9be1-6dddc7f5f19e,
  abstract     = {{<p>Objective: Enhancing the reliability of myoelectric controllers that decode motor intent is a pressing challenge in the field of bionic prosthetics. State-of-the-art research has mostly focused on Supervised Learning (SL) techniques to tackle this problem. However, obtaining high-quality labeled data that accurately represents muscle activity during daily usage remains difficult. We investigate the potential of Reinforcement Learning (RL) to further improve the decoding of human motion intent by incorporating usage-based data. Methods: The starting point of our method is an SL control policy, pretrained on a static recording of electromyographic (EMG) ground truth data. We then apply RL to fine-tune the pretrained classifier with dynamic EMG data obtained during interaction with a game environment developed for this work. We conducted real-time experiments to evaluate our approach and achieved significant improvements in human-in-the-loop performance. Results: The method effectively predicts simultaneous finger movements, leading to a two-fold increase in decoding accuracy during gameplay and a 39% improvement in a separate motion test. Conclusion: By employing RL and incorporating usage-based EMG data during fine-tuning, our method achieves significant improvements in accuracy and robustness. Significance: These results showcase the potential of RL for enhancing the reliability of myoelectric controllers, which is of particular importance for advanced bionic limbs. See our project page for visual demonstrations: https://sites.google.com/view/bionic-limb-rl.</p>}},
  author       = {{Freitag, Kilian and Karayiannidis, Yiannis and Zbinden, Jan and Laezza, Rita}},
  issn         = {{0018-9294}},
  keywords     = {{Deep Learning; Electromyography; Human computer interaction; Prosthetic limbs; Reinforcement learning}},
  language     = {{eng}},
  publisher    = {{IEEE - Institute of Electrical and Electronics Engineers Inc.}},
  series       = {{IEEE Transactions on Biomedical Engineering}},
  title        = {{Fine-tuning Myoelectric Control through Reinforcement Learning in a Game Environment}},
  url          = {{http://dx.doi.org/10.1109/TBME.2025.3578855}},
  doi          = {{10.1109/TBME.2025.3578855}},
  year         = {{2025}},
}