
Lund University Publications


Adaptive interface for mapping body movements to sounds

Marković, Dimitrije and Malešević, Nebojša (2018) 7th International Conference on Computational Intelligence in Music, Sound, Art and Design, EvoMUSART 2018. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 10783 LNCS, pp. 194-205
Abstract

Contemporary digital musical instruments allow an abundance of means to generate sound. Although superior to traditional instruments in terms of producing a unique audio-visual act, there is still an unmet need for digital instruments that allow performers to generate sounds through movements in an intuitive manner. One of the key factors for an authentic digital music act is a low latency between movements (user commands) and the corresponding sounds. Here we present such a low-latency interface that maps the user’s kinematic actions into sound samples. The interface relies on wireless sensor nodes equipped with inertial measurement units and a real-time algorithm dedicated to the early detection and classification of a variety of movements/gestures performed by a user. The core algorithm is based on the approximate inference of a hierarchical generative model with piecewise-linear dynamical components. Importantly, the model’s structure is derived from a set of motion gestures. The performance of the Bayesian algorithm was compared against the k-nearest neighbors (k-NN) algorithm, which, in a pre-testing phase, showed the highest classification accuracy among several existing state-of-the-art algorithms. The proposed probabilistic algorithm outperformed the k-NN algorithm on almost all of the evaluation metrics.
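The classification idea behind the generative model can be illustrated with a toy sketch: fit one linear dynamical rule per gesture and label a new trace by whichever rule predicts it best. Everything below (the gesture names, coefficients, and 1-D traces) is hypothetical and stands in for the paper's hierarchical model and IMU data; scoring by one-step prediction error is only a crude stand-in for the approximate Bayesian inference the abstract describes.

```python
# Toy gesture classifier via per-gesture linear dynamics x[t+1] = a_g * x[t].
# Coefficients and traces are hypothetical; real gestures would use
# multivariate IMU signals and proper likelihoods, not squared error.

def prediction_error(trace, a):
    """Sum of squared one-step prediction errors under x[t+1] = a * x[t]."""
    return sum((trace[t + 1] - a * trace[t]) ** 2 for t in range(len(trace) - 1))

# One linear dynamics coefficient per (hypothetical) gesture class.
models = {"rise": 1.1, "decay": 0.8}

def classify(trace):
    """Label the trace with the gesture whose dynamics predict it best."""
    return min(models, key=lambda g: prediction_error(trace, models[g]))

trace = [1.0, 1.1, 1.21, 1.33]  # roughly geometric growth at rate ~1.1
print(classify(trace))  # → rise
```

Because each model scores the whole trace incrementally, a running version of this score can be updated sample by sample, which is what makes early (low-latency) classification possible in the spirit of the paper's real-time algorithm.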

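The k-NN baseline mentioned in the abstract reduces to majority voting among the nearest training examples. A minimal sketch, assuming feature vectors have already been extracted from IMU windows (the toy data, feature dimensions, and gesture labels below are hypothetical):

```python
# Minimal k-NN gesture classifier sketch (illustrative only).
# Each 2-D point stands in for summary features of an IMU window.
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training examples under Euclidean distance."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy training set: (feature vector, gesture label).
train = [
    ((0.1, 0.2), "swipe"), ((0.2, 0.1), "swipe"), ((0.15, 0.15), "swipe"),
    ((0.9, 0.8), "shake"), ((0.8, 0.9), "shake"), ((0.85, 0.85), "shake"),
]

print(knn_classify(train, (0.12, 0.18)))  # → swipe
```

Note that k-NN needs the full feature vector before it can vote, whereas the generative model can score partial traces, which is one reason a probabilistic approach suits the early-detection setting.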
author: Marković, Dimitrije and Malešević, Nebojša
publishing date: 2018-01
type: Chapter in Book/Report/Conference proceeding
publication status: published
host publication: Computational Intelligence in Music, Sound, Art and Design - 7th International Conference, EvoMUSART 2018, Proceedings
series title: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
volume: 10783 LNCS
pages: 194-205 (12 pages)
publisher: Springer
conference name: 7th International Conference on Computational Intelligence in Music, Sound, Art and Design, EvoMUSART 2018
conference location: Parma, Italy
conference dates: 2018-04-04 - 2018-04-06
external identifiers: scopus:85044664926
ISSN: 0302-9743, 1611-3349
ISBN: 9783319775821
DOI: 10.1007/978-3-319-77583-8_13
language: English
LU publication?: yes
id: aed11846-36cf-47a0-b41f-15d6aa26ceaa
date added to LUP: 2018-04-12 13:24:00
date last changed: 2024-06-24 12:59:47
@inproceedings{aed11846-36cf-47a0-b41f-15d6aa26ceaa,
  abstract     = {{<p>Contemporary digital musical instruments allow an abundance of means to generate sound. Although superior to traditional instruments in terms of producing a unique audio-visual act, there is still an unmet need for digital instruments that allow performers to generate sounds through movements in an intuitive manner. One of the key factors for an authentic digital music act is a low latency between movements (user commands) and corresponding sounds. Here we present such a low-latency interface that maps the user’s kinematic actions into sound samples. The interface relies on wireless sensor nodes equipped with inertial measurement units and a real-time algorithm dedicated to the early detection and classification of a variety of movements/gestures performed by a user. The core algorithm is based on the approximate inference of a hierarchical generative model with piecewise-linear dynamical components. Importantly, the model’s structure is derived from a set of motion gestures. The performance of the Bayesian algorithm was compared against the k-nearest neighbors (k-NN) algorithm, which showed the highest classification accuracy, in a pre-testing phase, among several existing state-of-the-art algorithms. In almost all of the evaluation metrics the proposed probabilistic algorithm outperformed the k-NN algorithm.</p>}},
  author       = {{Marković, Dimitrije and Malešević, Nebojša}},
  booktitle    = {{Computational Intelligence in Music, Sound, Art and Design - 7th International Conference, EvoMUSART 2018, Proceedings}},
  isbn         = {{9783319775821}},
  issn         = {{0302-9743}},
  language     = {{eng}},
  month        = {{01}},
  pages        = {{194--205}},
  publisher    = {{Springer}},
  series       = {{Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}},
  title        = {{Adaptive interface for mapping body movements to sounds}},
  url          = {{http://dx.doi.org/10.1007/978-3-319-77583-8_13}},
  doi          = {{10.1007/978-3-319-77583-8_13}},
  volume       = {{10783 LNCS}},
  year         = {{2018}},
}