
Lund University Publications


The LuViRA Dataset: Synchronized Vision, Radio, and Audio Sensors for Indoor Localization

Yaman, Ilayda; Tian, Guoda; Larsson, Martin; Persson, Patrik; Sandra, Michiel; Dürr, Alexander; Tegler, Erik; Challa, Nikhil; Garde, Henrik; Tufvesson, Fredrik; Åström, Kalle; Edfors, Ove and Liu, Liang (2024) 2024 IEEE International Conference on Robotics and Automation, ICRA 2024
Abstract
We present a synchronized multisensory dataset for accurate and robust indoor localization: the Lund University Vision, Radio, and Audio (LuViRA) Dataset. The dataset includes color images, corresponding depth maps, inertial measurement unit (IMU) readings, channel response between a 5G massive multiple-input and multiple-output (MIMO) testbed and user equipment, audio recorded by 12 microphones, and accurate six degrees of freedom (6DOF) pose ground truth of 0.5 mm. We synchronize these sensors to ensure that all data is recorded simultaneously. A camera, speaker, and transmit antenna are placed on top of a slowly moving service robot, and 89 trajectories are recorded. Each trajectory includes 20 to 50 seconds of recorded sensor data and ground truth labels. Data from different sensors can be used separately or jointly to perform localization tasks, and data from the motion capture (mocap) system is used to verify the results obtained by the localization algorithms. The main aim of this dataset is to enable research on sensor fusion with the most commonly used sensors for localization tasks. Moreover, the full dataset or some parts of it can also be used for other research areas such as channel estimation, image classification, etc. Our dataset is available at: https://github.com/ilaydayaman/LuViRA_Dataset
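
The abstract describes a per-trajectory structure: camera frames, depth maps, IMU samples, radio channel responses, audio, and time-stamped 6DOF mocap ground truth. As a rough illustration of how such a recording might be consumed, the minimal Python sketch below loads one trajectory and looks up the ground-truth pose at a given time. The directory layout, file names, and column orders here are assumptions made purely for illustration; the actual structure is documented in the dataset's GitHub repository linked above.

```python
# Minimal sketch of loading one LuViRA-style trajectory.
# NOTE: the directory layout, file names, and column orders below are
# hypothetical; see https://github.com/ilaydayaman/LuViRA_Dataset for
# the actual dataset structure.
from pathlib import Path

import numpy as np


def load_trajectory(root: Path, traj_id: int) -> dict:
    """Load per-sensor data for a single recorded trajectory (assumed layout)."""
    traj = root / f"trajectory_{traj_id:02d}"  # assumed naming scheme

    # Mocap ground truth: assumed CSV columns [t, x, y, z, qx, qy, qz, qw]
    # (6DOF pose; 0.5 mm position accuracy per the paper).
    gt = np.loadtxt(traj / "mocap_groundtruth.csv", delimiter=",", skiprows=1)

    # IMU readings: assumed CSV columns [t, ax, ay, az, gx, gy, gz].
    imu = np.loadtxt(traj / "imu.csv", delimiter=",", skiprows=1)

    # RGB frames and depth maps, assumed one timestamped file each.
    rgb_frames = sorted((traj / "rgb").glob("*.png"))
    depth_maps = sorted((traj / "depth").glob("*.png"))

    return {"ground_truth": gt, "imu": imu, "rgb": rgb_frames, "depth": depth_maps}


def pose_at(gt: np.ndarray, t: float) -> np.ndarray:
    """Nearest-neighbour lookup of the ground-truth pose at time t."""
    idx = np.argmin(np.abs(gt[:, 0] - t))  # column 0 assumed to be time
    return gt[idx, 1:]  # [x, y, z, qx, qy, qz, qw]


if __name__ == "__main__":
    data = load_trajectory(Path("LuViRA_Dataset"), traj_id=1)
    print("IMU samples:", data["imu"].shape[0])
    print("Pose at t=5.0 s:", pose_at(data["ground_truth"], 5.0))
```

Because every sensor stream carries timestamps from the shared synchronization described in the abstract, a nearest-neighbour (or interpolated) lookup against the mocap time axis is one straightforward way to pair any sensor sample with its ground-truth pose.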
author
Yaman, Ilayda; Tian, Guoda; Larsson, Martin; Persson, Patrik; Sandra, Michiel; Dürr, Alexander; Tegler, Erik; Challa, Nikhil; Garde, Henrik; Tufvesson, Fredrik; Åström, Kalle; Edfors, Ove and Liu, Liang
publishing date
2024-08
type
Chapter in Book/Report/Conference proceeding
publication status
published
host publication
2024 IEEE International Conference on Robotics and Automation (ICRA)
publisher
IEEE - Institute of Electrical and Electronics Engineers Inc.
conference name
2024 IEEE International Conference on Robotics and Automation, ICRA 2024
conference location
Yokohama, Japan
conference dates
2024-05-13 - 2024-05-17
ISBN
979-8-3503-8457-4
DOI
10.1109/ICRA57147.2024.10610237
language
English
LU publication?
yes
id
9fe9290f-113f-4d49-9f9f-bbd314c2a768
date added to LUP
2024-09-07 07:46:15
date last changed
2024-09-09 11:12:52
@inproceedings{9fe9290f-113f-4d49-9f9f-bbd314c2a768,
  abstract     = {{We present a synchronized multisensory dataset for accurate and robust indoor localization: the Lund University Vision, Radio, and Audio (LuViRA) Dataset. The dataset includes color images, corresponding depth maps, inertial measurement unit (IMU) readings, channel response between a 5G massive multiple-input and multiple-output (MIMO) testbed and user equipment, audio recorded by 12 microphones, and accurate six degrees of freedom (6DOF) pose ground truth of 0.5 mm. We synchronize these sensors to ensure that all data is recorded simultaneously. A camera, speaker, and transmit antenna are placed on top of a slowly moving service robot, and 89 trajectories are recorded. Each trajectory includes 20 to 50 seconds of recorded sensor data and ground truth labels. Data from different sensors can be used separately or jointly to perform localization tasks, and data from the motion capture (mocap) system is used to verify the results obtained by the localization algorithms. The main aim of this dataset is to enable research on sensor fusion with the most commonly used sensors for localization tasks. Moreover, the full dataset or some parts of it can also be used for other research areas such as channel estimation, image classification, etc. Our dataset is available at: https://github.com/ilaydayaman/LuViRA_Dataset}},
  author       = {{Yaman, Ilayda and Tian, Guoda and Larsson, Martin and Persson, Patrik and Sandra, Michiel and Dürr, Alexander and Tegler, Erik and Challa, Nikhil and Garde, Henrik and Tufvesson, Fredrik and Åström, Kalle and Edfors, Ove and Liu, Liang}},
  booktitle    = {{2024 IEEE International Conference on Robotics and Automation (ICRA)}},
  isbn         = {{979-8-3503-8457-4}},
  language     = {{eng}},
  month        = {{08}},
  publisher    = {{IEEE - Institute of Electrical and Electronics Engineers Inc.}},
  title        = {{The LuViRA Dataset: Synchronized Vision, Radio, and Audio Sensors for Indoor Localization}},
  url          = {{http://dx.doi.org/10.1109/ICRA57147.2024.10610237}},
  doi          = {{10.1109/ICRA57147.2024.10610237}},
  year         = {{2024}},
}