The LuViRA Dataset: Synchronized Vision, Radio, and Audio Sensors for Indoor Localization

Yaman, Ilayda; Tian, Guoda; Larsson, Martin; Persson, Patrik, et al. (2024-08-08). The LuViRA Dataset: Synchronized Vision, Radio, and Audio Sensors for Indoor Localization. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pp. 11920–11926. Yokohama, Japan: IEEE - Institute of Electrical and Electronics Engineers Inc.
Conference Proceeding/Paper | Published | English
Authors:
Yaman, Ilayda; Tian, Guoda; Larsson, Martin; Persson, Patrik, et al.
Department:
Integrated Electronic Systems
LTH Profile Area: AI and Digitalization
ELLIIT: the Linköping-Lund initiative on IT and mobile communication
Communications Engineering
Computer Vision and Machine Learning
Mathematics (Faculty of Engineering)
Robotics and Semantic Systems
LU Profile Area: Natural and Artificial Cognition
eSSENCE: The e-Science Collaboration
Lund University Humanities Lab
LU Profile Area: Light and Materials
LTH Profile Area: Engineering Health
Stroke Imaging Research group
Mathematical Imaging Group
Department of Electrical and Information Technology
Embedded Electronics Engineering (M.Sc.)
Research Group:
Computer Vision and Machine Learning
Stroke Imaging Research group
Mathematical Imaging Group
Abstract:
We present a synchronized multisensory dataset for accurate and robust indoor localization: the Lund University Vision, Radio, and Audio (LuViRA) Dataset. The dataset includes color images, corresponding depth maps, inertial measurement unit (IMU) readings, channel responses between a 5G massive multiple-input multiple-output (MIMO) testbed and user equipment, audio recorded by 12 microphones, and six-degrees-of-freedom (6DOF) pose ground truth accurate to 0.5 mm. All sensors are synchronized so that their data is recorded simultaneously. A camera, speaker, and transmit antenna are mounted on top of a slowly moving service robot, and 89 trajectories are recorded, each containing 20 to 50 seconds of sensor data and ground-truth labels. Data from the different sensors can be used separately or jointly to perform localization tasks, and data from the motion capture (mocap) system is used to verify the results obtained by the localization algorithms. The main aim of this dataset is to enable research on sensor fusion with the sensors most commonly used for localization. Moreover, the full dataset, or parts of it, can also be used for other research areas such as channel estimation and image classification. Our dataset is available at: https://github.com/ilaydayaman/LuViRA_Dataset
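A common first step when working with such a multisensory dataset is to bring asynchronously sampled sensor streams onto a common timeline, e.g., resampling IMU readings onto the mocap ground-truth timestamps. The sketch below illustrates this with linear interpolation on synthetic data; the sample rates and stream names are assumptions for illustration only, not the dataset's actual format.

```python
import numpy as np

def align_to_timeline(t_src, x_src, t_ref):
    """Interpolate a sampled signal (t_src, x_src) onto reference timestamps t_ref."""
    return np.interp(t_ref, t_src, x_src)

# Simulated streams over a 20 s trajectory (the shortest in the dataset).
# Rates below are hypothetical, chosen only to demonstrate the alignment.
t_imu = np.arange(0.0, 20.0, 0.01)      # IMU timestamps at an assumed 100 Hz
accel_x = np.sin(t_imu)                 # one synthetic IMU channel
t_mocap = np.arange(0.0, 20.0, 0.005)   # mocap timestamps at an assumed 200 Hz

# One interpolated IMU value per mocap ground-truth timestamp.
accel_on_mocap = align_to_timeline(t_imu, accel_x, t_mocap)
print(accel_on_mocap.shape)
```

The same pattern extends to the other streams (audio frames, channel snapshots, image timestamps), letting localization results from each sensor be compared against the mocap poses at identical time instants.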
ISBN:
979-8-3503-8457-4
LUP-ID:
9fe9290f-113f-4d49-9f9f-bbd314c2a768 | Link: https://lup.lub.lu.se/record/9fe9290f-113f-4d49-9f9f-bbd314c2a768