
Real-Time Monocular SLAM System Using an Inertial Measurement Unit to Estimate the Metric Scale and Gravity Direction

Persson, Patrik (LU) (2018). In Master’s Theses in Mathematical Sciences, FMAM05 20172
Mathematics (Faculty of Engineering)
Abstract
In this work a simultaneous localisation and mapping (SLAM) system has been constructed that can incorporate inertial measurement unit (IMU) data, such as accelerometer and gyroscope readings, to estimate the metric scale and the gravity direction of the SLAM solution. The system can perform accurate pose predictions at 100 Hz based on IMU data and previous camera poses. An improved KLT tracker has been devised that is less sensitive to rotations and exploits predicted camera poses to limit the search region for features in a new image. The resulting system is lightweight enough to run on a Raspberry Pi 3. A UAV has also been constructed, using 3D printing and CNC circuit milling to create the electronics. In addition, control algorithms for the UAV have been created.
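As a rough illustration of the 100 Hz pose prediction step, the sketch below integrates one IMU sample into a pose estimate. It is a minimal first-order integrator, not the thesis's actual implementation; the frame conventions, state layout and gravity constant are assumptions made for the example:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def predict_pose(R, p, v, gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    """One IMU integration step at e.g. 100 Hz (dt = 0.01 s).

    R: body-to-world rotation, p: world position, v: world velocity.
    gyro is the body-frame angular rate, accel the body-frame specific
    force; biases and noise are ignored in this sketch.
    """
    # First-order rotation update (small-angle approximation of exp)
    R_new = R @ (np.eye(3) + skew(gyro * dt))
    # Rotate the measured acceleration to the world frame, remove gravity
    a_world = R @ accel + g
    v_new = v + a_world * dt
    p_new = p + v * dt + 0.5 * a_world * dt**2
    return R_new, p_new, v_new
```

A real predictor would additionally estimate IMU biases and propagate uncertainty, which are omitted here.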
Popular Abstract
Real-time SLAM system using an inertial measurement unit
Cameras are nowadays very powerful, cheap and easily accessible. This has led to many new applications such as virtual reality, autonomous cars and drones. For these applications to work, a computer vision system is needed to convert sequences of 2D images into an understanding of the 3D world and the movement within it. The main focus of this thesis has been to design and implement such a system.
Using 2D images and motion data, the system can calculate the motion of a camera and the structure of its surroundings at metric scale. The resulting system is very lightweight and can run on a mobile platform such as a Raspberry Pi with processing power to spare, while providing accurate pose estimates at a rate of 100 Hz.
This thesis addresses the problem of making the system lightweight enough to run in real time on a low-power device. It also addresses the scale and orientation ambiguity inherent in computer vision systems by incorporating data from a motion sensor. In addition, algorithms that use the motion data to make the system more robust were created.
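One way motion data can make tracking more robust, in the spirit of the predicted-pose search regions mentioned in the abstract, is to project an already-mapped 3D point with the IMU-predicted camera pose and search for the feature only in a small window around that projection. The function below is a hypothetical sketch; the pinhole camera model and the fixed window radius are assumptions for the example:

```python
import numpy as np

def predicted_search_window(X, R_pred, t_pred, K, radius=15.0):
    """Centre a small tracker search window on the projection of a 3D
    point X under the IMU-predicted camera pose (R_pred, t_pred).

    K is the 3x3 pinhole intrinsic matrix. Returns the window as
    (x_min, y_min, x_max, y_max) in pixel coordinates.
    """
    x_cam = R_pred @ X + t_pred        # world frame -> camera frame
    u = K @ (x_cam / x_cam[2])         # pinhole projection, homogeneous
    cx, cy = u[0], u[1]
    return (cx - radius, cy - radius, cx + radius, cy + radius)
```

Restricting the tracker to such a window both cuts computation and rejects spurious matches far from where the feature can plausibly appear.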
Computer vision is needed in virtual reality applications. Here knowledge of the motion of the camera relative to structure in the environment is needed to render the augmented objects correctly. Computer vision is also needed in autonomous systems such as self-driving cars or self-flying drones, where a notion of position and self-motion relative to the surrounding is necessary to be able to navigate. Since the resulting system is capable of calculating camera motion and structure of the environment, it could be used for these purposes.
The system works by tracking distinct points across consecutive images. Based on how these points move between images, the camera motion and the 3D locations of the points can be calculated. Each time a camera pose is calculated it is combined with accelerometer and gyroscope data. This combination provides a more accurate estimate and also allows the system to estimate the scale in metres. Without the accelerometer data, the scale of the motion and structure would be unknown. As the environment is explored, sensor noise causes the pose estimate to accumulate error. To reduce the rate of error accumulation, an adjustment algorithm is run at regular intervals: it considers several images at once and finds the estimates that best agree with all of them.
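The scale recovery described above can be illustrated with a least-squares sketch: the second differences of the unit-scale visual trajectory are proportional to the metric acceleration the IMU measures, and the proportionality constant is the metric scale. The function below assumes gravity-compensated accelerations already expressed in the world frame and time-aligned with the visual poses, which is a simplification of what a real sensor-fusion pipeline must do:

```python
import numpy as np

def estimate_scale(vis_positions, imu_accels, dt):
    """Least-squares metric scale factor s such that
    s * (visual acceleration) ~= (IMU-measured metric acceleration).

    vis_positions: (N, 3) camera positions at an arbitrary visual scale.
    imu_accels:    (N, 3) gravity-compensated world-frame accelerations.
    """
    # Visual acceleration via central second differences
    a_vis = (vis_positions[2:] - 2.0 * vis_positions[1:-1]
             + vis_positions[:-2]) / dt**2
    a_imu = imu_accels[1:-1]
    # Closed-form 1-D least squares: s = <a_vis, a_imu> / <a_vis, a_vis>
    return np.sum(a_vis * a_imu) / np.sum(a_vis * a_vis)
```

In practice the scale would be estimated jointly with biases and gravity direction inside the fusion filter, but the least-squares view captures why the accelerometer resolves the ambiguity: it is the only sensor reporting motion in metric units.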
author: Persson, Patrik (LU)
course: FMAM05 20172
year: 2018
type: H2 - Master's Degree (Two Years)
keywords: Computer vision, SLAM, sensor fusion, autonomous drones
publication/series: Master’s Theses in Mathematical Sciences
report number: LUTFMA-3342-2018
ISSN: 1404-6342
other publication id: 2018:E11
language: English
id: 8942150
date added to LUP: 2018-05-29 18:19:07
date last changed: 2018-05-29 18:19:07
@misc{8942150,
  abstract     = {In this work a simultaneous localisation and mapping system, SLAM, has been constructed which can incorporate inertial measurement unit, IMU, data such as accelerometer and gyroscope data to estimate the metric scale and the relationship to the gravity vector of the SLAM solution. The system can perform accurate pose predictions at 100 Hz based on IMU data and previous camera poses. An improved KLT-tracker has been devised that is less sensitive to rotations and which exploits predicted camera poses to limit the search region of features in a new image. The resulting system is lightweight enough to be able to be run on a Raspberry Pi 3. A UAV has also been constructed using 3D printing and CNC circuit milling to create the electronics. In addition to this, control algorithms for the UAV have been created.},
  author       = {Persson, Patrik},
  issn         = {1404-6342},
  keyword      = {Computer vision,SLAM,sensor fusion,autonomous drones},
  language     = {eng},
  note         = {Student Paper},
  series       = {Master’s Theses in Mathematical Sciences},
  title        = {Real-Time Monocular SLAM System Using an Inertial Measurement Unit to Estimate the Metric Scale and Gravity Direction},
  year         = {2018},
}