
Lund University Publications


Visual Odometry for Indoor Mobile Robot by Recognizing Local Manhattan Structures

Hou, Zhixing; Ding, Yaqing; Wang, Ying; Yang, Hang and Kong, Hui (2019) 14th Asian Conference on Computer Vision (ACCV 2018). In Lecture Notes in Computer Science 11365.
Abstract
In this paper, we propose a novel 3-DOF visual odometry method to estimate the location and pose (yaw) of a mobile robot navigating indoors. In particular, we target corridor-like scenarios where the RGB-D camera mounted on the robot captures apparent planar structures such as the floor or walls. The novelty of our method is two-fold. First, to fully exploit the planar structures for odometry estimation, we propose a fast plane segmentation scheme based on efficiently extracted inverse-depth induced histograms. This training-free scheme extracts dominant planar structures using only the depth image of the RGB-D camera. Second, we regard the global indoor scene as a composition of local Manhattan-like structures. At any specific location, we recognize at least one local Manhattan coordinate frame based on the detected planar structures. Pose estimation is realized by aligning the camera coordinate frame to one dominant local Manhattan coordinate frame. Given the pose, location estimation is carried out by a combination of a one-point RANSAC method and the ICP algorithm, depending on the number of point matches available. We evaluate our method extensively on real-world data, and the experimental results show promising performance in terms of accuracy and robustness.
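The plane-segmentation idea rests on a geometric fact worth making explicit: for a plane aX + bY + cZ = d seen through a pinhole camera with intrinsics (fx, fy, cx, cy), inverse depth satisfies 1/Z = (a(u - cx)/fx + b(v - cy)/fy + c)/d, i.e. it is affine in the pixel coordinates (u, v). Pixels on an axis-aligned plane (a frontal wall, the floor) therefore pile up in sharp peaks of inverse-depth histograms taken along image rows or columns. Below is a minimal NumPy sketch of that idea; the per-row simplification, the function name, and the binning scheme are illustrative assumptions, not the paper's exact procedure.

import numpy as np

def dominant_plane_mask_per_row(depth, n_bins=64, min_frac=0.3):
    # Sketch of histogram-based plane extraction from a depth image alone:
    # along each image row, inverse depth is nearly constant for a plane
    # whose normal has no component along the camera x-axis (floor,
    # ceiling, frontal wall), so the per-row inverse-depth histogram
    # shows a sharp peak. Pixels in the dominant bin are kept as
    # plane candidates.
    inv = np.where(depth > 0, 1.0 / np.maximum(depth, 1e-6), 0.0)
    mask = np.zeros(depth.shape, dtype=bool)
    if not np.any(inv > 0):
        return mask
    lo, hi = inv[inv > 0].min(), inv[inv > 0].max()
    edges = np.linspace(lo, max(hi, lo + 1e-9), n_bins + 1)
    for r in range(depth.shape[0]):
        row, valid = inv[r], inv[r] > 0
        if not valid.any():
            continue
        hist, _ = np.histogram(row[valid], bins=edges)
        k = int(hist.argmax())
        # Accept the row only if the peak is dominant enough.
        if hist[k] >= min_frac * valid.sum():
            mask[r] = valid & (row >= edges[k]) & (row < edges[k + 1])
    return mask

Connected components of the resulting mask, fitted with least-squares planes, would then supply the plane hypotheses that the Manhattan-frame recognition step consumes.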

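Two further steps in the abstract reduce to short computations. Aligning the camera frame with a recognized local Manhattan frame fixes the yaw (up to the 90° ambiguity inherent to a Manhattan frame), and once the rotation R is known, a single 3D point match (p1, p2) already determines a translation hypothesis t = p2 - R p1, which is what makes a one-point RANSAC possible. A minimal NumPy sketch under those assumptions follows; the function names, the planar-motion simplification, and the inlier threshold are illustrative, not the authors' implementation.

import numpy as np

def yaw_from_wall_normal(n):
    # Heading of a detected wall normal in the ground (x-z) plane.
    # The camera's yaw relative to the local Manhattan frame is this
    # angle, up to the 90-degree ambiguity of a Manhattan frame.
    return np.arctan2(n[2], n[0])

def yaw_rotation(theta):
    # 3x3 rotation about the vertical (y) axis: the only rotational
    # degree of freedom in the 3-DOF planar-motion model.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def one_point_ransac_translation(P1, P2, R, iters=100, thresh=0.02, seed=None):
    # Given N x 3 matched 3D points P1 -> P2 and a known rotation R,
    # each single match yields a hypothesis t = p2 - R @ p1. Keep the
    # hypothesis with the most inliers, then refit on the inlier set.
    rng = np.random.default_rng(seed)
    pred = P1 @ R.T                      # R @ p1 for every match
    best_t, best_in = None, None
    for _ in range(iters):
        i = rng.integers(len(P1))
        t = P2[i] - pred[i]              # one-point hypothesis
        err = np.linalg.norm(P2 - (pred + t), axis=1)
        inliers = err < thresh
        if best_in is None or inliers.sum() > best_in.sum():
            best_t, best_in = t, inliers
    # Least-squares refit of t over the inlier set.
    best_t = (P2[best_in] - pred[best_in]).mean(axis=0)
    return best_t, best_in

When too few point matches survive, the abstract says the method falls back on ICP over the depth data; any standard point-to-point ICP could take the place of one_point_ransac_translation in that regime.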
author
Hou, Zhixing; Ding, Yaqing; Wang, Ying; Yang, Hang and Kong, Hui
publishing date
2019-05
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
host publication
Computer Vision – ACCV 2018: 14th Asian Conference on Computer Vision, Perth, Australia, December 2–6, 2018, Revised Selected Papers, Part V
series title
Lecture Notes in Computer Science
volume
11365
publisher
Springer
conference name
14th Asian Conference on Computer Vision (ACCV 2018)
conference location
Perth, Australia
conference dates
2018-12-02 - 2018-12-06
external identifiers
  • scopus:85066812068
ISSN
1611-3349 (electronic)
0302-9743 (print)
ISBN
978-3-030-20872-1 (print)
978-3-030-20873-8 (electronic)
DOI
10.1007/978-3-030-20873-8_11
language
English
LU publication?
no
id
7405262b-e872-47b0-8fec-60810679dd81
date added to LUP
2022-09-09 11:00:26
date last changed
2024-05-02 05:49:02
@inbook{7405262b-e872-47b0-8fec-60810679dd81,
  abstract     = {{In this paper, we propose a novel 3-DOF visual odometry method to estimate the location and pose (yaw) of a mobile robot navigating indoors. In particular, we target corridor-like scenarios where the RGB-D camera mounted on the robot captures apparent planar structures such as the floor or walls. The novelty of our method is two-fold. First, to fully exploit the planar structures for odometry estimation, we propose a fast plane segmentation scheme based on efficiently extracted inverse-depth induced histograms. This training-free scheme extracts dominant planar structures using only the depth image of the RGB-D camera. Second, we regard the global indoor scene as a composition of local Manhattan-like structures. At any specific location, we recognize at least one local Manhattan coordinate frame based on the detected planar structures. Pose estimation is realized by aligning the camera coordinate frame to one dominant local Manhattan coordinate frame. Given the pose, location estimation is carried out by a combination of a one-point RANSAC method and the ICP algorithm, depending on the number of point matches available. We evaluate our method extensively on real-world data, and the experimental results show promising performance in terms of accuracy and robustness.}},
  author       = {{Hou, Zhixing and Ding, Yaqing and Wang, Ying and Yang, Hang and Kong, Hui}},
  booktitle    = {{Computer Vision – ACCV 2018 : 14th Asian Conference on Computer Vision, Perth, Australia, December 2–6, 2018, Revised Selected Papers, Part V}},
  isbn         = {{978-3-030-20872-1}},
  issn         = {{1611-3349}},
  language     = {{eng}},
  month        = {{05}},
  publisher    = {{Springer}},
  series       = {{Lecture Notes in Computer Science}},
  title        = {{Visual Odometry for Indoor Mobile Robot by Recognizing Local Manhattan Structures}},
  url          = {{http://dx.doi.org/10.1007/978-3-030-20873-8_11}},
  doi          = {{10.1007/978-3-030-20873-8_11}},
  volume       = {{11365}},
  year         = {{2019}},
}