Lund University Publications

LUND UNIVERSITY LIBRARIES

Three and Two Dimensions Data Fusion Based Panoramic Environment Perception for Space Modeling

Zhang, Zilin; Hong, Haodong and Wang, Xian (2020) 12th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics, CISP-BMEI 2019
Abstract

This paper presents a systematic approach to 3-dimensional (3D) scene modeling based on Time-of-Flight (TOF) technology. First, a panoramic point cloud of the scene is built: point cloud data captured from different viewing angles are transformed and registered into a panoramic 3D coordinate system and, based on the matching characteristics of feature points across the individual clouds, merged into a fused point cloud. The fused cloud is then divided into several groups of point cloud fragments according to a depth scaling parameter. Second, the 2-dimensional (2D) grayscale images that correspond to the original point cloud data are collected and processed to generate a panoramic 2D image: using the Speeded-Up Robust Features (SURF) method, similar feature pixels in the different images are selected and used to fuse the panoramic image, and a novel 2D fusion image optimization method based on pixel weighting is proposed. Finally, the point cloud fragments segmented in the first step are classified by their depth scaling values; each selected fragment is projected into 2D pixel space and converted into a mask pattern, so that objects of interest in the environment can be perceived by applying the corresponding mask patterns to the panoramic image. The methodology of this environment perception approach can serve as a reference for TOF-based 3D obstacle identification research.
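
The 3D stage described in the abstract can be illustrated with a short Python sketch. The following is not the authors' implementation: it assumes Open3D's registration API (version 0.10 or later), hypothetical input files view_0.pcd to view_3.pcd, point-to-point ICP as a stand-in for the feature-based registration described in the paper, and an assumed 0.5 m band width for the depth scaling parameter.

import numpy as np
import open3d as o3d

# Hypothetical per-view TOF captures; file names are placeholders.
views = [o3d.io.read_point_cloud(f"view_{i}.pcd") for i in range(4)]

# Register each additional view against the growing panoramic cloud with
# point-to-point ICP, then append it to form the fused point cloud.
fused = views[0]
for src in views[1:]:
    icp = o3d.pipelines.registration.registration_icp(
        src, fused, 0.05, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    src.transform(icp.transformation)
    fused += src
fused = fused.voxel_down_sample(voxel_size=0.01)  # thin duplicated points in overlaps

# Split the fused cloud into fragments by depth: fixed-width bands along z
# stand in for the paper's depth scaling parameter.
pts = np.asarray(fused.points)
depth_step = 0.5                                   # metres per band (assumed value)
band = np.floor(pts[:, 2] / depth_step).astype(int)
fragments = {b: pts[band == b] for b in np.unique(band)}
print({b: len(p) for b, p in fragments.items()})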

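The 2D stage and the final masking step can likewise be sketched as follows. This is an illustrative example only: it assumes OpenCV with the contrib/non-free modules enabled for SURF (ORB is a free drop-in substitute), two hypothetical grayscale views view_0.png and view_1.png, an equal-weight average in the overlap as a simple stand-in for the paper's pixel-weighted optimization, and a placeholder rectangle in place of a mask projected from a real depth fragment.

import cv2
import numpy as np

# Hypothetical grayscale images matching the point cloud views.
img1 = cv2.imread("view_0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_1.png", cv2.IMREAD_GRAYSCALE)
h1, w1 = img1.shape
h2, w2 = img2.shape

# Detect SURF keypoints and descriptors in both views.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# Match descriptors and keep matches that pass Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des2, des1, k=2) if m.distance < 0.7 * n.distance]

# Homography mapping img2 into img1's pixel frame, estimated with RANSAC.
src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp img2 onto a wide canvas and blend: each output pixel is a weighted
# average of the sources covering it (equal weights used here for brevity).
canvas = np.zeros((max(h1, h2), w1 + w2), dtype=np.float32)
canvas[:h1, :w1] = img1
warped = cv2.warpPerspective(img2, H, (w1 + w2, max(h1, h2))).astype(np.float32)
wa, wb = (canvas > 0).astype(np.float32), (warped > 0).astype(np.float32)
panorama = ((canvas * wa + warped * wb) / np.maximum(wa + wb, 1)).astype(np.uint8)

# Perceive one object of interest by applying a fragment mask to the panorama
# (a placeholder rectangle here; in the paper the mask comes from projecting
# a depth-banded point cloud fragment into pixel space).
fragment_mask = np.zeros_like(panorama)
fragment_mask[100:300, 200:500] = 1
object_of_interest = panorama * fragment_mask
cv2.imwrite("panorama.png", panorama)
cv2.imwrite("object.png", object_of_interest)

Distance-based or feathered weights in the overlap would reduce visible seams; the equal-weight average is kept only to keep the sketch short.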
Please use this url to cite or link to this publication:
author
Zhang, Zilin; Hong, Haodong and Wang, Xian
publishing date
2020
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
keywords
Depth scaling parameter, Panoramic mosaic optimization, Point cloud registration, SURF algorithm
host publication
Proceedings - 2019 12th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics, CISP-BMEI 2019
editor
Li, Qingli and Wang, Lipo
article number
8965675
publisher
IEEE - Institute of Electrical and Electronics Engineers Inc.
conference name
12th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics, CISP-BMEI 2019
conference location
Huaqiao, China
conference dates
2019-10-19 - 2019-10-21
external identifiers
  • scopus:85079185517
ISBN
978-1-7281-4853-3
978-1-7281-4852-6
DOI
10.1109/CISP-BMEI48845.2019.8965675
language
English
LU publication?
no
id
222b9734-3660-4756-9ff9-01a5b062e6cc
date added to LUP
2020-02-26 10:04:28
date last changed
2024-01-16 22:13:43
@inproceedings{222b9734-3660-4756-9ff9-01a5b062e6cc,
  abstract     = {{<p>This paper presents a systematic approach to 3-dimensional (3D) scene modeling based on Time-of-Flight (TOF) technology. First, a panoramic point cloud of the scene is built: point cloud data captured from different viewing angles are transformed and registered into a panoramic 3D coordinate system and, based on the matching characteristics of feature points across the individual clouds, merged into a fused point cloud. The fused cloud is then divided into several groups of point cloud fragments according to a depth scaling parameter. Second, the 2-dimensional (2D) grayscale images that correspond to the original point cloud data are collected and processed to generate a panoramic 2D image: using the Speeded-Up Robust Features (SURF) method, similar feature pixels in the different images are selected and used to fuse the panoramic image, and a novel 2D fusion image optimization method based on pixel weighting is proposed. Finally, the point cloud fragments segmented in the first step are classified by their depth scaling values; each selected fragment is projected into 2D pixel space and converted into a mask pattern, so that objects of interest in the environment can be perceived by applying the corresponding mask patterns to the panoramic image. The methodology of this environment perception approach can serve as a reference for TOF-based 3D obstacle identification research.</p>}},
  author       = {{Zhang, Zilin and Hong, Haodong and Wang, Xian}},
  booktitle    = {{Proceedings - 2019 12th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics, CISP-BMEI 2019}},
  editor       = {{Li, Qingli and Wang, Lipo}},
  isbn         = {{978-1-7281-4853-3}},
  keywords     = {{Depth scaling parameter; Panoramic mosaic optimization; Point cloud registration; SURF algorithm}},
  language     = {{eng}},
  month        = {{01}},
  publisher    = {{IEEE - Institute of Electrical and Electronics Engineers Inc.}},
  title        = {{Three and Two Dimensions Data Fusion Based Panoramic Environment Perception for Space Modeling}},
  url          = {{http://dx.doi.org/10.1109/CISP-BMEI48845.2019.8965675}},
  doi          = {{10.1109/CISP-BMEI48845.2019.8965675}},
  year         = {{2020}},
}