Lund University Publications

Second-order constrained parametric proposals and sequential search-based structured prediction for semantic segmentation in RGB-D images

Banica, Dan and Sminchisescu, Cristian (2015). In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, pp. 3517-3526.
Abstract

We focus on the problem of semantic segmentation based on RGB-D data, with emphasis on analyzing cluttered indoor scenes containing many visual categories and instances. Our approach is based on a parametric figure-ground, intensity- and depth-constrained proposal process that generates spatial layout hypotheses at multiple locations and scales in the image, followed by a sequential inference algorithm that produces a complete scene estimate. Our contributions can be summarized as follows: (1) a generalization of the parametric max flow figure-ground proposal methodology to take advantage of intensity and depth information, in order to systematically and efficiently generate the breakpoints of an underlying spatial model in polynomial time; (2) new region description methods based on second-order pooling over multiple features constructed using both intensity and depth channels; (3) a principled search-based structured prediction inference and learning process that resolves conflicts in overlapping spatial partitions and selects regions sequentially towards complete scene estimates; and (4) an extensive evaluation of the impact of depth, as well as of the effectiveness of a large number of descriptors, both pre-designed and automatically obtained using deep learning, in a difficult RGB-D semantic segmentation problem with 92 classes. We report state-of-the-art results on the challenging NYU Depth Dataset V2 [44], extended for the RMRC 2013 and RMRC 2014 Indoor Segmentation Challenges, where the proposed model currently ranks first. Moreover, we show that by combining second-order and deep learning features, over 15% relative accuracy improvements can additionally be achieved. On a scene classification benchmark, our methodology further improves the state of the art by 24%.
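The abstract does not spell out the pooling step; as an illustrative sketch only (not the authors' code), one common variant of second-order pooling represents a region by the average outer product of its local descriptors, followed by a signed square-root power normalization and L2 normalization. The function name and the choice of power normalization here are assumptions for illustration; related work also uses a log-Euclidean (matrix logarithm) mapping instead.

```python
import numpy as np

def second_order_pool(features):
    """Second-order pooling sketch.

    features: (n, d) array of local descriptors extracted inside one region.
    Returns a flattened, normalized second-order descriptor of size d*(d+1)/2.
    """
    # Average of outer products over the region's descriptors: (d, d) matrix.
    G = features.T @ features / len(features)
    # Signed square-root (power) normalization, a common stabilizing step.
    G = np.sign(G) * np.sqrt(np.abs(G))
    # The matrix is symmetric, so keep only the upper triangle.
    iu = np.triu_indices(G.shape[0])
    v = G[iu]
    # L2-normalize the final descriptor.
    return v / (np.linalg.norm(v) + 1e-12)
```

For d-dimensional local features this yields a fixed-length region descriptor regardless of region size, which is what makes it usable as input to a linear classifier over proposals.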

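The sequential inference step, which resolves conflicts among overlapping proposals while building a complete scene estimate, can be caricatured as greedy selection. The sketch below is a minimal stand-in, assuming proposals are given as pixel-index sets with precomputed scores and using a fixed overlap threshold; the paper's learned search-based structured prediction procedure is considerably more involved.

```python
def select_regions(proposals, scores, max_overlap=0.2):
    """Greedy sequential region selection sketch.

    proposals: list of sets of pixel indices, one set per figure-ground proposal.
    scores: per-proposal model scores (higher is better).
    max_overlap: maximum fraction of a proposal already covered by selections.
    Returns indices of selected, largely non-conflicting proposals.
    """
    covered = set()
    selected = []
    # Visit proposals from highest to lowest score.
    for i in sorted(range(len(proposals)), key=lambda i: -scores[i]):
        region = proposals[i]
        # Accept the proposal only if it conflicts little with what is chosen.
        if len(region & covered) / len(region) <= max_overlap:
            selected.append(i)
            covered |= region
    return selected
```

For example, with proposals `[{1,2,3,4}, {3,4,5,6}, {7,8}]` and scores `[0.9, 0.8, 0.5]`, the second proposal is rejected because half of it is already covered by the first, so the selection is `[0, 2]`.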
author: Banica, Dan and Sminchisescu, Cristian
organization:
publishing date: 2015
type: Chapter in Book/Report/Conference proceeding
publication status: published
subject:
host publication: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015
article number: 7298974
pages: 3517-3526 (10 pages)
publisher: IEEE Computer Society
conference name: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015
conference location: Boston, United States
conference dates: 2015-06-07 - 2015-06-12
external identifiers: scopus:84959216973
ISBN: 9781467369640
DOI: 10.1109/CVPR.2015.7298974
language: English
LU publication?: yes
id: 14ee5cbd-f538-4609-8cce-4491f15ef696
date added to LUP: 2016-09-23 07:58:13
date last changed: 2022-04-24 17:48:49
@inproceedings{14ee5cbd-f538-4609-8cce-4491f15ef696,
  abstract     = {{<p>We focus on the problem of semantic segmentation based on RGB-D data, with emphasis on analyzing cluttered indoor scenes containing many visual categories and instances. Our approach is based on a parametric figure-ground intensity and depth-constrained proposal process that generates spatial layout hypotheses at multiple locations and scales in the image followed by a sequential inference algorithm that produces a complete scene estimate. Our contributions can be summarized as follows: (1) a generalization of parametric max flow figure-ground proposal methodology to take advantage of intensity and depth information, in order to systematically and efficiently generate the breakpoints of an underlying spatial model in polynomial time, (2) new region description methods based on second-order pooling over multiple features constructed using both intensity and depth channels, (3) a principled search-based structured prediction inference and learning process that resolves conflicts in overlapping spatial partitions and selects regions sequentially towards complete scene estimates, and (4) extensive evaluation of the impact of depth, as well as the effectiveness of a large number of descriptors, both pre-designed and automatically obtained using deep learning, in a difficult RGB-D semantic segmentation problem with 92 classes. We report state of the art results in the challenging NYU Depth Dataset V2 [44], extended for the RMRC 2013 and RMRC 2014 Indoor Segmentation Challenges, where currently the proposed model ranks first. Moreover, we show that by combining second-order and deep learning features, over 15% relative accuracy improvements can be additionally achieved. In a scene classification benchmark, our methodology further improves the state of the art by 24%.</p>}},
  author       = {{Banica, Dan and Sminchisescu, Cristian}},
  booktitle    = {{IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015}},
  isbn         = {{9781467369640}},
  language     = {{eng}},
  month        = {{10}},
  pages        = {{3517--3526}},
  publisher    = {{IEEE Computer Society}},
  title        = {{Second-order constrained parametric proposals and sequential search-based structured prediction for semantic segmentation in RGB-D images}},
  url          = {{http://dx.doi.org/10.1109/CVPR.2015.7298974}},
  doi          = {{10.1109/CVPR.2015.7298974}},
  year         = {{2015}},
}