
Lund University Publications


Matrix backpropagation for deep networks with structured layers

Ionescu, Catalin; Vantzos, Orestis and Sminchisescu, Cristian (LU) (2016) 15th IEEE International Conference on Computer Vision, ICCV 2015, 11-18-December-2015. p. 2965-2973
Abstract

Deep neural network architectures have recently produced excellent results in a variety of areas in artificial intelligence and visual recognition, well surpassing traditional shallow architectures trained using hand-designed features. The power of deep networks stems both from their ability to perform local computations followed by pointwise non-linearities over increasingly larger receptive fields, and from the simplicity and scalability of the gradient-descent training procedure based on backpropagation. An open problem is the inclusion of layers that perform global, structured matrix computations like segmentation (e.g. normalized cuts) or higher-order pooling (e.g. log-tangent space metrics defined over the manifold of symmetric positive definite matrices) while preserving the validity and efficiency of an end-to-end deep training framework. In this paper we propose a sound mathematical apparatus to formally integrate global structured computation into deep computation architectures. At the heart of our methodology is the development of the theory and practice of backpropagation that generalizes to the calculus of adjoint matrix variations. We perform segmentation experiments using the BSDS and MSCOCO benchmarks and demonstrate that deep networks relying on second-order pooling and normalized cuts layers, trained end-to-end using matrix backpropagation, outperform counterparts that do not take advantage of such global layers.
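As an illustrative aside (not part of this record, and not the authors' code): the second-order pooling layer the abstract mentions maps a feature matrix F to the matrix logarithm of its covariance, computed through a symmetric eigendecomposition, and matrix backpropagation is what lets gradients flow through that decomposition. A minimal sketch below uses PyTorch's differentiable torch.linalg.eigh to play the same role via autograd; the function name and toy shapes are hypothetical.

```python
# Hedged sketch of a log-Euclidean second-order pooling layer, in the
# spirit of the paper's matrix backpropagation. Illustration only, via
# PyTorch autograd, not the authors' implementation.
import torch

def second_order_pool(F, eps=1e-5):
    """Map an (n x d) feature matrix F to log(F^T F / n + eps * I).

    The matrix log of an SPD matrix A = U diag(lam) U^T is
    U diag(log lam) U^T. torch.linalg.eigh is differentiable, so
    autograd propagates gradients through the spectral decomposition;
    the paper instead derives these gradients in closed form via the
    calculus of adjoint matrix variations.
    """
    n, d = F.shape
    A = F.T @ F / n + eps * torch.eye(d, dtype=F.dtype)  # SPD covariance
    lam, U = torch.linalg.eigh(A)                        # symmetric eigendecomposition
    return U @ torch.diag(torch.log(lam)) @ U.T

# Toy usage: a scalar loss backpropagates through the spectral layer
# to the input features, exactly the flow a structured layer needs.
F = torch.randn(32, 8, dtype=torch.float64, requires_grad=True)
loss = second_order_pool(F).pow(2).sum()
loss.backward()
print(F.grad.shape)  # torch.Size([32, 8])
```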

author
Ionescu, Catalin; Vantzos, Orestis and Sminchisescu, Cristian
organization
publishing date
2016-02
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
host publication
Proceedings - 2015 IEEE International Conference on Computer Vision, ICCV 2015
volume
11-18-December-2015
article number
7410696
pages
9 pages
publisher
IEEE - Institute of Electrical and Electronics Engineers Inc.
conference name
15th IEEE International Conference on Computer Vision, ICCV 2015
conference location
Santiago, Chile
conference dates
2015-12-11 - 2015-12-18
external identifiers
  • wos:000380414100331
  • scopus:84973922889
ISBN
9781467383912
DOI
10.1109/ICCV.2015.339
language
English
LU publication?
yes
id
6cd0105e-ea6b-4c84-9026-e45487668188
date added to LUP
2017-02-13 14:21:31
date last changed
2024-04-29 04:54:24
@inproceedings{6cd0105e-ea6b-4c84-9026-e45487668188,
  abstract     = {{<p>Deep neural network architectures have recently produced excellent results in a variety of areas in artificial intelligence and visual recognition, well surpassing traditional shallow architectures trained using hand-designed features. The power of deep networks stems both from their ability to perform local computations followed by pointwise non-linearities over increasingly larger receptive fields, and from the simplicity and scalability of the gradient-descent training procedure based on backpropagation. An open problem is the inclusion of layers that perform global, structured matrix computations like segmentation (e.g. normalized cuts) or higher-order pooling (e.g. log-tangent space metrics defined over the manifold of symmetric positive definite matrices) while preserving the validity and efficiency of an end-to-end deep training framework. In this paper we propose a sound mathematical apparatus to formally integrate global structured computation into deep computation architectures. At the heart of our methodology is the development of the theory and practice of backpropagation that generalizes to the calculus of adjoint matrix variations. We perform segmentation experiments using the BSDS and MSCOCO benchmarks and demonstrate that deep networks relying on second-order pooling and normalized cuts layers, trained end-to-end using matrix backpropagation, outperform counterparts that do not take advantage of such global layers.</p>}},
  author       = {{Ionescu, Catalin and Vantzos, Orestis and Sminchisescu, Cristian}},
  booktitle    = {{Proceedings - 2015 IEEE International Conference on Computer Vision, ICCV 2015}},
  isbn         = {{9781467383912}},
  language     = {{eng}},
  month        = {{02}},
  pages        = {{2965--2973}},
  publisher    = {{IEEE - Institute of Electrical and Electronics Engineers Inc.}},
  title        = {{Matrix backpropagation for deep networks with structured layers}},
  url          = {{http://dx.doi.org/10.1109/ICCV.2015.339}},
  doi          = {{10.1109/ICCV.2015.339}},
  volume       = {{11-18-December-2015}},
  year         = {{2016}},
}