
Lund University Publications


Tracking in action space

Herzog, Dennis L. and Krüger, Volker (2012). 11th European Conference on Computer Vision, ECCV 2010. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 6553, pp. 100-113
Abstract

The recognition of human actions such as pointing at objects ("Give me that...") is difficult because they ought to be recognized independent of scene parameters such as viewing direction. Furthermore, the parameters of the action, such as pointing direction, are important pieces of information. One common way to achieve recognition is by using 3D human body tracking followed by action recognition based on the captured tracking data. General 3D body tracking is, however, still a difficult problem. In this paper, we are looking at human body tracking for action recognition from a context-driven perspective. Instead of the space of human body poses, we consider the space of possible actions of a given context and argue that 3D body tracking reduces to action tracking in the parameter space in which the actions live. This reduces the high-dimensional problem to a low-dimensional one. In our approach, we use parametric hidden Markov models to represent parametric movements; particle filtering is used to track in the space of action parameters. Our approach is content with monocular video data and we demonstrate its effectiveness on synthetic and on real image sequences. In the experiments we focus on human arm movements.
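
To make the approach described in the abstract concrete, the following is a minimal illustrative sketch (in Python), not the authors' implementation: a particle filter tracks a low-dimensional action parameter (here a 2D pointing target), and a toy parametric movement model stands in for the paper's parametric hidden Markov models. All names, dimensions, and the movement model itself are assumptions made for illustration.

# Sketch: particle filtering in a low-dimensional action-parameter space,
# rather than in the high-dimensional space of body poses.
import numpy as np

rng = np.random.default_rng(0)

def expected_pose(phi, t):
    """Toy parametric movement model (stand-in for a parametric HMM):
    the wrist moves from a rest position toward the pointing target phi
    as the action phase t goes from 0 to 1."""
    rest = np.array([0.3, -0.5])           # rest wrist position (arbitrary units)
    return (1.0 - t) * rest + t * phi      # linear blend toward the target

def likelihood(obs, phi, t, sigma=0.05):
    """Gaussian observation likelihood around the pose predicted by phi."""
    d = obs - expected_pose(phi, t)
    return np.exp(-0.5 * np.dot(d, d) / sigma**2)

# Ground-truth action parameter (pointing target) used to synthesize observations.
phi_true = np.array([0.8, 0.4])

# Particle filter over the 2D action-parameter space.
N = 500
particles = rng.uniform(-1.0, 1.0, size=(N, 2))   # initial hypotheses for phi
weights = np.full(N, 1.0 / N)

for t in np.linspace(0.1, 1.0, 10):               # action phase over time
    # Noisy synthetic "observation" of the wrist position at phase t.
    obs = expected_pose(phi_true, t) + rng.normal(0, 0.03, size=2)

    # Weight each particle by how well its predicted pose explains the observation.
    weights *= np.array([likelihood(obs, p, t) for p in particles])
    weights += 1e-300
    weights /= weights.sum()

    # Resample and add a little diffusion (the action parameter is nearly static).
    idx = rng.choice(N, size=N, p=weights)
    particles = particles[idx] + rng.normal(0, 0.01, size=(N, 2))
    weights = np.full(N, 1.0 / N)

estimate = particles.mean(axis=0)
print("true pointing target:", phi_true, "estimate:", np.round(estimate, 3))

The point of the sketch is the dimensionality argument: the filter state is the 2D action parameter, while the body pose enters only through the parametric movement model used to evaluate the observation likelihood.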

author
Herzog, Dennis L. and Krüger, Volker
publishing date
2012
type
Chapter in Book/Report/Conference proceeding
publication status
published
host publication
Trends and Topics in Computer Vision. ECCV 2010.
series title
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
editor
Kutulakos, K. N.
volume
6553
edition
PART 1
pages
14 pages
conference name
11th European Conference on Computer Vision, ECCV 2010
conference location
Heraklion, Crete, Greece
conference dates
2010-09-10 - 2010-09-11
external identifiers
  • scopus:84871172684
ISSN
0302-9743
ISBN
978-3-642-35749-7
978-3-642-35748-0
DOI
10.1007/978-3-642-35749-7_8
language
English
LU publication?
no
id
91c8ee22-5191-414a-b7fa-ab61a344d4a5
date added to LUP
2019-06-28 09:20:48
date last changed
2024-01-01 13:56:39
@inbook{91c8ee22-5191-414a-b7fa-ab61a344d4a5,
  abstract     = {{The recognition of human actions such as pointing at objects ("Give me that...") is difficult because they ought to be recognized independent of scene parameters such as viewing direction. Furthermore, the parameters of the action, such as pointing direction, are important pieces of information. One common way to achieve recognition is by using 3D human body tracking followed by action recognition based on the captured tracking data. General 3D body tracking is, however, still a difficult problem. In this paper, we are looking at human body tracking for action recognition from a context-driven perspective. Instead of the space of human body poses, we consider the space of possible actions of a given context and argue that 3D body tracking reduces to action tracking in the parameter space in which the actions live. This reduces the high-dimensional problem to a low-dimensional one. In our approach, we use parametric hidden Markov models to represent parametric movements; particle filtering is used to track in the space of action parameters. Our approach is content with monocular video data and we demonstrate its effectiveness on synthetic and on real image sequences. In the experiments we focus on human arm movements.}},
  author       = {{Herzog, Dennis L. and Krüger, Volker}},
  booktitle    = {{Trends and Topics in Computer Vision. ECCV 2010.}},
  editor       = {{Kutulakos, K. N.}},
  isbn         = {{978-3-642-35749-7}},
  issn         = {{0302-9743}},
  language     = {{eng}},
  pages        = {{100--113}},
  series       = {{Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}},
  title        = {{Tracking in action space}},
  url          = {{http://dx.doi.org/10.1007/978-3-642-35749-7_8}},
  doi          = {{10.1007/978-3-642-35749-7_8}},
  volume       = {{6553}},
  year         = {{2012}},
}