Lund University Publications

LUND UNIVERSITY LIBRARIES

A model of how depth facilitates scene-relative object motion perception

Layton, Oliver W and Niehorster, D C (2019). In PLoS Computational Biology, 15(11).
Abstract

Many everyday interactions with moving objects benefit from an accurate perception of their movement. Self-motion, however, complicates object motion perception because it generates a global pattern of motion on the observer's retina and radically influences an object's retinal motion. There is strong evidence that the brain compensates by suppressing the retinal motion due to self-motion; however, this requires estimates of depth relative to the object, because otherwise the appropriate self-motion component to remove cannot be determined. The underlying neural mechanisms are unknown, but neurons in brain areas MT and MST may contribute given their sensitivity to motion parallax and depth through joint direction, speed, and disparity tuning. We developed a neural model to investigate whether cells in areas MT and MST with well-established neurophysiological properties can account for human object motion judgments during self-motion. We tested the model by comparing simulated object motion signals to human object motion judgments in environments with monocular, binocular, and ambiguous depth. Our simulations show how precise depth information, such as that from binocular disparity, may improve estimates of the retinal motion pattern due to self-motion through increased selectivity among units that respond to the global self-motion pattern. The enhanced self-motion estimates emerged from recurrent feedback connections in MST and allowed the model to better suppress the appropriate direction, speed, and disparity signals from the object's retinal motion, improving the accuracy of the object's movement direction as represented by motion signals.

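The core idea in the abstract — that removing the self-motion component from an object's retinal motion requires a depth estimate — can be illustrated with a minimal sketch. This is not the authors' MT/MST model; it only uses the standard pinhole-camera flow equations for a translating observer (rotation omitted), with all numeric values chosen for illustration:

```python
import numpy as np

def self_motion_flow(x, y, Z, T, f=1.0):
    """Retinal flow at image point (x, y) produced by pure observer
    translation T = (Tx, Ty, Tz), for a scene point at depth Z.
    Standard pinhole flow equations; the flow scales with 1/Z, which
    is why depth is needed to remove the self-motion component."""
    Tx, Ty, Tz = T
    u = (-f * Tx + x * Tz) / Z
    v = (-f * Ty + y * Tz) / Z
    return np.array([u, v])

# Illustrative setup: forward self-motion, one moving object.
T = (0.0, 0.0, 1.0)                      # observer translation
obj_xy = (0.2, 0.1)                      # object's image position
obj_depth = 4.0                          # true object depth
scene_relative = np.array([0.05, 0.0])   # object's scene-relative retinal motion

# Observed retinal motion mixes both components.
retinal = scene_relative + self_motion_flow(*obj_xy, obj_depth, T)

# With an accurate depth estimate (e.g., from binocular disparity),
# subtracting the self-motion component recovers scene-relative motion.
recovered = retinal - self_motion_flow(*obj_xy, obj_depth, T)
print(np.allclose(recovered, scene_relative))  # True

# With a wrong depth estimate, the residual motion is biased.
biased = retinal - self_motion_flow(*obj_xy, 2.0, T)
```

Because the self-motion flow scales with inverse depth, an incorrect depth estimate subtracts the wrong magnitude of flow, biasing the recovered object direction — consistent with the paper's point that ambiguous depth degrades object motion judgments.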
author: Layton, Oliver W and Niehorster, D C
publishing date: 2019
type: Contribution to journal
publication status: published
in: PLoS Computational Biology
volume: 15
issue: 11
article number: e1007397
publisher: Public Library of Science (PLoS)
external identifiers:
  • pmid:31725723
  • scopus:85075813311
ISSN: 1553-7358
DOI: 10.1371/journal.pcbi.1007397
language: English
LU publication?: yes
id: c4c6cfaa-1189-45c1-85c1-ac87ac160fb3
date added to LUP: 2019-11-22 00:58:35
date last changed: 2024-06-26 06:39:41
@article{c4c6cfaa-1189-45c1-85c1-ac87ac160fb3,
  abstract     = {{<p>Many everyday interactions with moving objects benefit from an accurate perception of their movement. Self-motion, however, complicates object motion perception because it generates a global pattern of motion on the observer's retina and radically influences an object's retinal motion. There is strong evidence that the brain compensates by suppressing the retinal motion due to self-motion; however, this requires estimates of depth relative to the object, because otherwise the appropriate self-motion component to remove cannot be determined. The underlying neural mechanisms are unknown, but neurons in brain areas MT and MST may contribute given their sensitivity to motion parallax and depth through joint direction, speed, and disparity tuning. We developed a neural model to investigate whether cells in areas MT and MST with well-established neurophysiological properties can account for human object motion judgments during self-motion. We tested the model by comparing simulated object motion signals to human object motion judgments in environments with monocular, binocular, and ambiguous depth. Our simulations show how precise depth information, such as that from binocular disparity, may improve estimates of the retinal motion pattern due to self-motion through increased selectivity among units that respond to the global self-motion pattern. The enhanced self-motion estimates emerged from recurrent feedback connections in MST and allowed the model to better suppress the appropriate direction, speed, and disparity signals from the object's retinal motion, improving the accuracy of the object's movement direction as represented by motion signals.</p>}},
  author       = {{Layton, Oliver W and Niehorster, D C}},
  issn         = {{1553-7358}},
  language     = {{eng}},
  month        = {{11}},
  number       = {{11}},
  publisher    = {{Public Library of Science (PLoS)}},
  series       = {{PLoS Computational Biology}},
  title        = {{A model of how depth facilitates scene-relative object motion perception}},
  url          = {{http://dx.doi.org/10.1371/journal.pcbi.1007397}},
  doi          = {{10.1371/journal.pcbi.1007397}},
  volume       = {{15}},
  year         = {{2019}},
}