
Lund University Publications


Embodied Visual Active Learning for Semantic Segmentation

Nilsson, David; Pirinen, Aleksis; Gärtner, Erik and Sminchisescu, Cristian (2021) 35th AAAI Conference on Artificial Intelligence, AAAI 2021, p. 2373-2383
Abstract

We study the task of embodied visual active learning, where an agent is set to explore a 3d environment with the goal to acquire visual scene understanding by actively selecting views for which to request annotation. While accurate on some benchmarks, today's deep visual recognition pipelines tend to not generalize well in certain real-world scenarios, or for unusual viewpoints. Robotic perception, in turn, requires the capability to refine the recognition capabilities for the conditions where the mobile system operates, including cluttered indoor environments or poor illumination. This motivates the proposed task, where an agent is placed in a novel environment with the objective of improving its visual recognition capability. To study embodied visual active learning, we develop a battery of agents - both learnt and pre-specified - and with different levels of knowledge of the environment. The agents are equipped with a semantic segmentation network and seek to acquire informative views, move and explore in order to propagate annotations in the neighbourhood of those views, then refine the underlying segmentation network by online retraining. The trainable method uses deep reinforcement learning with a reward function that balances two competing objectives: task performance, represented as visual recognition accuracy, which requires exploring the environment, and the necessary amount of annotated data requested during active exploration. We extensively evaluate the proposed models using the photorealistic Matterport3D simulator and show that a fully learnt method outperforms comparable pre-specified counterparts, even when requesting fewer annotations.

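The abstract describes a reward that trades off recognition accuracy against the amount of annotation requested during active exploration. As a purely illustrative sketch (not the reward function used in the paper), such a trade-off could be expressed as the improvement in segmentation accuracy minus a penalty per annotation request; the mIoU inputs and the annotation_cost weight below are hypothetical.

# Illustrative sketch only; the paper's actual reward is not reproduced here.
def reward(miou_after: float, miou_before: float,
           annotation_requested: bool,
           annotation_cost: float = 0.1) -> float:
    """Hypothetical reward: gain in mean IoU minus a fixed penalty per annotation request.

    miou_before / miou_after are segmentation accuracies (mean IoU on held-out
    views of the environment) before and after the agent's action;
    annotation_cost is an assumed weight on the annotation budget.
    """
    accuracy_gain = miou_after - miou_before
    penalty = annotation_cost if annotation_requested else 0.0
    return accuracy_gain - penalty

# Example: requesting a view annotation that improves mIoU from 0.42 to 0.47
print(reward(0.47, 0.42, annotation_requested=True))  # approx. 0.05 - 0.1 = -0.05

A formulation along these lines makes the two competing objectives explicit: the agent is only rewarded for requesting annotation when the resulting accuracy gain outweighs the assumed per-request cost.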
Please use this URL to cite or link to this publication:
author
Nilsson, David ; Pirinen, Aleksis ; Gärtner, Erik and Sminchisescu, Cristian
organization
publishing date
2021
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
host publication
Proceedings of the AAAI Conference on Artificial Intelligence
pages
11 pages
publisher
The Association for the Advancement of Artificial Intelligence
conference name
35th AAAI Conference on Artificial Intelligence, AAAI 2021
conference location
Virtual, Online
conference dates
2021-02-02 - 2021-02-09
external identifiers
  • scopus:85118705340
ISBN
9781713835974
DOI
10.1609/aaai.v35i3.16338
language
English
LU publication?
yes
id
f92d6fb2-e437-4eb8-838f-dd6d947e1ce5
date added to LUP
2022-05-06 10:47:46
date last changed
2022-12-06 18:57:44
@inproceedings{f92d6fb2-e437-4eb8-838f-dd6d947e1ce5,
  abstract     = {{We study the task of embodied visual active learning, where an agent is set to explore a 3d environment with the goal to acquire visual scene understanding by actively selecting views for which to request annotation. While accurate on some benchmarks, today's deep visual recognition pipelines tend to not generalize well in certain real-world scenarios, or for unusual viewpoints. Robotic perception, in turn, requires the capability to refine the recognition capabilities for the conditions where the mobile system operates, including cluttered indoor environments or poor illumination. This motivates the proposed task, where an agent is placed in a novel environment with the objective of improving its visual recognition capability. To study embodied visual active learning, we develop a battery of agents - both learnt and pre-specified - and with different levels of knowledge of the environment. The agents are equipped with a semantic segmentation network and seek to acquire informative views, move and explore in order to propagate annotations in the neighbourhood of those views, then refine the underlying segmentation network by online retraining. The trainable method uses deep reinforcement learning with a reward function that balances two competing objectives: task performance, represented as visual recognition accuracy, which requires exploring the environment, and the necessary amount of annotated data requested during active exploration. We extensively evaluate the proposed models using the photorealistic Matterport3D simulator and show that a fully learnt method outperforms comparable pre-specified counterparts, even when requesting fewer annotations.}},
  author       = {{Nilsson, David and Pirinen, Aleksis and Gärtner, Erik and Sminchisescu, Cristian}},
  booktitle    = {{Proceedings of the AAAI Conference on Artificial Intelligence}},
  isbn         = {{9781713835974}},
  language     = {{eng}},
  pages        = {{2373--2383}},
  publisher    = {{The Association for the Advancement of Artificial Intelligence}},
  title        = {{Embodied Visual Active Learning for Semantic Segmentation}},
  url          = {{http://dx.doi.org/10.1609/aaai.v35i3.16338}},
  doi          = {{10.1609/aaai.v35i3.16338}},
  year         = {{2021}},
}