
Lund University Publications


Outline of a sensory-motor perspective on intrinsically moral agents

Balkenius, Christian; Cañamero, Lola; Pärnamets, Philip; Johansson, Birger; Butz, Martin and Olsson, Andreas (2016) In Adaptive Behavior 24(5). p. 306-319
Abstract
We propose that moral behaviour of artificial agents could (and should) be intrinsically grounded in their own sensory-motor experiences. Such an ability depends critically on seven types of competencies. First, intrinsic morality should be grounded in the internal values of the robot arising from its physiology and embodiment. Second, the moral principles of robots should develop through their interactions with the environment and with other agents. Third, we claim that the dynamics of moral (or social) emotions closely follows that of other non-social emotions used in valuation and decision making. Fourth, we explain how moral emotions can be learned from the observation of others. Fifth, we argue that to assess social interaction, a robot should be able to learn about and understand responsibility and causation. Sixth, we explain how mechanisms that can learn the consequences of actions are necessary for a robot to make moral decisions. Seventh, we describe how the moral evaluation mechanisms outlined can be extended to situations where a robot should understand the goals of others. Finally, we argue that these competencies lay the foundation for robots that can feel guilt, shame and pride, that have compassion and that know how to assign responsibility and blame.
author
Balkenius, Christian; Cañamero, Lola; Pärnamets, Philip; Johansson, Birger; Butz, Martin and Olsson, Andreas
organization
publishing date
2016-11
type
Contribution to journal
publication status
published
subject
keywords
Autonomous robots, embodied emotions, sensory-motor grounding, embodied interaction, empathy, intrinsic morality
in
Adaptive Behavior
volume
24
issue
5
pages
14 pages
publisher
SAGE Publications
external identifiers
  • scopus:84994176861
  • wos:000386958600004
ISSN
1741-2633
DOI
10.1177/1059712316667203
project
Modelling Cognitive Development in Robots
Ikaros: An infrastructure for system level modelling of the brain
language
English
LU publication?
yes
id
87569bc1-74c8-4575-9120-74ae9aa38e6c
date added to LUP
2016-11-08 21:03:36
date last changed
2024-01-19 12:49:08
@article{87569bc1-74c8-4575-9120-74ae9aa38e6c,
  abstract     = {{We propose that moral behaviour of artificial agents could (and should) be intrinsically grounded in their own sensory-motor experiences. Such an ability depends critically on seven types of competencies. First, intrinsic morality should be grounded in the internal values of the robot arising from its physiology and embodiment. Second, the moral principles of robots should develop through their interactions with the environment and with other agents. Third, we claim that the dynamics of moral (or social) emotions closely follows that of other non-social emotions used in valuation and decision making. Fourth, we explain how moral emotions can be learned from the observation of others. Fifth, we argue that to assess social interaction, a robot should be able to learn about and understand responsibility and causation. Sixth, we explain how mechanisms that can learn the consequences of actions are necessary for a robot to make moral decisions. Seventh, we describe how the moral evaluation mechanisms outlined can be extended to situations where a robot should understand the goals of others. Finally, we argue that these competencies lay the foundation for robots that can feel guilt, shame and pride, that have compassion and that know how to assign responsibility and blame}},
  author       = {{Balkenius, Christian and Cañamero, Lola and Pärnamets, Philip and Johansson, Birger and Butz, Martin and Olsson, Andreas}},
  issn         = {{1741-2633}},
  keywords     = {{Autonomous robots; embodied emotions; sensory-motor grounding; embodied interaction; empathy; intrinsic morality}},
  language     = {{eng}},
  month        = {{11}},
  number       = {{5}},
  pages        = {{306--319}},
  publisher    = {{SAGE Publications}},
  series       = {{Adaptive Behavior}},
  title        = {{Outline of a sensory-motor perspective on intrinsically moral agents}},
  url          = {{http://dx.doi.org/10.1177/1059712316667203}},
  doi          = {{10.1177/1059712316667203}},
  volume       = {{24}},
  year         = {{2016}},
}