
Lund University Publications


Moral agency, moral responsibility, and artefacts

Parthemore, Joel and Whitby, Blay (2012) The machine question: AI, ethics and moral responsibility: AISB/IACAP World Congress 2012, pp. 8-17
Abstract
This paper follows directly from our forthcoming paper in International Journal of Machine Consciousness, where we discuss the requirements for an artefact to be a moral agent and conclude that the artefactual question is ultimately a red herring. As we did in the earlier paper, we take moral agency to be that condition in which an agent can, appropriately, be held responsible for her actions and their consequences. We set a number of stringent conditions on moral agency. A moral agent must be embedded in a cultural and specifically moral context, and embodied in a suitable physical form. It must be, in some substantive sense, alive. It must exhibit self-conscious awareness: who does the “I” who thinks “I” think that “I” is? It must exhibit a range of highly sophisticated conceptual abilities, going well beyond what the likely majority of conceptual agents possess: not least that it must possess a well-developed moral space of reasons. Finally, it must be able to communicate its moral agency through some system of signs: a “private” moral world is not enough. After reviewing these conditions and pouring cold water on a number of recent claims for having achieved “minimal” machine consciousness, we turn our attention to a number of existing and, in some cases, commonplace artefacts that lack moral agency yet nevertheless require one to take a moral stance toward them, as if they were moral agents. Finally, we address another class of agents raising a related set of issues: autonomous military robots.
Please use this URL to cite or link to this publication:
author
Parthemore, Joel and Whitby, Blay
organization
publishing date
2012
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
keywords
autopoiesis, consciousness, concepts, responsibility, moral stance, moral agency
host publication
[Host publication title missing]
editor
Gunkel, David; Bryson, Joanna and Torrance, Steve
pages
10 pages (pp. 8-17)
publisher
The Society for the Study of Artificial Intelligence and Simulation of Behaviour
conference name
The machine question: AI, ethics and moral responsibility: AISB/IACAP World Congress 2012
conference dates
2012-07-02
external identifiers
  • scopus:84893341494
project
Centre for Cognitive Semiotics (RJ)
language
English
LU publication?
yes
id
3e3a8945-b532-4a75-982d-c96964551035 (old id 3412151)
alternative location
http://events.cs.bham.ac.uk/turing12/proceedings/14.pdf
date added to LUP
2016-04-04 09:56:23
date last changed
2023-09-20 04:35:14
@inproceedings{3e3a8945-b532-4a75-982d-c96964551035,
  abstract     = {{This paper follows directly from our forthcoming paper in International Journal of Machine Consciousness, where we discuss the requirements for an artefact to be a moral agent and conclude that the artefactual question is ultimately a red herring. As we did in the earlier paper, we take moral agency to be that condition in which an agent can, appropriately, be held responsible for her actions and their consequences. We set a number of stringent conditions on moral agency. A moral agent must be embedded in a cultural and specifically moral context, and embodied in a suitable physical form. It must be, in some substantive sense, alive. It must exhibit self-conscious awareness: who does the “I” who thinks “I” think that “I” is? It must exhibit a range of highly sophisticated conceptual abilities, going well beyond what the likely majority of conceptual agents possess: not least that it must possess a well-developed moral space of reasons. Finally, it must be able to communicate its moral agency through some system of signs: a “private” moral world is not enough. After reviewing these conditions and pouring cold water on a number of recent claims for having achieved “minimal” machine consciousness, we turn our attention to a number of existing and, in some cases, commonplace artefacts that lack moral agency yet nevertheless require one to take a moral stance toward them, as if they were moral agents. Finally, we address another class of agents raising a related set of issues: autonomous military robots.}},
  author       = {{Parthemore, Joel and Whitby, Blay}},
  booktitle    = {{[Host publication title missing]}},
  editor       = {{Gunkel, David and Bryson, Joanna and Torrance, Steve}},
  keywords     = {{autopoiesis; consciousness; concepts; responsibility; moral stance; moral agency}},
  language     = {{eng}},
  pages        = {{8--17}},
  publisher    = {{The Society for the Study of Artificial Intelligence and Simulation of Behaviour}},
  title        = {{Moral agency, moral responsibility, and artefacts}},
  url          = {{http://events.cs.bham.ac.uk/turing12/proceedings/14.pdf}},
  year         = {{2012}},
}