Lund University Publications

The Use and Abuse of Normative Ethics for Moral Machines

Stenseke, Jakob (2023). In Frontiers in Artificial Intelligence and Applications 366, pp. 155–164
Abstract
How do we develop artificial intelligence (AI) systems that adhere to the norms and values of our human practices? Is it a promising idea to develop systems based on the principles of normative frameworks such as consequentialism, deontology, or virtue ethics? According to many researchers in machine ethics – a subfield exploring the prospects of constructing moral machines – the answer is yes. In this paper, I challenge this methodological strategy by exploring the difference between normative ethics – its use and abuse – in human practices and in the context of machines. First, I discuss the purpose of normative theory in human contexts; its main strengths and drawbacks. I then describe several moral resources central to the success of normative ethics in human practices. I argue that machines, currently and in the foreseeable future, lack the resources needed to justify the very use of normative theory. Instead, I propose that machine ethicists should pay closer attention to the multifaceted ways normativity serves and functions in human practices, and how artificial systems can be designed and deployed to foster the moral resources that allow such practices to prosper.
author: Stenseke, Jakob
publishing date: 2023
type: Chapter in Book/Report/Conference proceeding
publication status: published
keywords: Machine ethics, moral machines, artificial moral agents, normative ethics, AI ethics, consequentialism, deontology, virtue ethics
host publication: Social Robots in Social Institutions
series title: Frontiers in Artificial Intelligence and Applications
editor: Hakli, Raul; Mäkelä, Pekka; Seibt, Johanna
volume: 366
pages: 155–164 (10 pages)
external identifiers: scopus:85148594288
ISBN: 978-1-64368-374-4; 978-1-64368-375-1
DOI: 10.3233/FAIA220614
language: English
LU publication?: yes
id: 23035033-32f8-4b0c-99fe-db387f89eebd
date added to LUP: 2023-01-23 10:34:32
date last changed: 2024-06-15 01:18:53
@inproceedings{23035033-32f8-4b0c-99fe-db387f89eebd,
  abstract     = {{How do we develop artificial intelligence (AI) systems that adhere to the norms and values of our human practices? Is it a promising idea to develop systems based on the principles of normative frameworks such as consequentialism, deontology, or virtue ethics? According to many researchers in machine ethics – a subfield exploring the prospects of constructing moral machines – the answer is yes. In this paper, I challenge this methodological strategy by exploring the difference between normative ethics – its use and abuse – in human practices and in the context of machines. First, I discuss the purpose of normative theory in human contexts; its main strengths and drawbacks. I then describe several moral resources central to the success of normative ethics in human practices. I argue that machines, currently and in the foreseeable future, lack the resources needed to justify the very use of normative theory. Instead, I propose that machine ethicists should pay closer attention to the multifaceted ways normativity serves and functions in human practices, and how artificial systems can be designed and deployed to foster the moral resources that allow such practices to prosper.}},
  author       = {{Stenseke, Jakob}},
  booktitle    = {{Social Robots in Social Institutions}},
  editor       = {{Hakli, Raul and Mäkelä, Pekka and Seibt, Johanna}},
  isbn         = {{978-1-64368-374-4}},
  keywords     = {{Machine ethics; moral machines; artificial moral agents; normative ethics; AI ethics; consequentialism; deontology; virtue ethics}},
  language     = {{eng}},
  pages        = {{155--164}},
  series       = {{Frontiers in Artificial Intelligence and Applications}},
  title        = {{The Use and Abuse of Normative Ethics for Moral Machines}},
  url          = {{https://doi.org/10.3233/FAIA220614}},
  doi          = {{10.3233/FAIA220614}},
  volume       = {{366}},
  year         = {{2023}},
}