Lund University Publications

Who should obey Asimov's Laws of Robotics? A question of responsibility

Hedlund, Maria and Persson, Erik (2024) p. 9-25
Abstract
The aim of this chapter is to explore the safety value of implementing Asimov's Laws of Robotics as a future general framework that humans should obey. Asimov formulated laws to make explicit the safeguards of the robots in his stories: (1) A robot may not injure or harm a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law; (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. In Asimov's stories, it is always assumed that the laws are built into the robots to govern the behaviour of the robots. As his stories clearly demonstrate, the Laws can be ambiguous. Moreover, the laws are not very specific. General rules as a guide for robot behaviour may not be a very good method to achieve robot safety - if we expect the robots to follow them. But would it work for humans? In this chapter, we ask whether it would make as much, or more, sense to implement the laws in human legislation with the purpose of governing the behaviour of people or companies that develop, build, market or use AI, embodied in robots or in the form of software, now and in the future.
author
Hedlund, Maria and Persson, Erik
organization
alternative title
Vem ska lyda Asimovs robotlagar? En fråga om ansvar
publishing date
2024
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
host publication
The Ethics Gap in the Engineering of the Future: Moral Challenges for the Technology of Tomorrow
editor
Stelios, Spyridon and Theologou, Kostas
pages
9 - 25
publisher
Emerald Group Publishing Limited
external identifiers
  • scopus:105018113735
ISBN
9781837976362
9781837976355
DOI
10.1108/978-1-83797-635-520241002
language
English
LU publication?
yes
id
d2351d98-4a12-403e-b084-807c6773989f
date added to LUP
2024-11-26 10:30:56
date last changed
2025-12-09 15:53:35
@inbook{d2351d98-4a12-403e-b084-807c6773989f,
  abstract     = {{The aim of this chapter is to explore the safety value of implementing Asimov's Laws of Robotics as a future general framework that humans should obey. Asimov formulated laws to make explicit the safeguards of the robots in his stories: (1) A robot may not injure or harm a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law; (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. In Asimov's stories, it is always assumed that the laws are built into the robots to govern the behaviour of the robots. As his stories clearly demonstrate, the Laws can be ambiguous. Moreover, the laws are not very specific. General rules as a guide for robot behaviour may not be a very good method to achieve robot safety - if we expect the robots to follow them. But would it work for humans? In this chapter, we ask whether it would make as much, or more, sense to implement the laws in human legislation with the purpose of governing the behaviour of people or companies that develop, build, market or use AI, embodied in robots or in the form of software, now and in the future.}},
  author       = {{Hedlund, Maria and Persson, Erik}},
  booktitle    = {{The Ethics Gap in the Engineering of the Future : Moral Challenges for the Technology of Tomorrow}},
  editor       = {{Stelios, Spyridon and Theologou, Kostas}},
  isbn         = {{9781837976362}},
  language     = {{eng}},
  month        = {{11}},
  pages        = {{9--25}},
  publisher    = {{Emerald Group Publishing Limited}},
  title        = {{Who should obey Asimov's Laws of Robotics? A question of responsibility}},
  url          = {{http://dx.doi.org/10.1108/978-1-83797-635-520241002}},
  doi          = {{10.1108/978-1-83797-635-520241002}},
  year         = {{2024}},
}