
Lund University Publications


The Trolley Problem and Isaac Asimov's First Law of Robotics

Hedlund, Maria and Persson, Erik (2024) In Journal of Science Fiction and Philosophy 7.
Abstract
How to make robots safe for humans is intensely debated within academia as well as in industry, the media, and the political arena. Hardly any discussion of the subject fails to mention Isaac Asimov's Three Laws of Robotics. Asimov's laws and the Trolley Problem are usually discussed separately, but there is a connection: the Trolley Problem poses a seemingly unsolvable problem for Asimov's First Law, which states: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." That is, the law contains an active and a passive clause and obliges the robot to obey both, while the Trolley Problem forces a choice between these two options. The object of this paper is to investigate whether and how Asimov's First Law of Robotics can handle a situation in which we are forced to choose between the active and the passive clauses of the law. We discuss four possible solutions to the challenge, used explicitly or implicitly by Asimov. We conclude that all four suggestions would solve the problem, but in different ways and with different implications for other dilemmas in robot ethics. We also conclude that, considering the urgency of finding ways to secure a safe coexistence between humans and robots, we should not let the Trolley Problem stand in the way of using the First Law of Robotics for this purpose. If we want to use Asimov's laws for this purpose, we also recommend discarding the active clause of the First Law.
Abstract (Swedish)
How to make robots safe for humans is intensely debated within academia as well as in industry, the media, and the political arena. Hardly any discussion of the subject fails to mention Isaac Asimov's Three Laws of Robotics. We find it curious that a set of fictional laws can have such a strong impact on discussions about a real-world problem, and we think this needs to be looked into.

Probably the most common phrase in connection with robot and AI ethics, second only to "The Three Laws of Robotics", is "the Trolley Problem". Asimov's laws and the Trolley Problem are usually discussed separately, but there is a connection: the Trolley Problem poses a seemingly unsolvable problem for Asimov's First Law, which states: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." That is, the law contains an active and a passive clause and obliges the robot to obey both, while the Trolley Problem forces a choice between these two options.

The object of this paper is therefore to investigate whether and how Asimov's First Law of Robotics can handle a situation in which we are forced to choose between the active and the passive clauses of the law. We discuss four possible solutions to the challenge, used explicitly or implicitly by Asimov. We conclude that all four suggestions would solve the problem, but in different ways and with different implications for other dilemmas in robot ethics. We also conclude that, considering the urgency of finding ways to secure a safe coexistence between humans and robots, we should not let the Trolley Problem stand in the way of using the First Law of Robotics for this purpose. If we want to use Asimov's laws for this purpose, we also recommend discarding the active clause of the First Law.
publishing date: 2024
type: Contribution to journal
publication status: published
in: Journal of Science Fiction and Philosophy
volume: 7
ISSN: 2573-881X
language: English
LU publication?: yes
id: 38cd9fba-0547-4292-b8c8-04e3ea5be516
alternative location: https://jsfphil.org/volume-7-2024-androids-vs-robots/asimovs-first-law-and-the-trolley-problem/
date added to LUP: 2024-07-03 18:27:42
date last changed: 2024-07-04 08:53:16
@article{38cd9fba-0547-4292-b8c8-04e3ea5be516,
  abstract     = {{How to make robots safe for humans is intensely debated, within academia as well as in industry, media and on the political arena. Hardly any discussion of the subject fails to mention Isaac Asimov’s three laws of Robotics. Asimov’s laws and the Trolley Problem are usually discussed separately but there is a connection in that the Trolley Problem poses a seemingly unsolvable problem for Asimov’s First Law, that states: A robot may not injure a human being or, through inaction, allow a human being to come to harm. That is, it contains an active and a passive clause and obliges the robot to obey both, while the Trolley Problem forces us to choose between these two options. The object of this paper is to investigate if and how Asimov’s First Law of Robotics can handle a situation where we are forced to choose between the active and the passive clauses of the law. We discuss four possible solutions to the challenge explicitly or implicitly used by Asimov. We conclude that all four suggestions would solve the problem but in different ways and with different implications for other dilemmas in robot ethics. We also conclude that considering the urgency of finding ways to secure a safe coexistence between humans and robots, we should not let the Trolley Problem stand in the way of using the First Law of robotics for this purpose. If we want to use Asimov’s laws for this purpose, we also recommend discarding the active clause of the First Law.}},
  author       = {{Hedlund, Maria and Persson, Erik}},
  issn         = {{2573-881X}},
  language     = {{eng}},
  month        = {{07}},
  series       = {{Journal of Science Fiction and Philosophy}},
  title        = {{The Trolley Problem and Isaac Asimov's First Law of Robotics}},
  url          = {{https://jsfphil.org/volume-7-2024-androids-vs-robots/asimovs-first-law-and-the-trolley-problem/}},
  volume       = {{7}},
  year         = {{2024}},
}