Lund University Publications

Distribution of responsibility for AI development: Expert views

Hedlund, Maria and Persson, Erik (2025) In AI & Society: Knowledge, Culture and Communication
Abstract
The purpose of this paper is to increase the understanding of how different types of experts with influence over the development of AI, in this role, reflect upon distribution of forward-looking responsibility for AI development with regard to safety and democracy. Forward-looking responsibility refers to the obligation to see to it that a particular state of affairs materialise. In the context of AI, actors somehow involved in AI development have the potential to guide AI development in a safe and democratic direction. This study is based on qualitative interviews with such actors in different roles at research institutions, private companies, think tanks, consultancy agencies, parliaments, and non-governmental organisations. While the reflections about distribution of responsibility differ among the respondents, one observation is that influence is seen as an important basis for distribution of responsibility. Another observation is that several respondents think of responsibility in terms of what it would entail in concrete measures. By showing how actors involved in AI development reflect on distribution of responsibility, this study contributes to a dialogue between the field of AI governance and the field of AI ethics.
author
Hedlund, Maria and Persson, Erik
organization
publishing date
2025-01
type
Contribution to journal
publication status
published
subject
keywords
artificial intelligence (AI), moral responsibility, forward-looking responsibility, democracy, AI experts, qualitative interviews
in
AI & Society: Knowledge, Culture and Communication
publisher
Springer
external identifiers
  • scopus:85217261679
ISSN
1435-5655
DOI
10.1007/s00146-024-02167-9
language
English
LU publication?
yes
id
102a83ec-347f-4385-8046-f4bdc6192d1f
date added to LUP
2025-01-13 10:56:47
date last changed
2025-04-04 15:00:24
@article{102a83ec-347f-4385-8046-f4bdc6192d1f,
  abstract     = {{The purpose of this paper is to increase the understanding of how different types of experts with influence over the development of AI, in this role, reflect upon distribution of forward-looking responsibility for AI development with regard to safety and democracy. Forward-looking responsibility refers to the obligation to see to it that a particular state of affairs materialise. In the context of AI, actors somehow involved in AI development have the potential to guide AI development in a safe and democratic direction. This study is based on qualitative interviews with such actors in different roles at research institutions, private companies, think tanks, consultancy agencies, parliaments, and non-governmental organisations. While the reflections about distribution of responsibility differ among the respondents, one observation is that influence is seen as an important basis for distribution of responsibility. Another observation is that several respondents think of responsibility in terms of what it would entail in concrete measures. By showing how actors involved in AI development reflect on distribution of responsibility, this study contributes to a dialogue between the field of AI governance and the field of AI ethics.}},
  author       = {{Hedlund, Maria and Persson, Erik}},
  issn         = {{1435-5655}},
  keywords     = {{artificial intelligence (AI); moral responsibility; forward-looking responsibility; democracy; AI experts; qualitative interviews}},
  language     = {{eng}},
  month        = {{01}},
  publisher    = {{Springer}},
  series       = {{AI & Society: Knowledge, Culture and Communication}},
  title        = {{Distribution of responsibility for AI development: Expert views}},
  url          = {{http://dx.doi.org/10.1007/s00146-024-02167-9}},
  doi          = {{10.1007/s00146-024-02167-9}},
  year         = {{2025}},
}