Who Is Responsible? Social Identity, Robot Errors and Blame Attribution
(2025) In Frontiers in Artificial Intelligence and Applications 397, p. 284-297
- Abstract
- This paper argues that conventional blame practices fall short of capturing the complexity of moral experiences, neglecting power dynamics and discriminatory social practices. It is evident that robots, embodying roles linked to specific social groups, pose a risk of reinforcing stereotypes of how these groups behave or should behave, so they set a normative and descriptive standard. In addition, we argue that faulty robots might create expectations of who is supposed to compensate and repair after their errors, where social groups that are already disadvantaged might be blamed disproportionately if they do not act according to their ascribed roles. This theoretical and empirical gap becomes even more urgent to address as there have been indications of potential carryover effects from Human-Robot Interactions (HRI) to Human-Human Interactions (HHI). We therefore urge roboticists and designers to stay in an ongoing conversation about how social traits are conceptualised and implemented in this technology. We also argue that one solution could be to ‘embrace the glitch’ and to focus on constructively disrupting practices instead of prioritizing efficiency and smoothness of interaction above everything else. Apart from considering ethical aspects in the design phase of social robots, we see our analysis as a call for more research on the consequences of robot stereotyping and blame attribution.
Please use this url to cite or link to this publication:
https://lup.lub.lu.se/record/47d52922-33dc-45c5-a66a-0d5cc641744e
- author
- Stedtler, Samantha (LU) and Leventi, Marianna (LU)
- publishing date
- 2025-01-05
- type
- Chapter in Book/Report/Conference proceeding
- publication status
- published
- host publication
- Social Robots with AI: Prospects, Risks, and Responsible Methods: Proceedings of Robophilosophy 2024
- series title
- Frontiers in Artificial Intelligence and Applications
- editor
- Seibt, Johanna; Fazekas, Peter and Santiago Quick, Oliver
- volume
- 397
- pages
- 284-297
- publisher
- IOS Press
- external identifiers
- scopus:105000793891
- ISSN
- 0922-6389
- 1879-8314
- ISBN
- 978-1-64368-567-0
- 978-1-64368-568-7
- DOI
- 10.3233/FAIA241515
- language
- English
- LU publication?
- yes
- id
- 47d52922-33dc-45c5-a66a-0d5cc641744e
- date added to LUP
- 2025-02-05 17:51:19
- date last changed
- 2025-06-10 08:00:12
@inbook{47d52922-33dc-45c5-a66a-0d5cc641744e,
  abstract  = {{This paper argues that conventional blame practices fall short of capturing the complexity of moral experiences, neglecting power dynamics and discriminatory social practices. It is evident that robots, embodying roles linked to specific social groups, pose a risk of reinforcing stereotypes of how these groups behave or should behave, so they set a normative and descriptive standard. In addition, we argue that faulty robots might create expectations of who is supposed to compensate and repair after their errors, where social groups that are already disadvantaged might be blamed disproportionately if they do not act according to their ascribed roles. This theoretical and empirical gap becomes even more urgent to address as there have been indications of potential carryover effects from Human-Robot Interactions (HRI) to Human-Human Interactions (HHI). We therefore urge roboticists and designers to stay in an ongoing conversation about how social traits are conceptualised and implemented in this technology. We also argue that one solution could be to ‘embrace the glitch’ and to focus on constructively disrupting practices instead of prioritizing efficiency and smoothness of interaction above everything else. Apart from considering ethical aspects in the design phase of social robots, we see our analysis as a call for more research on the consequences of robot stereotyping and blame attribution.}},
  author    = {{Stedtler, Samantha and Leventi, Marianna}},
  booktitle = {{Social Robots with AI: Prospects, Risks, and Responsible Methods: Proceedings of Robophilosophy 2024}},
  editor    = {{Seibt, Johanna and Fazekas, Peter and Santiago Quick, Oliver}},
  isbn      = {{978-1-64368-567-0}},
  issn      = {{0922-6389}},
  language  = {{eng}},
  month     = {{01}},
  pages     = {{284--297}},
  publisher = {{IOS Press}},
  series    = {{Frontiers in Artificial Intelligence and Applications}},
  title     = {{Who Is Responsible? Social Identity, Robot Errors and Blame Attribution}},
  url       = {{http://dx.doi.org/10.3233/FAIA241515}},
  doi       = {{10.3233/FAIA241515}},
  volume    = {{397}},
  year      = {{2025}},
}