
Lund University Publications


Testing the Error Recovery Capabilities of Robotic Speech

Krantz, Amandus; Stedtler, Samantha; Balkenius, Christian and Fantasia, Valentina (2023) The Imperfectly Relatable Robot, HRI'23
Abstract
Trust in Human-Robot Interaction is a widely studied subject, and yet few studies have examined how a robot's ability to speak affects trust towards it. Errors can have a negative impact on the perceived trustworthiness of a robot. However, there seem to be mitigating factors: a humanoid robot with a high error rate, for example, has been shown to be perceived as more trustworthy than a more mechanical robot with the same error rate. We want to use a humanoid robot to test whether speech can increase anthropomorphism and mitigate the effects of errors on trust. For this purpose, we are planning an experiment in which participants solve a sequence completion task, with the robot giving suggestions (either verbal or non-verbal) for the solution. In addition, we want to measure whether the degree of error (slight error vs. severe error) has an impact on the participants' behaviour and the robot's perceived trustworthiness, since a severe error should affect trust more than a slight one. Participants will be assigned to three groups, in which we will vary the degree of accuracy of the robot's answers (correct vs. almost right vs. obviously wrong). They will complete ten series of a sequence completion task and rate the trustworthiness and general perception (Godspeed Questionnaire) of the robot. We also present our thoughts on the implications of potential results.
author: Krantz, Amandus; Stedtler, Samantha; Balkenius, Christian and Fantasia, Valentina
publishing date: 2023-03
type: Contribution to conference
publication status: in press
pages: 4 pages
conference name: The Imperfectly Relatable Robot, HRI'23
conference location: Stockholm, Sweden
conference dates: 2023-03-13
project: Ethics for autonomous systems/AI; Non-Verbal Signals of Trust and Group Identification in Humans and Robots
language: English
LU publication?: yes
id: 686474ec-3976-4944-a981-94c2c27855b1
date added to LUP: 2023-03-14 12:30:49
date last changed: 2023-03-24 16:06:18
@misc{686474ec-3976-4944-a981-94c2c27855b1,
  abstract     = {{Trust in Human-Robot Interaction is a widely studied subject, and yet few studies have examined how a robot's ability to speak affects trust towards it. Errors can have a negative impact on the perceived trustworthiness of a robot. However, there seem to be mitigating factors: a humanoid robot with a high error rate, for example, has been shown to be perceived as more trustworthy than a more mechanical robot with the same error rate. We want to use a humanoid robot to test whether speech can increase anthropomorphism and mitigate the effects of errors on trust. For this purpose, we are planning an experiment in which participants solve a sequence completion task, with the robot giving suggestions (either verbal or non-verbal) for the solution. In addition, we want to measure whether the degree of error (slight error vs. severe error) has an impact on the participants' behaviour and the robot's perceived trustworthiness, since a severe error should affect trust more than a slight one. Participants will be assigned to three groups, in which we will vary the degree of accuracy of the robot's answers (correct vs. almost right vs. obviously wrong). They will complete ten series of a sequence completion task and rate the trustworthiness and general perception (Godspeed Questionnaire) of the robot. We also present our thoughts on the implications of potential results.}},
  author       = {{Krantz, Amandus and Stedtler, Samantha and Balkenius, Christian and Fantasia, Valentina}},
  language     = {{eng}},
  month        = {{03}},
  title        = {{Testing the Error Recovery Capabilities of Robotic Speech}},
  url          = {{https://lup.lub.lu.se/search/files/140414675/5992.pdf}},
  year         = {{2023}},
}