Modeling Trust in Human-Robot Interaction: A Survey
- URL: http://arxiv.org/abs/2011.04796v1
- Date: Mon, 9 Nov 2020 21:56:34 GMT
- Title: Modeling Trust in Human-Robot Interaction: A Survey
- Authors: Zahra Rezaei Khavas, Reza Ahmadzadeh, Paul Robinette
- Abstract summary: Appropriate trust in robotic collaborators is one of the leading factors influencing the performance of human-robot interaction.
For trust calibration in HRI, trust needs to be modeled first.
- Score: 1.4502611532302039
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: As the autonomy and capabilities of robotic systems increase, they are
expected to play the role of teammates rather than tools and interact with
human collaborators in a more realistic manner, creating a more human-like
relationship. Given the impact of trust observed in human-robot interaction
(HRI), appropriate trust in robotic collaborators is one of the leading factors
influencing the performance of human-robot interaction. Team performance can be
diminished if people do not trust robots appropriately, for example by disusing
or misusing them based on limited experience. Therefore, trust in HRI needs to
be calibrated properly, rather than maximized, so that human collaborators can
form an appropriate level of trust. For trust calibration in HRI, trust needs
to be modeled first. There are many reviews of the factors affecting trust in
HRI; however, since no review concentrates on trust models themselves, in this
paper we review different techniques and methods for trust modeling in HRI. We
also present a list of potential
directions for further research and some challenges that need to be addressed
in future work on human-robot trust modeling.
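The survey catalogues existing trust models rather than prescribing one. As a purely illustrative sketch not taken from the paper, the snippet below shows one common probabilistic formulation in which trust is a beta-Bernoulli estimate of the robot's task success rate, updated from observed outcomes and compared against the robot's actual reliability to check calibration; all class and method names are hypothetical.
```python
class BetaTrustModel:
    """Minimal beta-Bernoulli trust estimator (illustrative only, not from the survey).

    Trust in the robot is treated as the estimated probability that the robot
    will succeed on its next task, updated from observed task outcomes.
    """

    def __init__(self, prior_success: float = 1.0, prior_failure: float = 1.0):
        # Beta(alpha, beta) prior over the robot's success rate.
        self.alpha = prior_success
        self.beta = prior_failure

    def update(self, robot_succeeded: bool) -> None:
        # Bayesian update after observing one interaction outcome.
        if robot_succeeded:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        # Posterior mean of the success rate, used as a scalar trust estimate.
        return self.alpha / (self.alpha + self.beta)

    def is_calibrated(self, true_reliability: float, tolerance: float = 0.1) -> bool:
        # "Calibrated" trust tracks the robot's actual reliability instead of
        # being driven toward its maximum or minimum.
        return abs(self.trust - true_reliability) <= tolerance


if __name__ == "__main__":
    model = BetaTrustModel()
    # Hypothetical log of task outcomes observed by the human collaborator.
    for outcome in [True, True, False, True, True, False, True, True]:
        model.update(outcome)
    print(f"estimated trust: {model.trust:.2f}")               # 0.70
    print(f"calibrated to 0.75? {model.is_calibrated(0.75)}")  # True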
Related papers
- Common (good) practices measuring trust in HRI [55.2480439325792]
Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives.
Researchers have been exploring how people trust robots in different ways.
Most roboticists agree that insufficient levels of trust lead to a risk of disengagement.
arXiv Detail & Related papers (2023-11-20T20:52:10Z)
- "Do it my way!": Impact of Customizations on Trust perceptions in Human-Robot Collaboration [0.8287206589886881]
Personalization of assistive robots is positively correlated with robot adoption and user perceptions.
Our findings indicate that increased levels of customization were associated with higher trust and comfort perceptions.
arXiv Detail & Related papers (2023-10-28T19:31:40Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Rethinking Trust Repair in Human-Robot Interaction [1.52292571922932]
Despite emerging research on trust repair in human-robot interaction, significant questions remain about identifying reliable approaches to restoring trust in robots after trust violations occur.
My research aims to identify effective strategies for designing robots capable of trust repair in human-robot interaction (HRI).
This paper provides an overview of the fundamental concepts and key components of the trust repair process in HRI, as well as a summary of my current published work in this area.
arXiv Detail & Related papers (2023-07-14T13:48:37Z)
- Evaluation of Performance-Trust vs Moral-Trust Violation in 3D Environment [1.4502611532302039]
We aim to design an experiment to investigate the consequences of performance-trust violation and moral-trust violation in a search and rescue scenario.
We want to see whether two similar robot failures, one caused by a performance-trust violation and the other by a moral-trust violation, have distinct effects on human trust.
arXiv Detail & Related papers (2022-06-30T17:27:09Z)
- The Road to a Successful HRI: AI, Trust and ethicS-TRAITS [64.77385130665128]
The aim of this workshop is to foster the exchange of insights on past and ongoing research towards effective and long-lasting collaborations between humans and robots.
We particularly focus on AI techniques required to implement autonomous and proactive interactions.
arXiv Detail & Related papers (2022-06-07T11:12:45Z)
- Trust as Extended Control: Active Inference and User Feedback During Human-Robot Collaboration [2.6381163133447836]
Despite its crucial role, it is largely unknown how trust emerges, develops, and supports human interactions with nonhuman artefacts.
We introduce a model of trust as an agent's best explanation for reliable sensory exchange with an extended motor plant or partner.
We examine the role of shared behavior in the genesis of trust, especially in the context of dyadic collaboration.
arXiv Detail & Related papers (2021-04-22T16:11:22Z)
- The Road to a Successful HRI: AI, Trust and ethicS-TRAITS [65.60507052509406]
The aim of this workshop is to give researchers from academia and industry the opportunity to discuss the inter- and multi-disciplinary nature of the relationships between people and robots.
arXiv Detail & Related papers (2021-03-23T16:52:12Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)