Moral-Trust Violation vs Performance-Trust Violation by a Robot: Which
Hurts More?
- URL: http://arxiv.org/abs/2110.04418v1
- Date: Sat, 9 Oct 2021 00:32:18 GMT
- Title: Moral-Trust Violation vs Performance-Trust Violation by a Robot: Which
Hurts More?
- Authors: Zahra Rezaei Khavas, Russell Perkins, S. Reza Ahmadzadeh, Paul
Robinette
- Abstract summary: We study the effects of performance-trust violation and moral-trust violation separately in a search and rescue task.
We want to see whether two failures of a robot with equal magnitudes would affect human trust differently if one failure is due to a performance-trust violation and the other is a moral-trust violation.
- Score: 0.7373617024876725
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, a modern conceptualization of trust in human-robot
interaction (HRI) was introduced by Ullman et al.\cite{ullman2018does}. This
new conceptualization of trust suggested that trust between humans and robots
is multidimensional, incorporating both performance aspects (i.e., similar to
the trust in human-automation interaction) and moral aspects (i.e., similar to
the trust in human-human interaction). But how does a robot violating each of
these different aspects of trust affect human trust in a robot? How does trust
in robots change when a robot commits a moral-trust violation compared to a
performance-trust violation? And do physiological signals have the potential to
be used for assessing the gain or loss of each of these two trust aspects in a
human? We aim to design an experiment to study the effects of
performance-trust violation and moral-trust violation separately in a search
and rescue task. We want to see whether two failures of a robot with equal
magnitudes would affect human trust differently if one failure is due to a
performance-trust violation and the other is a moral-trust violation.
Related papers
- Common (good) practices measuring trust in HRI [55.2480439325792]
Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives.
Researchers have been exploring how people trust robots in different ways.
Most roboticists agree that insufficient levels of trust lead to a risk of disengagement.
arXiv Detail & Related papers (2023-11-20T20:52:10Z)
- ToP-ToM: Trust-aware Robot Policy with Theory of Mind [3.4850414292716327]
Theory of Mind (ToM) is a cognitive architecture that endows humans with the ability to attribute mental states to others.
This paper investigates trust-aware robot policy with the theory of mind in a multiagent setting.
arXiv Detail & Related papers (2023-11-07T23:55:56Z)
- "Do it my way!": Impact of Customizations on Trust perceptions in Human-Robot Collaboration [0.8287206589886881]
Personalization of assistive robots is positively correlated with robot adoption and user perceptions.
Our findings indicate that increased levels of customization were associated with higher trust and comfort perceptions.
arXiv Detail & Related papers (2023-10-28T19:31:40Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- The dynamic nature of trust: Trust in Human-Robot Interaction revisited [0.38233569758620045]
Socially assistive robots (SARs) assist humans in the real world.
Risk introduces an element of trust, so understanding human trust in the robot is imperative.
arXiv Detail & Related papers (2023-03-08T19:20:11Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- Evaluation of Performance-Trust vs Moral-Trust Violation in 3D Environment [1.4502611532302039]
We aim to design an experiment to investigate the consequences of performance-trust violation and moral-trust violation in a search and rescue scenario.
We want to see if two similar robot failures, one caused by a performance-trust violation and the other by a moral-trust violation, have distinct effects on human trust.
arXiv Detail & Related papers (2022-06-30T17:27:09Z)
- The Road to a Successful HRI: AI, Trust and ethicS-TRAITS [65.60507052509406]
The aim of this workshop is to give researchers from academia and industry the opportunity to discuss the inter- and multi-disciplinary nature of the relationships between people and robots.
arXiv Detail & Related papers (2021-03-23T16:52:12Z)
- Modeling Trust in Human-Robot Interaction: A Survey [1.4502611532302039]
Appropriate trust in robotic collaborators is one of the leading factors influencing the performance of human-robot interaction.
For trust calibration in HRI, trust needs to be modeled first.
arXiv Detail & Related papers (2020-11-09T21:56:34Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.