Common (good) practices measuring trust in HRI
- URL: http://arxiv.org/abs/2311.12182v1
- Date: Mon, 20 Nov 2023 20:52:10 GMT
- Title: Common (good) practices measuring trust in HRI
- Authors: Patrick Holthaus and Alessandra Rossi
- Abstract summary: Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives.
Researchers have been exploring how people trust robots in different ways.
Most roboticists agree that insufficient levels of trust lead to a risk of disengagement.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trust in robots is widely believed to be imperative for the adoption of
robots into people's daily lives. It is, therefore, understandable that the
literature of the last few decades focuses on measuring how much people trust
robots -- and, more generally, any agent -- to foster such trust in these
technologies. Researchers have been exploring how people trust robots in
different ways, such as measuring trust in human-robot interactions (HRI) based
on textual descriptions or images without any physical contact, or during and
after interacting with the technology. Nevertheless, trust is a complex
behaviour that is affected by several factors, including those related to the
interacting agents (e.g. humans, robots, pets), the agents themselves (e.g.
capabilities, reliability), the context (e.g. task), and the environment (e.g.
public vs private vs working spaces). In general, most roboticists agree that
insufficient levels of trust lead to a risk of disengagement, while over-trust
in technology can cause over-reliance and inherent dangers, for example, in
emergency situations. It is, therefore, very important that the research
community has access to reliable methods to measure people's trust in robots
and technology. In this position paper, we outline current methods and their
strengths, identify (some) weakly covered aspects, and discuss the potential
for covering a more comprehensive set of factors influencing trust in HRI.
Related papers
- Rethinking Trust Repair in Human-Robot Interaction [1.52292571922932]
Despite emerging research on trust repair in human-robot interaction, significant questions remain about identifying reliable approaches to restoring trust in robots after trust violations occur.
My research aims to identify effective strategies for designing robots capable of trust repair in human-robot interaction (HRI)
This paper provides an overview of the fundamental concepts and key components of the trust repair process in HRI, as well as a summary of my current published work in this area.
arXiv Detail & Related papers (2023-07-14T13:48:37Z)
- The dynamic nature of trust: Trust in Human-Robot Interaction revisited [0.38233569758620045]
Socially assistive robots (SARs) assist humans in the real world.
Risk introduces an element of trust, so understanding human trust in the robot is imperative.
arXiv Detail & Related papers (2023-03-08T19:20:11Z)
- Evaluation of Performance-Trust vs Moral-Trust Violation in 3D Environment [1.4502611532302039]
We aim to design an experiment to investigate the consequences of performance-trust violation and moral-trust violation in a search and rescue scenario.
We want to see if two similar robot failures, one caused by a performance-trust violation and the other by a moral-trust violation, have distinct effects on human trust.
arXiv Detail & Related papers (2022-06-30T17:27:09Z)
- The Road to a Successful HRI: AI, Trust and ethicS-TRAITS [64.77385130665128]
The aim of this workshop is to foster the exchange of insights on past and ongoing research towards effective and long-lasting collaborations between humans and robots.
We particularly focus on AI techniques required to implement autonomous and proactive interactions.
arXiv Detail & Related papers (2022-06-07T11:12:45Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- A Review on Trust in Human-Robot Interaction [0.0]
A new field of research in human-robot interaction, namely human-robot trust, is emerging.
This paper reviews past works on human-robot trust by research topic and discusses selected trends in this field.
arXiv Detail & Related papers (2021-05-20T21:50:03Z) - The Road to a Successful HRI: AI, Trust and ethicS-TRAITS [65.60507052509406]
The aim of this workshop is to give researchers from academia and industry the possibility to discuss the inter-and multi-disciplinary nature of the relationships between people and robots.
arXiv Detail & Related papers (2021-03-23T16:52:12Z) - Modeling Trust in Human-Robot Interaction: A Survey [1.4502611532302039]
Appropriate trust in robotic collaborators is one of the leading factors influencing the performance of human-robot interaction.
For trust calibration in HRI, trust needs to be modeled first.
arXiv Detail & Related papers (2020-11-09T21:56:34Z) - Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and
Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z) - Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs [90.20235972293801]
Aiming to understand how human (false-)beliefs, a core socio-cognitive ability, affect human interactions with robots, this paper proposes to adopt a graphical model for the representation of object states, robot knowledge, and human (false-)beliefs.
An inference algorithm is derived to fuse the individual parse graphs (pg) from all robots across multiple views into a joint pg, which affords a more effective reasoning capability that overcomes errors originating from a single view.
arXiv Detail & Related papers (2020-04-25T23:02:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.