Trust as Extended Control: Active Inference and User Feedback During
Human-Robot Collaboration
- URL: http://arxiv.org/abs/2104.11153v1
- Date: Thu, 22 Apr 2021 16:11:22 GMT
- Title: Trust as Extended Control: Active Inference and User Feedback During
Human-Robot Collaboration
- Authors: Felix Schoeller, Mark Miller, Roy Salomon, Karl J. Friston
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To interact seamlessly with robots, users must infer the causes of a robot's
behavior and be confident about that inference. Hence, trust is a necessary
condition for human-robot collaboration (HRC). Despite its crucial role, it is
largely unknown how trust emerges, develops, and supports human interactions
with nonhuman artefacts. Here, we review the literature on trust, human-robot
interaction, human-robot collaboration, and human interaction at large. Early
models of trust suggest that trust entails a trade-off between benevolence and
competence, while studies of human-to-human interaction emphasize the role of
shared behavior and mutual knowledge in the gradual building of trust. We then
introduce a model of trust as an agent's best explanation for reliable sensory
exchange with an extended motor plant or partner. This model is based on the
cognitive neuroscience of active inference and suggests that, in the context of
HRC, trust can be cast in terms of virtual control over an artificial agent. In
this setting, interactive feedback becomes a necessary component of the
trustor's perception-action cycle. The resulting model has important
implications for understanding human-robot interaction and collaboration, as it
allows the traditional determinants of human trust to be defined in terms of
active inference, information exchange and empowerment. Furthermore, this model
suggests that boredom and surprise may be used as markers for under- and
over-reliance on the system. Finally, we examine the role of shared behavior in
the genesis of trust, especially in the context of dyadic collaboration,
suggesting important consequences for the acceptability and design of
human-robot collaborative systems.
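The abstract's suggestion that boredom and surprise could serve as markers of under- and over-reliance can be sketched computationally. In active inference, surprise is the negative log-probability of an observation under the agent's generative model. The toy monitor below is an illustrative assumption only, not the paper's implementation: the class name, thresholds, and the mapping of low/high average surprisal to "boredom"/"surprise" signals are all hypothetical.

```python
import math


def surprisal(p_obs: float) -> float:
    """Shannon surprisal of an observation with probability p_obs
    under the trustor's generative model."""
    return -math.log(p_obs)


class TrustMonitor:
    """Toy sketch: track the running-average surprisal of a robot's
    observed behavior and flag the two regimes the paper associates
    with miscalibrated reliance. Thresholds are hypothetical."""

    def __init__(self, low: float = 0.1, high: float = 2.0):
        self.low = low    # below this average: behavior is too predictable
        self.high = high  # above this average: behavior violates the model
        self.history: list[float] = []

    def update(self, p_obs: float) -> str:
        """Record one observation's model probability and return a marker."""
        self.history.append(surprisal(p_obs))
        avg = sum(self.history) / len(self.history)
        if avg < self.low:
            return "boredom"    # candidate marker of miscalibrated reliance
        if avg > self.high:
            return "surprise"   # candidate marker of miscalibrated reliance
        return "calibrated"
```

For example, a run of highly expected observations (p close to 1) drives the average surprisal toward zero and yields "boredom", while a very improbable observation spikes it past the upper threshold and yields "surprise". A real active-inference treatment would compute surprisal under a full generative model rather than from a single scalar probability.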
Related papers
- ReGenNet: Towards Human Action-Reaction Synthesis [87.57721371471536]
We analyze the asymmetric, dynamic, synchronous, and detailed nature of human-human interactions.
We propose the first multi-setting human action-reaction benchmark to generate human reactions conditioned on given human actions.
arXiv Detail & Related papers (2024-03-18T15:33:06Z)
- AntEval: Evaluation of Social Interaction Competencies in LLM-Driven Agents [65.16893197330589]
Large Language Models (LLMs) have demonstrated their ability to replicate human behaviors across a wide range of scenarios.
However, their capability in handling complex, multi-character social interactions has yet to be fully explored.
We introduce the Multi-Agent Interaction Evaluation Framework (AntEval), encompassing a novel interaction framework and evaluation methods.
arXiv Detail & Related papers (2024-01-12T11:18:00Z)
- Common (good) practices measuring trust in HRI [55.2480439325792]
Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives.
Researchers have been exploring how people trust robots in different ways.
Most roboticists agree that insufficient levels of trust lead to a risk of disengagement.
arXiv Detail & Related papers (2023-11-20T20:52:10Z)
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z)
- "Do it my way!": Impact of Customizations on Trust Perceptions in Human-Robot Collaboration [0.8287206589886881]
Personalization of assistive robots is positively correlated with robot adoption and user perceptions.
Our findings indicate that increased levels of customization were associated with higher trust and comfort perceptions.
arXiv Detail & Related papers (2023-10-28T19:31:40Z)
- Rethinking Trust Repair in Human-Robot Interaction [1.52292571922932]
Despite emerging research on trust repair in human-robot interaction, significant questions remain about identifying reliable approaches to restoring trust in robots after trust violations occur.
My research aims to identify effective strategies for designing robots capable of trust repair in human-robot interaction (HRI).
This paper provides an overview of the fundamental concepts and key components of the trust repair process in HRI, as well as a summary of my current published work in this area.
arXiv Detail & Related papers (2023-07-14T13:48:37Z)
- Evaluation of Performance-Trust vs Moral-Trust Violation in 3D Environment [1.4502611532302039]
We aim to design an experiment to investigate the consequences of performance-trust violation and moral-trust violation in a search and rescue scenario.
We want to see whether two similar robot failures, one caused by a performance-trust violation and the other by a moral-trust violation, have distinct effects on human trust.
arXiv Detail & Related papers (2022-06-30T17:27:09Z)
- Trust-Aware Planning: Modeling Trust Evolution in Longitudinal Human-Robot Interaction [21.884895329834112]
We propose a computational model for capturing and modulating trust in longitudinal human-robot interaction.
In our model, the robot integrates the human's trust and their expectations of the robot into its planning process to build and maintain trust over the interaction horizon.
arXiv Detail & Related papers (2021-05-03T23:38:34Z)
- Modeling Trust in Human-Robot Interaction: A Survey [1.4502611532302039]
Appropriate trust in robotic collaborators is one of the leading factors influencing the performance of human-robot interaction.
For trust calibration in HRI, trust needs to be modeled first.
arXiv Detail & Related papers (2020-11-09T21:56:34Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication.
Results show that the generated explanations of our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.