Towards a Participatory and Social Justice-Oriented Measure of
Human-Robot Trust
- URL: http://arxiv.org/abs/2402.15671v1
- Date: Sat, 24 Feb 2024 01:04:19 GMT
- Title: Towards a Participatory and Social Justice-Oriented Measure of
Human-Robot Trust
- Authors: Raj Korpan
- Abstract summary: This paper proposes a participatory and social justice-oriented approach for the design and evaluation of a trust measure.
The process would prioritize that community's needs and unique circumstances to produce a trust measure that accurately reflects the factors that impact their trust in a robot.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Many measures of human-robot trust have proliferated across the HRI research
literature because each attempts to capture the factors that impact trust
despite its many dimensions. None of the previous trust measures, however,
address the systems of inequity and structures of power present in HRI research
or attempt to counteract the systematic biases and potential harms caused by
HRI systems. This position paper proposes a participatory and social
justice-oriented approach for the design and evaluation of a trust measure.
This proposed process would iteratively co-design the trust measure with the
community for whom the HRI system is being created. The process would
prioritize that community's needs and unique circumstances to produce a trust
measure that accurately reflects the factors that impact their trust in a
robot.
Related papers
- A Vision to Enhance Trust Requirements for Peer Support Systems by Revisiting Trust Theories [3.4971302832462476]
This vision paper focuses on the mental health crisis impacting healthcare workers (HCWs).
The study proposes a novel approach to eliciting perceptual trust requirements through a trust framework anchored in recognized trust theories.
arXiv Detail & Related papers (2024-06-06T02:21:11Z)
- Common (good) practices measuring trust in HRI [55.2480439325792]
Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives.
Researchers have been exploring how people trust robots in different ways.
Most roboticists agree that insufficient levels of trust lead to a risk of disengagement.
arXiv Detail & Related papers (2023-11-20T20:52:10Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Rethinking Trust Repair in Human-Robot Interaction [1.52292571922932]
Despite emerging research on trust repair in human-robot interaction, significant questions remain about identifying reliable approaches to restoring trust in robots after trust violations occur.
My research aims to identify effective strategies for designing robots capable of trust repair in human-robot interaction (HRI).
This paper provides an overview of the fundamental concepts and key components of the trust repair process in HRI, as well as a summary of my current published work in this area.
arXiv Detail & Related papers (2022-12-20T18:45:12Z)
- Trustworthy Social Bias Measurement [92.87080873893618]
In this work, we design bias measures that warrant trust based on the cross-disciplinary theory of measurement modeling.
We operationalize our definition by proposing a general bias measurement framework DivDist, which we use to instantiate 5 concrete bias measures.
We demonstrate considerable evidence to trust our measures, showing they overcome conceptual, technical, and empirical deficiencies present in prior measures.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2021-04-22T16:11:22Z)
- Trust as Extended Control: Active Inference and User Feedback During Human-Robot Collaboration [2.6381163133447836]
Despite its crucial role, it is largely unknown how trust emerges, develops, and supports human interactions with nonhuman artefacts.
We introduce a model of trust as an agent's best explanation for reliable sensory exchange with an extended motor plant or partner.
We examine the role of shared behavior in the genesis of trust, especially in the context of dyadic collaboration.
arXiv Detail & Related papers (2020-11-09T21:56:34Z)
- Modeling Trust in Human-Robot Interaction: A Survey [1.4502611532302039]
Appropriate trust in robotic collaborators is one of the leading factors influencing the performance of human-robot interaction.
For trust calibration in HRI, trust needs to be modeled first.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-09-30T14:33:43Z)
- Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
arXiv Detail & Related papers (2020-09-30T14:33:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.