Human-centered trust framework: An HCI perspective
- URL: http://arxiv.org/abs/2305.03306v2
- Date: Mon, 15 May 2023 06:12:11 GMT
- Title: Human-centered trust framework: An HCI perspective
- Authors: Sonia Sousa, Jose Cravino, Paulo Martins, David Lamas
- Abstract summary: This work is grounded in the current discourse on user trust in Artificial Intelligence (AI). We propose a framework to guide non-experts to unlock the full potential of user trust in AI design.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This work is grounded in the current discourse on user trust in Artificial Intelligence (AI). We aim to produce novel HCI approaches that use
trust as a facilitator for the uptake (or appropriation) of current
technologies. We propose a framework (HCTFrame) to guide non-experts to unlock
the full potential of user trust in AI design. Results derived from a data
triangulation of findings from three literature reviews demystify some
misconceptions of user trust in computer science and AI discourse, and three
case studies are conducted to assess the effectiveness of a psychometric scale
in mapping potential users' trust breakdowns and concerns. This work primarily contributes to countering the tendency to design technically centered, vulnerability-prone interactions, which can eventually lead to additional real and perceived breaches of trust. The proposed framework can be used to guide system
designers on how to map and define user trust and the socioethical and
organisational needs and characteristics of AI system design. It can also guide
AI system designers on how to develop a prototype and operationalise a solution
that meets user trust requirements. The article ends by providing some user
research tools that can be employed to measure users' trust intentions and
behaviours towards a proposed solution.
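The abstract mentions a psychometric scale and user research tools for measuring trust, but does not reproduce the instrument here. As a minimal sketch of how responses to a Likert-style trust scale are typically scored, the snippet below averages item responses after reverse-coding negatively worded items. The item names, the 5-point range, and the reverse-coded set are illustrative assumptions, not the HCTFrame scale itself.

```python
# Illustrative scoring of a hypothetical Likert-style trust scale.
# Item names, the 5-point range, and the reverse-coded set are
# assumptions for this sketch, not the paper's instrument.

from statistics import mean

LIKERT_MIN, LIKERT_MAX = 1, 5          # assumed 5-point agreement scale
REVERSE_CODED = {"risk_item"}          # hypothetical negatively worded item

def score_trust_scale(responses: dict[str, int]) -> float:
    """Return the mean trust score, reverse-coding negative items."""
    adjusted = []
    for item, value in responses.items():
        if not LIKERT_MIN <= value <= LIKERT_MAX:
            raise ValueError(f"{item}: response {value} outside scale range")
        if item in REVERSE_CODED:
            value = LIKERT_MIN + LIKERT_MAX - value   # e.g. 5 -> 1, 2 -> 4
        adjusted.append(value)
    return mean(adjusted)

# Example: one participant's answers to three hypothetical items.
answers = {"competence_item": 4, "benevolence_item": 5, "risk_item": 2}
print(score_trust_scale(answers))      # 4.333... -> fairly high reported trust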
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
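As a reminder of what the "components of empirical risk minimization" are, here is a minimal sketch: the model class (linear), the loss (squared error), and the optimizer (gradient descent) are each explicit design choices. All three are generic stand-ins for illustration, not the guide's recommendations.

```python
# Minimal empirical risk minimization: fit a 1-D linear model by
# minimizing the average squared loss over a dataset. Each component
# (model class, loss, optimizer) is a design choice.

def empirical_risk(w: float, b: float, data: list[tuple[float, float]]) -> float:
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def fit(data, lr=0.01, steps=2000):
    w = b = 0.0
    n = len(data)
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]   # noise-free y = 2x + 1
w, b = fit(data)
print(round(w, 2), round(b, 2), round(empirical_risk(w, b, data), 4))
```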
arXiv Detail & Related papers (2024-10-25T07:53:32Z) - The impact of labeling automotive AI as "trustworthy" or "reliable" on user evaluation and technology acceptance [0.0]
This study explores whether labeling AI as "trustworthy" or "reliable" influences user perceptions and acceptance of automotive AI technologies.
Using a one-way between-subjects design, the research involved 478 online participants who were presented with guidelines for either trustworthy or reliable AI.
Although labeling AI as "trustworthy" did not significantly influence judgments on specific scenarios, it increased perceived ease of use and human-like trust, particularly benevolence.
arXiv Detail & Related papers (2024-08-20T14:48:24Z) - PADTHAI-MM: A Principled Approach for Designing Trustable,
Human-centered AI systems using the MAST Methodology [5.38932801848643]
The Multisource AI Scorecard Table (MAST), a checklist rating system, addresses a gap in designing and evaluating AI-enabled decision support systems.
We propose the Principled Approach for Designing Trustable Human-centered AI systems using the MAST methodology.
We show that MAST-guided design can improve trust perceptions, and that MAST criteria can be linked to performance, process, and purpose information.
arXiv Detail & Related papers (2024-01-24T23:15:44Z) - A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
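Miscalibration of the kind this summary describes is commonly quantified with expected calibration error (ECE): bin predictions by confidence, then compare each bin's average confidence with its empirical accuracy. The sketch below is a generic ECE implementation for illustration; it is not claimed to be the metric or the betting game used in the paper.

```python
# Expected calibration error (ECE), a standard miscalibration measure.
# A well-calibrated model's 90%-confidence predictions are right ~90%
# of the time; the gap, weighted by bin size, is the ECE.

def expected_calibration_error(confidences, correct, n_bins=10):
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)   # conf == 1.0 -> last bin
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(o for _, o in bucket) / len(bucket)
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece

# An overconfident model: 90% reported confidence, 60% actually correct.
confs = [0.9] * 10
hits = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
print(expected_calibration_error(confs, hits))   # ~0.3
```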
arXiv Detail & Related papers (2023-10-20T14:41:46Z) - Investigating and Designing for Trust in AI-powered Code Generation Tools [15.155301866886647]
We interviewed developers to understand their challenges in building appropriate trust in AI code generation tools.
We conducted a design probe study to explore design concepts that support developers' trust-building process.
These findings inform our proposed design recommendations on how to design for trust in AI-powered code generation tools.
arXiv Detail & Related papers (2023-05-18T18:23:51Z) - A Systematic Literature Review of User Trust in AI-Enabled Systems: An
HCI Perspective [0.0]
User trust in Artificial Intelligence (AI) enabled systems has been increasingly recognized and shown to be a key element in fostering adoption.
This review aims to provide an overview of the user trust definitions, influencing factors, and measurement methods from 23 empirical studies.
arXiv Detail & Related papers (2023-04-18T07:58:09Z) - Designing for Responsible Trust in AI Systems: A Communication
Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Personalized multi-faceted trust modeling to determine trust links in
social media and its potential for misinformation management [61.88858330222619]
We present an approach for predicting trust links between peers in social media.
We propose a data-driven multi-faceted trust modeling which incorporates many distinct features for a comprehensive analysis.
Illustrated in a trust-aware item recommendation task, we evaluate the proposed framework in the context of a large Yelp dataset.
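As a toy illustration of what a multi-faceted trust model combines, the sketch below maps a few per-pair features to a trust-link probability with fixed weights. The feature names and weights are invented for this example; the paper's approach learns its model from data (e.g., the Yelp dataset) rather than hand-setting weights.

```python
# Toy multi-faceted trust score between two social-media users: combine
# several per-pair features (facets) into one probability. Feature
# names and weights are hypothetical, chosen only for illustration.

import math

# Hypothetical facets, each normalized to [0, 1].
WEIGHTS = {
    "interaction_frequency": 1.5,   # how often the pair interacts
    "rating_similarity": 2.0,       # agreement in past item ratings
    "network_overlap": 1.0,         # shared connections
}
BIAS = -2.0                         # baseline skepticism

def trust_link_probability(features: dict[str, float]) -> float:
    z = BIAS + sum(WEIGHTS[name] * features[name] for name in WEIGHTS)
    return 1 / (1 + math.exp(-z))   # logistic squashing to (0, 1)

pair = {"interaction_frequency": 0.8, "rating_similarity": 0.9, "network_overlap": 0.4}
print(round(trust_link_probability(pair), 3))   # ~0.8: likely trust link
```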
arXiv Detail & Related papers (2021-11-11T19:40:51Z) - Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and
Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.