Designing for Responsible Trust in AI Systems: A Communication Perspective
- URL: http://arxiv.org/abs/2204.13828v1
- Date: Fri, 29 Apr 2022 00:14:33 GMT
- Authors: Q. Vera Liao and S. Shyam Sundar
- Abstract summary: We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current literature and public discourse on "trust in AI" are often focused on
the principles underlying trustworthy AI, with insufficient attention paid to
how people develop trust. Given that AI systems differ in their level of
trustworthiness, two open questions come to the fore: how should AI
trustworthiness be responsibly communicated to ensure appropriate and equitable
trust judgments by different users, and how can we protect users from deceptive
attempts to earn their trust? We draw from communication theories and
literature on trust in technologies to develop a conceptual model called MATCH,
which describes how trustworthiness is communicated in AI systems through
trustworthiness cues and how those cues are processed by people to make trust
judgments. Besides AI-generated content, we highlight transparency and
interaction as AI systems' affordances that present a wide range of
trustworthiness cues to users. By bringing to light the variety of users'
cognitive processes to make trust judgments and their potential limitations, we
urge technology creators to make conscious decisions in choosing reliable
trustworthiness cues for target users and, as an industry, to regulate this
space and prevent malicious use. Towards these goals, we define the concepts of
warranted trustworthiness cues and expensive trustworthiness cues, and propose
a checklist of requirements to help technology creators identify appropriate
cues to use. We present a hypothetical use case to illustrate how practitioners
can use MATCH to design AI systems responsibly, and discuss future directions
for research and industry efforts aimed at promoting responsible trust in AI.
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Trust in AI: Progress, Challenges, and Future Directions [6.724854390957174]
The increasing use of artificial intelligence (AI) systems in daily life underscores the significance of trust/distrust in AI from a user perspective.
Trust/distrust in AI acts as a regulator and can significantly control the level of AI's diffusion.
arXiv Detail & Related papers (2024-03-12T20:26:49Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust [0.0]
We argue for the need to distinguish these concepts more clearly.
We discuss that trust in AI involves not only reliance on the system itself, but also trust in the developers of the AI system.
arXiv Detail & Related papers (2023-09-19T05:00:34Z)
- Human-centered trust framework: An HCI perspective [1.6344851071810074]
This work is grounded in the current discourse on user trust in Artificial Intelligence (AI).
We propose a framework to guide non-experts to unlock the full potential of user trust in AI design.
arXiv Detail & Related papers (2023-05-05T06:15:32Z)
- A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective [0.0]
User trust in Artificial Intelligence (AI)-enabled systems has been increasingly recognized as a key element in fostering adoption.
This review aims to provide an overview of the user trust definitions, influencing factors, and measurement methods from 23 empirical studies.
arXiv Detail & Related papers (2023-04-18T07:58:09Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: From Principles to Practices [44.67324097900778]
Many current AI systems were found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, etc.
In this review, we strive to provide AI practitioners a comprehensive guide towards building trustworthy AI systems.
To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems.
arXiv Detail & Related papers (2021-10-04T03:20:39Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.