Contextual Trust
- URL: http://arxiv.org/abs/2303.08900v1
- Date: Wed, 15 Mar 2023 19:34:58 GMT
- Title: Contextual Trust
- Authors: Ryan Othniel Kearns
- Abstract summary: I examine the nature of trust from a philosophical perspective.
I propose to view trust as a context-sensitive state in a manner that will be made precise.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Trust is an important aspect of human life. It provides instrumental value in
allowing us to collaborate on and defer actions to others, and intrinsic value
in our intimate relationships with romantic partners, family, and friends. In
this paper I examine the nature of trust from a philosophical perspective.
Specifically I propose to view trust as a context-sensitive state in a manner
that will be made precise. The contribution of this paper is threefold.
First, I make the simple observation that an individual's trust is typically
both action- and context-sensitive. Action-sensitivity means that trust may
obtain between a given truster and trustee for only certain actions.
Context-sensitivity means that trust may obtain between a given truster and
trustee, regarding the same action, in some conditions and not others. I also
opine about what kinds of things may play the role of the truster, trustee, and
action.
Second, I advance a theory for the nature of contextual trust. I propose that
the answer to "What does it mean for $A$ to trust $B$ to do $X$ in context
$C$?" has two conditions. First, $A$ must take $B$'s doing $X$ as a means
towards one of $A$'s ends. Second, $A$ must adopt an unquestioning attitude
concerning $B$'s doing $X$ in context $C$. This unquestioning attitude is
similar to the attitude introduced in Nguyen 2021.
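Schematically, using only the symbols already introduced (the biconditional gloss below is mine, not the paper's):
$$\mathrm{Trust}(A, B, X, C) \iff \mathrm{Means}(A, B, X) \,\wedge\, \mathrm{Unquestioning}(A, B, X, C)$$
where $\mathrm{Means}(A, B, X)$ holds when $A$ takes $B$'s doing $X$ as a means towards one of $A$'s ends, and $\mathrm{Unquestioning}(A, B, X, C)$ holds when $A$ adopts an unquestioning attitude concerning $B$'s doing $X$ in context $C$.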
Finally, I explore how contextual trust can help us make sense of trust in
general non-interpersonal settings, notably that of artificial intelligence
(AI) systems. The field of Explainable Artificial Intelligence (XAI) assigns
paramount importance to the problem of user trust in opaque computational
models, yet does little to give trust diagnostic or even conceptual criteria. I
propose that contextual trust is a natural fit for the task by illustrating
that model transparency and explainability map nicely into our construction of
the contexts $C$.
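As a loose illustration of how such contexts might be encoded computationally, here is a minimal sketch; the class, field, and function names are my own invention, not the paper's, and the two trust conditions are taken directly from the abstract.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    """A context C, built in part from properties of an AI model.

    The abstract suggests transparency and explainability map into
    the construction of contexts; these fields are one assumed encoding.
    """
    transparent: bool       # is the model's mechanism inspectable?
    explainable: bool       # are per-decision explanations available?
    description: str = ""   # free-form conditions (stakes, domain, ...)

def trusts(A: str, B: str, X: str, C: Context,
           takes_as_means: bool, unquestioning: bool) -> bool:
    """Contextual trust: A trusts B to do X in context C iff
    (1) A takes B's doing X as a means to one of A's ends, and
    (2) A adopts an unquestioning attitude toward B's doing X in C."""
    return takes_as_means and unquestioning

# Usage: trusting an opaque but explainable model in a low-stakes context.
low_stakes = Context(transparent=False, explainable=True,
                     description="movie recommendations")
print(trusts("user", "classifier", "rank movies", low_stakes,
             takes_as_means=True, unquestioning=True))  # True
```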
Related papers
- Fostering Trust and Quantifying Value of AI and ML [0.0]
Much has been discussed about trusting AI and ML inferences, but little has been done to define what that means.
Producing ever more trustworthy machine learning inferences is a path to increasing the value of products.
arXiv Detail & Related papers (2024-07-08T13:25:28Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Distrust in (X)AI -- Measurement Artifact or Distinct Construct? [0.0]
Trust is a key motivation in developing explainable artificial intelligence (XAI).
Distrust seems relatively understudied in XAI.
Psychometric evidence favors a distinction between trust and distrust.
arXiv Detail & Related papers (2023-03-29T07:14:54Z)
- KGTrust: Evaluating Trustworthiness of SIoT via Knowledge Enhanced Graph Neural Networks [63.531790269009704]
Social Internet of Things (SIoT) is a promising and emerging paradigm that injects the notion of social networking into smart objects (i.e., things).
Due to the risks and uncertainty involved, a crucial and urgent problem is establishing reliable relationships within SIoT, that is, trust evaluation.
We propose a novel knowledge-enhanced graph neural network (KGTrust) for better trust evaluation in SIoT.
arXiv Detail & Related papers (2023-02-22T14:24:45Z)
- Are we measuring trust correctly in explainability, interpretability, and transparency research? [4.452019519213712]
This paper showcases three methods that measure perceived and demonstrated trust well.
It is intended to be a starting point for discussion on this topic, rather than the final say.
arXiv Detail & Related papers (2022-08-31T07:41:08Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Trust in AI: Interpretability is not necessary or sufficient, while black-box interaction is necessary and sufficient [0.0]
The problem of human trust in artificial intelligence is one of the most fundamental problems in applied machine learning.
We draw from statistical learning theory and sociological lenses on human-automation trust to motivate an AI-as-tool framework.
We clarify the role of interpretability in trust with a ladder of model access.
arXiv Detail & Related papers (2022-02-10T19:59:23Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of a trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
arXiv Detail & Related papers (2020-09-30T14:33:43Z)
- How Much Can We Really Trust You? Towards Simple, Interpretable Trust Quantification Metrics for Deep Neural Networks [94.65749466106664]
We conduct a thought experiment and explore two key questions about trust in relation to confidence.
We introduce a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions.
The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics.
arXiv Detail & Related papers (2020-09-12T17:37:36Z)