Trust and Transparency in Recommender Systems
- URL: http://arxiv.org/abs/2304.08094v1
- Date: Mon, 17 Apr 2023 09:09:48 GMT
- Title: Trust and Transparency in Recommender Systems
- Authors: Clara Siepmann and Mohamed Amine Chatti
- Abstract summary: We first go through different understandings and measurements of trust in the AI and RS community, such as demonstrated and perceived trust.
We then review the relationships between trust and transparency, as well as mental models, and investigate different strategies to achieve transparency in RS.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Trust has long been recognized as an important factor in Recommender Systems
(RS). However, there are different perspectives on trust and different ways to
evaluate it. Moreover, a link between trust and transparency is often assumed
but not always further investigated. In this paper we first go through
different understandings and measurements of trust in the AI and RS community,
such as demonstrated and perceived trust. We then review the relationships
between trust and transparency, as well as mental models, and investigate
different strategies to achieve transparency in RS such as explanation,
exploration and exploranation (i.e., a combination of exploration and
explanation). We identify a need for further studies to explore these concepts
as well as the relationships between them.
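As a concrete illustration of the explanation strategy, here is a minimal, hypothetical sketch (not from the paper): an item-based recommender that pairs each suggestion with the liked item that triggered it. The similarity table and all names are invented for illustration.

```python
# Minimal sketch of "explanation" as a transparency strategy in RS:
# an item-based recommender that tells the user *why* an item was
# recommended. The toy similarity table and all names are illustrative.

from typing import Dict, List, Tuple

# Hypothetical item-item similarity scores (e.g., from co-rating data).
SIMILARITY: Dict[str, Dict[str, float]] = {
    "The Matrix": {"Blade Runner": 0.9, "Inception": 0.8},
    "Amelie": {"Before Sunrise": 0.7, "Inception": 0.3},
}

def recommend_with_explanations(liked: List[str], k: int = 3) -> List[Tuple[str, str]]:
    """Return (item, explanation) pairs for the top-k similar items."""
    scored = {}  # candidate -> (score, liked item that triggered it)
    for item in liked:
        for candidate, sim in SIMILARITY.get(item, {}).items():
            if candidate not in liked and sim > scored.get(candidate, (0.0, ""))[0]:
                scored[candidate] = (sim, item)
    top = sorted(scored.items(), key=lambda kv: kv[1][0], reverse=True)[:k]
    return [(c, f"recommended because you liked {src}") for c, (sim, src) in top]

print(recommend_with_explanations(["The Matrix", "Amelie"]))
```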
Related papers
- Distrust in (X)AI -- Measurement Artifact or Distinct Construct? [0.0]
Trust is a key motivation in developing explainable artificial intelligence (XAI).
Distrust seems relatively understudied in XAI.
Psychometric evidence favors a distinction between trust and distrust.
arXiv Detail & Related papers (2023-03-29T07:14:54Z)
- KGTrust: Evaluating Trustworthiness of SIoT via Knowledge Enhanced Graph Neural Networks [63.531790269009704]
The Social Internet of Things (SIoT) is a promising and emerging paradigm that injects the notion of social networking into smart objects (i.e., things).
Given the risks and uncertainty involved, a crucial and urgent problem to be settled is establishing reliable relationships within SIoT, that is, trust evaluation.
We propose a novel knowledge-enhanced graph neural network (KGTrust) for better trust evaluation in SIoT.
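As a rough illustration of GNN-style trust evaluation (a generic sketch, not KGTrust's actual architecture), the snippet below repeatedly blends each device's trust estimate with a weighted average of its neighbours' estimates; the graph and scores are invented.

```python
# Generic sketch of GNN-style trust evaluation on a device graph (NOT
# KGTrust's architecture): each node's trust estimate is refined by
# mixing in the weighted mean of its neighbours' estimates.

import numpy as np

# Hypothetical SIoT graph: adjacency weights (interaction strength) and
# an initial per-device trust prior in [0, 1].
A = np.array([[0, 1, 0],
              [1, 0, 2],
              [0, 2, 0]], dtype=float)
trust = np.array([0.9, 0.5, 0.2])

def propagate(trust: np.ndarray, A: np.ndarray, alpha: float = 0.5, steps: int = 3) -> np.ndarray:
    """Blend each node's trust with the weighted mean of its neighbours'."""
    deg = A.sum(axis=1, keepdims=True)
    P = A / np.where(deg == 0, 1, deg)  # row-normalized propagation matrix
    for _ in range(steps):
        trust = (1 - alpha) * trust + alpha * P @ trust
    return trust

print(propagate(trust, A))  # smoothed trust scores per device
```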
arXiv Detail & Related papers (2023-02-22T14:24:45Z)
- The Many Facets of Trust in AI: Formalizing the Relation Between Trust and Fairness, Accountability, and Transparency [4.003809001962519]
Efforts to promote fairness, accountability, and transparency are assumed to be critical in fostering Trust in AI (TAI).
The lack of exposition on trust itself suggests that trust is assumed to be commonly understood, uncomplicated, or even uninteresting.
Our analysis of TAI publications reveals numerous orientations which differ in terms of who is doing the trusting (agent), in what (object), on the basis of what (basis), in order to what (objective), and why (impact).
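One way to make these five dimensions concrete is as a record type; a minimal sketch, with field names mirroring the abstract's questions and example values invented:

```python
# The five dimensions of a trust orientation as a record type; the field
# names mirror the questions in the abstract, the example values are invented.

from dataclasses import dataclass

@dataclass
class TrustOrientation:
    agent: str      # who is doing the trusting
    obj: str        # in what
    basis: str      # on the basis of what
    objective: str  # in order to what
    impact: str     # why it matters

example = TrustOrientation(
    agent="end user",
    obj="a loan-approval model",
    basis="transparency of its decision rules",
    objective="deciding whether to follow its advice",
    impact="uptake of the system",
)
print(example)
```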
arXiv Detail & Related papers (2022-08-01T08:26:57Z)
- TrustGNN: Graph Neural Network based Trust Evaluation via Learnable Propagative and Composable Nature [63.78619502896071]
Trust evaluation is critical for many applications such as cyber security, social communication and recommender systems.
We propose a new GNN-based trust evaluation method named TrustGNN, which smartly integrates the propagative and composable nature of trust graphs.
Specifically, TrustGNN designs dedicated propagative patterns for different propagation processes of trust and distinguishes their contributions when creating new trust.
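A minimal sketch of the propagative and composable view of trust (our simplification, not TrustGNN's learned variant): trust composes multiplicatively along a path and aggregates over alternative paths, here by taking the best path. All edges and scores are invented.

```python
# Hand-rolled trust propagation and composition (NOT TrustGNN's learned
# version): direct trust edges, truster -> {trustee: score in [0, 1]}.
EDGES = {
    "alice": {"bob": 0.9},
    "bob": {"carol": 0.8, "dave": 0.4},
}

def composed_trust(src: str, dst: str, visited=frozenset()) -> float:
    """Best-path trust: compose scores along a path by multiplication,
    aggregate across alternative paths by taking the maximum."""
    if dst in EDGES.get(src, {}):
        return EDGES[src][dst]
    best = 0.0
    for mid, score in EDGES.get(src, {}).items():
        if mid not in visited:
            best = max(best, score * composed_trust(mid, dst, visited | {src}))
    return best

print(composed_trust("alice", "carol"))  # 0.9 * 0.8 = 0.72
```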
arXiv Detail & Related papers (2022-05-25T13:57:03Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Trust and Reliance in XAI -- Distinguishing Between Attitudinal and Behavioral Measures [0.0]
Researchers argue that AI should be more transparent to increase trust, making transparency one of the main goals of XAI.
However, empirical research on this topic is inconclusive regarding the effect of transparency on trust.
We advocate for a clear distinction between behavioral (objective) measures of reliance and attitudinal (subjective) measures of trust.
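A minimal sketch of that distinction, with invented data: reliance is computed from observed advice-taking behaviour, while trust is scored from self-reported Likert items.

```python
# Sketch of the measurement distinction: reliance as observed behaviour
# (how often participants follow the AI's advice) vs. trust as a
# self-report scale. Data and scale are invented for illustration.

from statistics import mean

# Behavioral (objective): did the participant follow the AI on each trial?
followed_advice = [True, True, False, True, True, False, True]
reliance = mean(followed_advice)  # proportion of advice-taking, 0..1

# Attitudinal (subjective): 7-point Likert responses to trust items.
likert_responses = [6, 5, 7, 4]
trust = (mean(likert_responses) - 1) / 6  # rescale 1..7 -> 0..1

print(f"reliance (behavioral): {reliance:.2f}, trust (attitudinal): {trust:.2f}")
```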
arXiv Detail & Related papers (2022-03-23T10:39:39Z)
- Relativistic Conceptions of Trustworthiness: Implications for the Trustworthy Status of National Identification Systems [1.4728207711693404]
This article outlines a new account of trustworthiness, dubbed the expectation-oriented account.
To be trustworthy, we suggest, is to minimize the error associated with trustor expectations in situations of social dependency.
In addition to outlining the features of the expectation-oriented account, we describe some of the implications of this account for the design, development, and management of trustworthy NISs.
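One possible quantitative reading of this account (our gloss, not the authors' formalization) treats trustworthiness as inversely related to the error between trustor expectations and observed behaviour:

```python
# Our gloss on the expectation-oriented account, NOT the authors'
# formalization: trustworthiness decreases with the error between what
# trustors expected and what the system actually did.

from statistics import mean

# Hypothetical paired observations: expected vs. actual outcomes, both
# coded on a 0..1 scale (e.g., expected vs. delivered accuracy).
expected = [0.9, 0.8, 0.95, 0.9]
actual   = [0.85, 0.8, 0.7, 0.9]

expectation_error = mean((e - a) ** 2 for e, a in zip(expected, actual))
trustworthiness = 1 - expectation_error  # higher = fewer violated expectations
print(f"expectation error: {expectation_error:.4f} -> trustworthiness ~ {trustworthiness:.4f}")
```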
arXiv Detail & Related papers (2021-12-17T18:40:44Z)
- More Similar Values, More Trust? -- the Effect of Value Similarity on Trust in Human-Agent Interaction [6.168444105072466]
This paper studies how human and agent Value Similarity (VS) influences a human's trust in that agent.
In a scenario-based experiment, 89 participants teamed up with five different agents, which were designed with varying levels of value similarity to the participants' own values.
Our results show that agents rated as having more similar values also scored higher on trust, indicating a positive relationship between the two.
arXiv Detail & Related papers (2021-05-19T16:06:46Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of a trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
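A simplified sketch in the spirit of a trust matrix (not the paper's exact formulation): each cell averages a per-question trust score over the cases where the oracle says class i and the actor answers class j. The per-question score and the toy data are invented.

```python
# Simplified trust matrix, in the spirit of the paper but not its exact
# formulation: cell [i, j] holds the average per-question trust where the
# oracle says class i and the actor answers class j.

import numpy as np

def question_trust(correct: bool, confidence: float) -> float:
    # Simplified: reward confidence when right, penalize it when wrong.
    return confidence if correct else 1.0 - confidence

def trust_matrix(oracle, answers, confidences, n_classes):
    M = np.zeros((n_classes, n_classes))
    counts = np.zeros((n_classes, n_classes))
    for o, a, c in zip(oracle, answers, confidences):
        M[o, a] += question_trust(o == a, c)
        counts[o, a] += 1
    return M / np.where(counts == 0, 1, counts)

# Toy 2-class example: oracle labels, model answers, model confidences.
print(trust_matrix([0, 0, 1, 1], [0, 1, 1, 1], [0.9, 0.8, 0.7, 0.6], 2))
```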
arXiv Detail & Related papers (2020-09-30T14:33:43Z)
- How Much Can We Really Trust You? Towards Simple, Interpretable Trust Quantification Metrics for Deep Neural Networks [94.65749466106664]
We conduct a thought experiment and explore two key questions about trust in relation to confidence.
We introduce a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions.
The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics.
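A simplified scalar sketch in the same spirit (again, not the paper's exact formulas): average a per-question trust score, which rewards confident correct answers and penalizes confident wrong ones, over a question set.

```python
# Simplified scalar trust metric, NOT the paper's exact formulas: average
# a per-question trust score over a question set to get one overall
# trustworthiness number for the network.

def overall_trust(results):
    """results: list of (correct: bool, confidence: float) per question."""
    scores = [c if correct else 1.0 - c for correct, c in results]
    return sum(scores) / len(scores)

# Hypothetical evaluation run: an overconfident wrong answer hurts more
# than an unconfident one.
print(overall_trust([(True, 0.95), (False, 0.9), (False, 0.55), (True, 0.8)]))
```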
arXiv Detail & Related papers (2020-09-12T17:37:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.