Why should I not follow you? Reasons For and Reasons Against in
Responsible Recommender Systems
- URL: http://arxiv.org/abs/2009.01953v2
- Date: Tue, 8 Sep 2020 15:15:41 GMT
- Title: Why should I not follow you? Reasons For and Reasons Against in
Responsible Recommender Systems
- Authors: Gustavo Padilha Polleti, Douglas Luan de Souza, Fabio Cozman
- Abstract summary: We argue that an RS can better enhance overall trust and transparency by frankly displaying both kinds of reasons to users.
We have developed such an RS by exploiting knowledge graphs and by applying Snedegar's theory of practical reasoning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A few Recommender Systems (RS) resort to explanations so as to enhance trust
in recommendations. However, current techniques for explanation generation tend
to strongly uphold the recommended products instead of presenting both reasons
for and reasons against them. We argue that an RS can better enhance overall
trust and transparency by frankly displaying both kinds of reasons to users. We
have developed such an RS by exploiting knowledge graphs and by applying
Snedegar's theory of practical reasoning. We show that our implemented RS has
excellent performance and we report on an experiment with human subjects that
shows the value of presenting both reasons for and against, with significant
improvements in trust, engagement, and persuasion.
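The abstract only sketches the approach, so the following is purely an illustrative toy, not the authors' implementation (Snedegar's theory of practical reasoning weighs competing reasons far more subtly than simple counting). All item names, attributes, and the miniature "knowledge graph" below are hypothetical; the point is only to show what surfacing both reasons for and reasons against a recommendation could look like:

```python
# Toy sketch: rank items by weighing reasons for and against, and keep
# both lists so they can be shown to the user. Hypothetical data only.

# Toy knowledge graph: item -> attributes reachable from it.
KNOWLEDGE_GRAPH = {
    "item_a": {"genre:jazz", "explicit_lyrics"},
    "item_b": {"genre:jazz", "artist:popular"},
}

USER_LIKES = {"genre:jazz", "artist:popular"}  # attributes the user favors
USER_DISLIKES = {"explicit_lyrics"}            # attributes the user avoids


def explain(item):
    """Return (score, reasons_for, reasons_against) for one item."""
    attrs = KNOWLEDGE_GRAPH[item]
    reasons_for = sorted(attrs & USER_LIKES)
    reasons_against = sorted(attrs & USER_DISLIKES)
    # Crude additive weighing; a faithful treatment of practical
    # reasoning would compare competing reasons, not just count them.
    score = len(reasons_for) - len(reasons_against)
    return score, reasons_for, reasons_against


def recommend(items):
    """Rank items, keeping both kinds of reasons for display."""
    ranked = sorted(items, key=lambda i: explain(i)[0], reverse=True)
    return [(i, *explain(i)[1:]) for i in ranked]
```

Here `item_b` would rank above `item_a`, yet the user would still see `explicit_lyrics` listed as a reason against `item_a` rather than having it silently suppressed, which is the transparency behavior the paper argues for.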
Related papers
- Less is More: Towards Sustainability-Aware Persuasive Explanations in Recommender Systems [42.296965577732045]
We discuss the concept of "sustainability-aware persuasive explanations".
Based on a user study in three item domains, we analyze the potential impacts of sustainability-aware persuasive explanations.
arXiv Detail & Related papers (2024-09-27T12:24:10Z)
- Beyond Persuasion: Towards Conversational Recommender System with Credible Explanations [63.05026345443155]
We propose a simple yet effective method, called PC-CRS, to enhance the credibility of CRS's explanations during persuasion.
Experimental results demonstrate the efficacy of PC-CRS in promoting persuasive and credible explanations.
Further analysis reveals why current methods produce non-credible explanations and shows the potential of credible explanations to improve recommendation accuracy.
arXiv Detail & Related papers (2024-09-22T11:35:59Z)
- Stability of Explainable Recommendation [10.186029242664931]
We study the vulnerability of existing feature-oriented explainable recommenders.
We observe that all the explainable models are vulnerable to increased noise levels.
Our study presents an empirical verification on the topic of robust explanations in recommender systems.
arXiv Detail & Related papers (2024-05-03T04:44:51Z)
- User-Controllable Recommendation via Counterfactual Retrospective and Prospective Explanations [96.45414741693119]
We present a user-controllable recommender system that seamlessly integrates explainability and controllability.
By providing both retrospective and prospective explanations through counterfactual reasoning, users can customize their control over the system.
arXiv Detail & Related papers (2023-08-02T01:13:36Z)
- Justification vs. Transparency: Why and How Visual Explanations in a Scientific Literature Recommender System [0.0]
We identify relationships between Why and How explanation intelligibility types and the explanation goals of justification and transparency.
Our study shows that the choice of the explanation intelligibility types depends on the explanation goal and user type.
arXiv Detail & Related papers (2023-05-26T15:40:46Z)
- Measuring "Why" in Recommender Systems: a Comprehensive Survey on the Evaluation of Explainable Recommendation [87.82664566721917]
This survey is based on more than 100 papers from top-tier conferences like IJCAI, AAAI, TheWebConf, Recsys, UMAP, and IUI.
arXiv Detail & Related papers (2022-02-14T02:58:55Z)
- Explainability in Music Recommender Systems [69.0506502017444]
We discuss how explainability can be addressed in the context of Music Recommender Systems (MRSs).
MRSs are often quite complex and optimized for recommendation accuracy.
We show how explainability components can be integrated within an MRS and in what form explanations can be provided.
arXiv Detail & Related papers (2022-01-25T18:32:11Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
- Interacting with Explanations through Critiquing [40.69540222716043]
We present a technique that learns to generate personalized explanations of recommendations from review texts.
We show that human users significantly prefer these explanations over those produced by state-of-the-art techniques.
Our work's most important innovation is that it allows users to react to a recommendation by critiquing the textual explanation.
arXiv Detail & Related papers (2020-05-22T09:03:06Z)
- Survey for Trust-aware Recommender Systems: A Deep Learning Perspective [48.2733163413522]
Building trustworthy recommender systems has therefore become critical.
This survey provides a systematic summary of three categories of trust-aware recommender systems.
arXiv Detail & Related papers (2020-04-08T02:11:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed papers) and is not responsible for any consequences of its use.