Fairness and Transparency in Recommendation: The Users' Perspective
- URL: http://arxiv.org/abs/2103.08786v1
- Date: Tue, 16 Mar 2021 00:42:09 GMT
- Title: Fairness and Transparency in Recommendation: The Users' Perspective
- Authors: Nasim Sonboli, Jessie J. Smith, Florencia Cabral Berenfus, Robin
Burke, Casey Fiesler
- Abstract summary: We discuss user perspectives of fairness-aware recommender systems.
We propose three features that could improve user understanding of and trust in fairness-aware recommender systems.
- Score: 14.830700792215849
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Though recommender systems are defined by personalization, recent work has
shown the importance of additional, beyond-accuracy objectives, such as
fairness. Because users often expect their recommendations to be purely
personalized, these new algorithmic objectives must be communicated
transparently in a fairness-aware recommender system. While explanation has a
long history in recommender systems research, there has been little work that
attempts to explain systems that use a fairness objective. Even though
previous work in other branches of AI has explored the use of explanations as a
tool to increase fairness, that work has not focused on recommendation.
Here, we consider user perspectives of fairness-aware recommender systems and
techniques for enhancing their transparency. We describe the results of an
exploratory interview study that investigates user perceptions of fairness,
recommender systems, and fairness-aware objectives. We propose three features
-- informed by the needs of our participants -- that could improve user
understanding of and trust in fairness-aware recommender systems.
Related papers
- User-Controllable Recommendation via Counterfactual Retrospective and
Prospective Explanations [96.45414741693119]
We present a user-controllable recommender system that seamlessly integrates explainability and controllability.
The system provides both retrospective and prospective explanations through counterfactual reasoning, letting users customize their control over it.
arXiv Detail & Related papers (2023-08-02T01:13:36Z)
- A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z)
- Fairness in Recommendation: Foundations, Methods and Applications [38.63520487389138]
This survey focuses on the foundations of fairness in the recommendation literature.
It first presents a brief introduction to fairness in basic machine learning tasks such as classification and ranking.
After that, it introduces fairness in recommendation, focusing on definitions, typical techniques for improving fairness, and datasets for fairness studies in recommendation.
arXiv Detail & Related papers (2022-05-26T20:48:53Z)
- Towards Personalized Fairness based on Causal Notion [18.5897206797918]
We introduce a framework for achieving counterfactually fair recommendations through adversarial learning.
Our method generates fairer recommendations for users while maintaining desirable recommendation performance.
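For a rough sense of the general recipe (a minimal sketch under assumed names and toy data, not the authors' implementation), an adversarial setup trains the recommender to fit ratings while an auxiliary network tries, and is pushed to fail, to recover a sensitive attribute from the learned user embeddings:
```python
# Minimal sketch (assumed, not the paper's code): adversarial training that
# discourages user embeddings from encoding a sensitive attribute, a common
# proxy for counterfactually fair recommendation.
import torch
import torch.nn as nn


class MFRecommender(nn.Module):
    """Plain matrix factorization: score = dot(user embedding, item embedding)."""
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)

    def forward(self, u, i):
        return (self.user(u) * self.item(i)).sum(-1)


class Adversary(nn.Module):
    """Tries to predict the sensitive attribute from a user embedding."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, user_emb):
        return self.net(user_emb).squeeze(-1)


# Hypothetical toy batch: user ids, item ids, ratings, binary sensitive attribute.
u = torch.randint(0, 100, (64,))
i = torch.randint(0, 500, (64,))
r = torch.rand(64) * 5
s = torch.randint(0, 2, (64,)).float()

rec, adv = MFRecommender(100, 500), Adversary()
opt_rec = torch.optim.Adam(rec.parameters(), lr=1e-2)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 0.5  # assumed fairness/accuracy trade-off weight

for step in range(10):
    # 1) Adversary step: learn to recover s from (detached) user embeddings.
    adv_loss = bce(adv(rec.user(u).detach()), s)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Recommender step: fit the ratings while making the adversary fail.
    rating_loss = ((rec(u, i) - r) ** 2).mean()
    fool_loss = -bce(adv(rec.user(u)), s)  # maximize the adversary's error
    opt_rec.zero_grad()
    (rating_loss + lam * fool_loss).backward()
    opt_rec.step()
```
The counterfactual framing asks that a user's recommendations stay the same had their sensitive attribute been different; stripping that attribute's signal from the user representation, as above, is one common way to approximate this.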
arXiv Detail & Related papers (2021-05-20T15:24:34Z)
- Knowledge Transfer via Pre-training for Recommendation: A Review and Prospect [89.91745908462417]
We show the benefits of pre-training for recommender systems through experiments.
We discuss several promising directions for future research on recommender systems with pre-training.
arXiv Detail & Related papers (2020-09-19T13:06:27Z)
- DeepFair: Deep Learning for Improving Fairness in Recommender Systems [63.732639864601914]
The lack of bias management in recommender systems leads to minority groups receiving unfair recommendations.
We propose a deep-learning-based collaborative filtering algorithm that provides recommendations with an optimum balance between fairness and accuracy without knowing demographic information about the users.
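As an illustration of such a trade-off (a minimal sketch; the beta weight, the group labels, and the particular fairness penalty below are assumptions for this example, not DeepFair's actual formulation), a single objective can blend a rating-accuracy term with a group-disparity penalty:
```python
# Minimal sketch (assumed): a beta-weighted objective that trades rating
# accuracy against a simple group-disparity fairness penalty.
import numpy as np


def accuracy_loss(pred, true):
    # Mean squared rating-prediction error.
    return np.mean((pred - true) ** 2)


def fairness_penalty(pred, group):
    # Hypothetical stand-in: gap between the mean predicted scores of two
    # item groups (e.g., niche vs. popular items).
    return abs(pred[group == 0].mean() - pred[group == 1].mean())


def combined_objective(pred, true, group, beta=0.3):
    # beta = 0 -> optimize accuracy only; beta = 1 -> optimize fairness only.
    return (1 - beta) * accuracy_loss(pred, true) + beta * fairness_penalty(pred, group)


# Toy usage with random predictions, ratings, and group labels.
rng = np.random.default_rng(0)
pred = rng.uniform(1, 5, 1000)
true = rng.uniform(1, 5, 1000)
group = rng.integers(0, 2, 1000)
print(combined_objective(pred, true, group, beta=0.3))
```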
arXiv Detail & Related papers (2020-06-09T13:39:38Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
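For illustration only (a minimal greedy sketch under assumed inputs, not the paper's actual algorithm), one common re-ranking pattern fills top-N slots by relevance but forces in items from a protected group whenever the remaining slots could no longer meet a minimum-exposure quota:
```python
# Minimal sketch (assumed): greedy re-ranking that reserves a minimum number of
# top-N slots for a protected item group while otherwise following relevance.
def rerank(candidates, top_n=10, min_protected=3):
    """candidates: list of (item_id, relevance, is_protected) tuples."""
    ranked = []
    remaining = list(candidates)
    needed = min_protected
    for slot in range(top_n):
        slots_left = top_n - slot
        if needed >= slots_left:
            # The quota would otherwise become unreachable: restrict the pool
            # to protected items (fall back to all items if none are left).
            pool = [c for c in remaining if c[2]] or remaining
        else:
            pool = remaining
        pick = max(pool, key=lambda c: c[1])  # most relevant admissible item
        ranked.append(pick)
        remaining.remove(pick)
        needed = max(needed - int(pick[2]), 0)
    return ranked


# Toy usage: 20 candidates with decaying relevance; every third item is protected.
cands = [(i, 1.0 - 0.04 * i, i % 3 == 0) for i in range(20)]
print([item_id for item_id, _, _ in rerank(cands)])
```
The quota only overrides relevance when it is about to become infeasible, so accuracy is sacrificed only where needed; like the paper's approach, this constrains the re-ranking stage rather than retraining the underlying model.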
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
- Survey for Trust-aware Recommender Systems: A Deep Learning Perspective [48.2733163413522]
Building trustworthy recommender systems has become critical.
This survey provides a systematic summary of three categories of trust-aware recommender systems.
arXiv Detail & Related papers (2020-04-08T02:11:55Z)
- Exploring User Opinions of Fairness in Recommender Systems [13.749884072907163]
We ask users what their ideas of fair treatment in recommendation might be.
We analyze what might cause discrepancies or changes in users' opinions about fairness.
arXiv Detail & Related papers (2020-03-13T19:44:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.