Online certification of preference-based fairness for personalized
recommender systems
- URL: http://arxiv.org/abs/2104.14527v1
- Date: Thu, 29 Apr 2021 17:45:27 GMT
- Title: Online certification of preference-based fairness for personalized
recommender systems
- Authors: Virginie Do, Sam Corbett-Davies, Jamal Atif, Nicolas Usunier
- Abstract summary: We assess the fairness of personalized recommender systems in the sense of envy-freeness.
We propose an auditing algorithm based on pure exploration and conservative constraints in multi-armed bandits.
- Score: 20.875347023588652
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose to assess the fairness of personalized recommender systems in the
sense of envy-freeness: every (group of) user(s) should prefer their
recommendations to the recommendations of other (groups of) users. Auditing for
envy-freeness requires probing user preferences to detect potential blind
spots, which may deteriorate recommendation performance. To control the cost of
exploration, we propose an auditing algorithm based on pure exploration and
conservative constraints in multi-armed bandits. We study, both theoretically
and empirically, the trade-offs achieved by this algorithm.
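The envy-freeness criterion above can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: the utility matrix is a hypothetical stand-in for the preference estimates that the paper's bandit-based auditor would probe online.

```python
import numpy as np

def envy_free(utility, tol=0.0):
    """Check envy-freeness: every user m should value their own
    recommendation policy at least as much as any other user's.

    utility[m][n] = estimated utility user m derives from the
    recommendations served to user n (hypothetical offline estimates;
    the paper obtains these via online bandit exploration).
    """
    utility = np.asarray(utility, dtype=float)
    own = np.diag(utility)            # utility of one's own recommendations
    best_other = utility.max(axis=1)  # best utility over anyone's recommendations
    envy = best_other - own           # per-user envy (0 means no envy)
    return bool(np.all(envy <= tol)), envy

# Toy example: user 1 envies user 0's recommendations by 0.3.
ok, envy = envy_free([[0.9, 0.2],
                      [0.8, 0.5]])
```

The auditing problem is harder than this snippet suggests because the off-diagonal utilities are unknown and must be estimated by showing users other users' recommendations, which is exactly the exploration cost the paper's conservative constraints control.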
Related papers
- A Unified Causal Framework for Auditing Recommender Systems for Ethical Concerns [40.793466500324904]
We view recommender system auditing from a causal lens and provide a general recipe for defining auditing metrics.
Under this general causal auditing framework, we categorize existing auditing metrics and identify gaps in them.
We propose two classes of such metrics: future- and past-reachability and stability, which measure the ability of a user to influence their own and other users' recommendations.
arXiv Detail & Related papers (2024-09-20T04:37:36Z)
- User-Controllable Recommendation via Counterfactual Retrospective and Prospective Explanations [96.45414741693119]
We present a user-controllable recommender system that seamlessly integrates explainability and controllability.
By providing both retrospective and prospective explanations through counterfactual reasoning, users can customize their control over the system.
arXiv Detail & Related papers (2023-08-02T01:13:36Z)
- A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z)
- Breaking Feedback Loops in Recommender Systems with Causal Inference [99.22185950608838]
Recent work has shown that feedback loops may compromise recommendation quality and homogenize user behavior.
We propose the Causal Adjustment for Feedback Loops (CAFL), an algorithm that provably breaks feedback loops using causal inference.
We show that CAFL improves recommendation quality when compared to prior correction methods.
arXiv Detail & Related papers (2022-07-04T17:58:39Z)
- Experiments on Generalizability of User-Oriented Fairness in Recommender Systems [2.0932879442844476]
A fairness-aware recommender system aims to treat different user groups similarly.
We propose a user-centered fairness re-ranking framework applied on top of a base ranking model.
We evaluate the final recommendations provided by the re-ranking framework from both user- (e.g., NDCG) and item-side (e.g., novelty, item-fairness) metrics.
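As a reference for the user-side metric named above, here is a standard NDCG computation. This is a generic sketch of the metric itself, not the paper's evaluation code, and the relevance values in the example are hypothetical.

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: relevance discounted by log2 of rank.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances, k=None):
    """NDCG@k: DCG of the produced ranking divided by the DCG of the
    ideal (relevance-sorted) ranking of the same items."""
    k = k or len(ranked_relevances)
    ideal = sorted(ranked_relevances, reverse=True)
    denom = dcg(ideal[:k])
    return dcg(ranked_relevances[:k]) / denom if denom > 0 else 0.0

# A re-ranked list that swaps the two least relevant items
# scores slightly below the ideal ordering [3, 2, 1, 0].
score = ndcg([3, 2, 0, 1])
```

A fairness re-ranker of this kind trades a small drop in NDCG against gains in item-side metrics such as novelty or item-fairness.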
arXiv Detail & Related papers (2022-05-17T12:36:30Z)
- Towards Personalized Fairness based on Causal Notion [18.5897206797918]
We introduce a framework for achieving counterfactually fair recommendations through adversary learning.
Our method can generate fairer recommendations for users while maintaining desirable recommendation performance.
arXiv Detail & Related papers (2021-05-20T15:24:34Z)
- DeepFair: Deep Learning for Improving Fairness in Recommender Systems [63.732639864601914]
The lack of bias management in Recommender Systems leads to minority groups receiving unfair recommendations.
We propose a Deep Learning based Collaborative Filtering algorithm that provides recommendations with an optimum balance between fairness and accuracy without knowing demographic information about the users.
arXiv Detail & Related papers (2020-06-09T13:39:38Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
- Survey for Trust-aware Recommender Systems: A Deep Learning Perspective [48.2733163413522]
Building trustworthy recommender systems has become critical.
This survey provides a systematic summary of three categories of trust-aware recommender systems.
arXiv Detail & Related papers (2020-04-08T02:11:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.