Towards Personalized Fairness based on Causal Notion
- URL: http://arxiv.org/abs/2105.09829v1
- Date: Thu, 20 May 2021 15:24:34 GMT
- Title: Towards Personalized Fairness based on Causal Notion
- Authors: Yunqi Li, Hanxiong Chen, Shuyuan Xu, Yingqiang Ge, Yongfeng Zhang
- Abstract summary: We introduce a framework for achieving counterfactually fair recommendations through adversarial learning.
Our method generates fairer recommendations for users while maintaining desirable recommendation performance.
- Score: 18.5897206797918
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender systems have an increasingly critical impact on individuals and society, since a growing
number of users rely on them for information seeking and decision making. It is therefore crucial to address
potential unfairness problems in recommendations. Just as users have personalized preferences for items, users'
demands for fairness are also personalized in many scenarios, so it is important to provide personalized fair
recommendations that satisfy each user's fairness demands. Moreover, previous work on fair recommendation has
mainly focused on association-based fairness; advancing from associative to causal fairness notions allows
fairness in recommender systems to be assessed more properly. Based on these considerations, this paper focuses
on achieving personalized counterfactual fairness for users in recommender systems. To this end, we introduce a
framework for achieving counterfactually fair recommendations through adversarial learning, which generates
feature-independent user embeddings for recommendation. The framework allows recommender systems to achieve
personalized fairness for users while also covering non-personalized situations. Experiments on two real-world
datasets with shallow and deep recommendation algorithms show that our method generates fairer recommendations
while maintaining desirable recommendation performance.
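The abstract describes removing sensitive-feature information from user embeddings via adversarial learning. Below is a minimal sketch (not the authors' released code) of how such an adversarial filter could be trained in PyTorch; the module and parameter names (`Filter`, `Discriminator`, `lambda_adv`) and the dot-product recommender are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of adversarial learning for feature-independent user embeddings.
# Assumptions (not from the paper): a Filter network maps raw user embeddings to
# filtered embeddings, a Discriminator tries to recover a discrete sensitive
# attribute from them, and recommendation uses a simple dot-product score.
import torch
import torch.nn as nn

class Filter(nn.Module):
    """Maps a raw user embedding to a filtered, attribute-independent embedding."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, u):
        return self.net(u)

class Discriminator(nn.Module):
    """Tries to predict the sensitive attribute from the filtered embedding."""
    def __init__(self, dim, n_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, n_classes))

    def forward(self, u):
        return self.net(u)

def training_step(user_emb, item_emb, labels, sensitive, filt, disc,
                  opt_filter, opt_disc, lambda_adv=0.5):
    """One alternating min-max step.

    user_emb, item_emb: float tensors [batch, dim]
    labels: float tensor [batch] of 0/1 interaction labels
    sensitive: long tensor [batch] of sensitive-attribute class indices
    """
    bce = nn.BCEWithLogitsLoss()
    ce = nn.CrossEntropyLoss()

    # 1) Update the discriminator on detached filtered embeddings.
    filtered = filt(user_emb).detach()
    disc_loss = ce(disc(filtered), sensitive)
    opt_disc.zero_grad(); disc_loss.backward(); opt_disc.step()

    # 2) Update the filter: support recommendation, defeat the discriminator.
    filtered = filt(user_emb)
    scores = (filtered * item_emb).sum(dim=-1)   # dot-product recommender score
    rec_loss = bce(scores, labels)
    adv_loss = ce(disc(filtered), sensitive)
    total = rec_loss - lambda_adv * adv_loss     # maximize the adversary's loss
    opt_filter.zero_grad(); total.backward(); opt_filter.step()
    return rec_loss.item(), disc_loss.item()
```

In this sketch the discriminator learns to recover the chosen sensitive attribute from the filtered embedding, while the filter is penalized whenever the discriminator succeeds, pushing the filtered embedding towards independence from that attribute; which attributes are filtered could be selected per user to reflect personalized fairness demands.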
Related papers
- A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z)
- Improving Recommendation Fairness via Data Augmentation [66.4071365614835]
Collaborative filtering based recommendation learns users' preferences from all users' historical behavior data and is widely used to facilitate decision making.
A recommender system is considered unfair when it does not perform equally well for different user groups according to users' sensitive attributes.
In this paper, we study how to improve recommendation fairness from the data augmentation perspective.
arXiv Detail & Related papers (2023-02-13T13:11:46Z)
- Experiments on Generalizability of User-Oriented Fairness in Recommender Systems [2.0932879442844476]
A fairness-aware recommender system aims to treat different user groups similarly.
We propose a user-centered fairness re-ranking framework applied on top of a base ranking model.
We evaluate the final recommendations provided by the re-ranking framework with both user-side (e.g., NDCG) and item-side (e.g., novelty, item fairness) metrics.
arXiv Detail & Related papers (2022-05-17T12:36:30Z)
- A Graph-based Approach for Mitigating Multi-sided Exposure Bias in Recommender Systems [7.3129791870997085]
We introduce FairMatch, a graph-based algorithm that improves exposure fairness for items and suppliers.
A comprehensive set of experiments on two datasets and comparisons with state-of-the-art baselines show that FairMatch, while significantly improving exposure fairness and aggregate diversity, maintains an acceptable level of recommendation relevance.
arXiv Detail & Related papers (2021-07-07T18:01:26Z)
- Balancing Accuracy and Fairness for Interactive Recommendation with Reinforcement Learning [68.25805655688876]
Fairness in recommendation has attracted increasing attention due to bias and discrimination possibly caused by traditional recommenders.
We propose a reinforcement learning based framework, FairRec, to dynamically maintain a long-term balance between accuracy and fairness in interactive recommender systems (IRS).
Extensive experiments validate that FairRec can improve fairness, while preserving good recommendation quality.
arXiv Detail & Related papers (2021-06-25T02:02:51Z)
- PURS: Personalized Unexpected Recommender System for Improving User Satisfaction [76.98616102965023]
We describe a novel Personalized Unexpected Recommender System (PURS) model that incorporates unexpectedness into the recommendation process.
Extensive offline experiments on three real-world datasets illustrate that the proposed PURS model significantly outperforms the state-of-the-art baseline approaches.
arXiv Detail & Related papers (2021-06-05T01:33:21Z)
- Fairness and Transparency in Recommendation: The Users' Perspective [14.830700792215849]
We discuss user perspectives of fairness-aware recommender systems.
We propose three features that could improve user understanding of and trust in fairness-aware recommender systems.
arXiv Detail & Related papers (2021-03-16T00:42:09Z)
- DeepFair: Deep Learning for Improving Fairness in Recommender Systems [63.732639864601914]
The lack of bias management in Recommender Systems leads to minority groups receiving unfair recommendations.
We propose a Deep Learning based Collaborative Filtering algorithm that provides recommendations with an optimum balance between fairness and accuracy without knowing demographic information about the users.
arXiv Detail & Related papers (2020-06-09T13:39:38Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
- Exploring User Opinions of Fairness in Recommender Systems [13.749884072907163]
We ask users what their ideas of fair treatment in recommendation might be.
We analyze what might cause discrepancies or changes in users' opinions towards fairness.
arXiv Detail & Related papers (2020-03-13T19:44:26Z)