Improving Recommendation Fairness via Data Augmentation
- URL: http://arxiv.org/abs/2302.06333v1
- Date: Mon, 13 Feb 2023 13:11:46 GMT
- Title: Improving Recommendation Fairness via Data Augmentation
- Authors: Lei Chen, Le Wu, Kun Zhang, Richang Hong, Defu Lian, Zhiqiang Zhang,
Jun Zhou, Meng Wang
- Abstract summary: Collaborative filtering based recommendation learns users' preferences from all users' historical behavior data and has become popular for facilitating decision making.
A recommender system is considered unfair when it does not perform equally well for different user groups according to users' sensitive attributes.
In this paper, we study how to improve recommendation fairness from the data augmentation perspective.
- Score: 66.4071365614835
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Collaborative filtering based recommendation learns users' preferences from
all users' historical behavior data and has become popular for facilitating
decision making. Recently, the fairness of recommendation has become
increasingly important. A recommender system is considered unfair when it does
not perform equally well for different user groups according to users'
sensitive attributes (e.g., gender, race). Many methods have been proposed
to alleviate unfairness by optimizing a predefined fairness goal or changing
the distribution of imbalanced training data. However, they are either tied to
specific fairness optimization metrics or rely on redesigning the current
recommendation architecture. In this paper, we study how to improve
recommendation fairness from the data augmentation perspective. Because the
recommendation model amplifies the inherent unfairness of imbalanced training
data, we augment the imbalanced training data toward a balanced data
distribution to improve fairness. The proposed framework is generally
applicable to any embedding-based recommendation and does not require a
predefined fairness metric. Extensive experiments on two real-world datasets
clearly demonstrate the superiority of the proposed framework. We publish the
source code at https://github.com/newlei/FDA.
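To make the augmentation idea concrete, here is a minimal sketch in Python of oversampling interactions toward a balanced group distribution; the function name and recombination strategy are illustrative assumptions for exposition, not the authors' FDA implementation.

```python
import random
from collections import defaultdict

def augment_to_balance(interactions, user_group, seed=0):
    """Oversample the under-represented group's interactions so that every
    sensitive group contributes equally many training pairs.

    interactions: list of (user, item) pairs.
    user_group: dict mapping user -> sensitive-group label (e.g. 0/1).
    """
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for u, i in interactions:
        by_group[user_group[u]].append((u, i))

    target = max(len(pairs) for pairs in by_group.values())
    augmented = list(interactions)
    for pairs in by_group.values():
        users = [u for u, _ in pairs]
        items = [i for _, i in pairs]
        for _ in range(target - len(pairs)):
            # Synthesize a plausible interaction inside the smaller group by
            # recombining its observed users and items.
            augmented.append((rng.choice(users), rng.choice(items)))
    return augmented
```

Any embedding-based recommender can then be trained on the augmented pairs in place of the raw interactions, with no fairness metric built into the model itself.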
Related papers
- Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
Fair algorithms that generalize better while retaining strong performance are needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of distribution shifts between training and test data on model generalizability.
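As a rough illustration of adaptive reweighing (the exponential update below is an assumed stand-in, not the paper's exact scheme), per-sample weights can be raised for groups whose current error exceeds the average:

```python
import numpy as np

def update_weights(weights, group_ids, group_errors, lr=0.5):
    """Upweight samples from groups performing worse than average.

    weights: per-sample weights (np.ndarray); group_ids: per-sample group
    labels (np.ndarray); group_errors: dict mapping group -> error rate.
    """
    mean_err = np.mean(list(group_errors.values()))
    for g, err in group_errors.items():
        # Worse-than-average groups gain priority in the next round.
        weights[group_ids == g] *= np.exp(lr * (err - mean_err))
    return weights * (len(weights) / weights.sum())  # renormalize
```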
arXiv Detail & Related papers (2023-09-15T13:04:55Z)
- FairRoad: Achieving Fairness for Recommender Systems with Optimized Antidote Data [15.555228739298045]
We propose a new approach called fair recommendation with optimized antidote data (FairRoad).
The proposed antidote data generation algorithm significantly improves the fairness of recommender systems with a small amount of antidote data.
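A greedy sketch of the antidote-data idea follows; the `train_model` and `fairness_gap` hooks are hypothetical placeholders, and FairRoad's actual optimization is far more efficient than this brute-force search:

```python
def greedy_antidote(ratings, candidates, budget, train_model, fairness_gap):
    """Inject up to `budget` crafted ratings that shrink a fairness gap.

    ratings: observed (user, item, score) triples; candidates: pool of
    candidate antidote ratings; train_model, fairness_gap: assumed hooks
    that retrain the recommender and measure group disparity.
    """
    current = list(ratings)
    for _ in range(budget):
        best, best_gap = None, fairness_gap(train_model(current))
        for c in candidates:
            gap = fairness_gap(train_model(current + [c]))
            if gap < best_gap:
                best, best_gap = c, gap
        if best is None:
            break  # no remaining candidate improves fairness
        current.append(best)
    return current
```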
arXiv Detail & Related papers (2022-12-13T17:32:44Z)
- Equal Experience in Recommender Systems [21.298427869586686]
We introduce a novel fairness notion (that we call equal experience) to regulate unfairness in the presence of biased data.
We propose an optimization framework that incorporates the fairness notion as a regularization term and introduce computationally efficient algorithms that solve the optimization.
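Schematically, folding a fairness notion into training as a regularizer looks like the sketch below; the group score gap is a generic stand-in, not the paper's equal-experience measure:

```python
import numpy as np

def fair_regularized_loss(pred, target, groups, lam=1.0):
    """Utility loss plus lam times a fairness penalty.

    pred, target: predicted and observed scores (np.ndarray);
    groups: 0/1 sensitive-group labels per sample (np.ndarray).
    """
    utility = np.mean((pred - target) ** 2)   # recommendation error
    g0, g1 = pred[groups == 0], pred[groups == 1]
    penalty = (g0.mean() - g1.mean()) ** 2    # gap in mean predicted score
    return utility + lam * penalty
```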
arXiv Detail & Related papers (2022-10-12T05:53:05Z)
- Recommendation Systems with Distribution-Free Reliability Guarantees [83.80644194980042]
We show how to return a set of items rigorously guaranteed to contain mostly good items.
Our procedure endows any ranking model with rigorous finite-sample control of the false discovery rate.
We evaluate our methods on the Yahoo! Learning to Rank and MSMarco datasets.
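The flavor of such a procedure can be sketched as threshold calibration on held-out data; this plain empirical version omits the finite-sample corrections that give the paper its rigorous guarantee:

```python
import numpy as np

def calibrate_threshold(scores, is_good, alpha=0.1):
    """Pick a score cutoff whose calibration-set FDR stays below alpha.

    scores: model scores (np.ndarray); is_good: binary relevance labels
    for the same calibration items (np.ndarray).
    """
    order = np.argsort(-scores)            # scan from the highest score down
    good = is_good[order].astype(float)
    k = np.arange(1, len(good) + 1)
    fdr = np.cumsum(1.0 - good) / k        # empirical false discovery rate
    valid = np.where(fdr <= alpha)[0]
    if len(valid) == 0:
        return np.inf                      # no safe cutoff: return no items
    return scores[order][valid[-1]]        # most permissive safe cutoff
```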
arXiv Detail & Related papers (2022-07-04T17:49:25Z)
- Bias-Tolerant Fair Classification [20.973916494320246]
Label bias and selection bias are two kinds of data bias that hinder the fairness of machine-learning outcomes.
We propose a Bias-Tolerant FAir Regularized Loss (B-FARL) that tries to regain the benefits from data affected by label bias and selection bias.
B-FARL takes the biased data as input and learns a model that approximates the one that would be trained on fair but latent data, thus preventing discrimination without requiring explicit fairness constraints.
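One way to picture a bias-tolerant loss is per-group forward noise correction; the flip-rate model below is an illustrative assumption, not B-FARL's actual formulation:

```python
import numpy as np

def noise_corrected_loss(pred, observed, groups, flip_rate):
    """Cross-entropy treating observed labels as noisy fair labels.

    pred: predicted positive-class probabilities (np.ndarray); observed:
    possibly biased 0/1 labels; flip_rate: dict mapping group -> estimated
    probability that label bias flipped the fair label.
    """
    total = 0.0
    for g, rho in flip_rate.items():
        p, y = pred[groups == g], observed[groups == g]
        # Forward correction: an observed label equals the latent fair
        # label flipped with probability rho.
        q = (1 - rho) * p + rho * (1 - p)
        total += -np.mean(y * np.log(q) + (1 - y) * np.log(1 - q))
    return total / len(flip_rate)
```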
arXiv Detail & Related papers (2021-07-07T13:31:38Z)
- Balancing Accuracy and Fairness for Interactive Recommendation with Reinforcement Learning [68.25805655688876]
Fairness in recommendation has attracted increasing attention due to bias and discrimination possibly caused by traditional recommenders.
We propose a reinforcement learning based framework, FairRec, to dynamically maintain a long-term balance between accuracy and fairness in interactive recommender systems (IRS).
Extensive experiments validate that FairRec can improve fairness, while preserving good recommendation quality.
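The accuracy-fairness balance can be caricatured as reward shaping; the exposure bonus below is a hypothetical stand-in for FairRec's actual reward and state design:

```python
def shaped_reward(relevance, exposure_counts, group, lam=0.5):
    """Immediate relevance reward plus a bonus for under-served groups.

    exposure_counts: dict mapping group -> how often it has been served.
    """
    total = sum(exposure_counts.values()) or 1   # avoid division by zero
    share = exposure_counts[group] / total
    return relevance + lam * (1.0 - share)       # rare groups earn more
```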
arXiv Detail & Related papers (2021-06-25T02:02:51Z)
- DeepFair: Deep Learning for Improving Fairness in Recommender Systems [63.732639864601914]
The lack of bias management in Recommender Systems leads to minority groups receiving unfair recommendations.
We propose a Deep Learning based Collaborative Filtering algorithm that provides recommendations with an optimal balance between fairness and accuracy, without requiring demographic information about the users.
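The fairness-accuracy trade-off can be written as one blended objective; since the paper works without demographic data, the latent-cluster proxy below is purely an assumption for illustration:

```python
import numpy as np

def blended_objective(error, cluster_mean_scores, beta=0.3):
    """(1 - beta) * accuracy error + beta * unfairness proxy.

    cluster_mean_scores: mean predicted score per latent user cluster,
    standing in for unavailable demographic groups.
    """
    unfairness = np.var(np.asarray(cluster_mean_scores))
    return (1 - beta) * error + beta * unfairness
```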
arXiv Detail & Related papers (2020-06-09T13:39:38Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
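A generic sketch of fairness-constrained re-ranking reserves a quota of top-k slots for candidates satisfying a fairness predicate and fills the rest by relevance; the quota form is an assumption, as the paper defines its constraint over knowledge-graph based explanations:

```python
def rerank_with_quota(candidates, k, quota, is_fair):
    """Rebuild a top-k list so at least `quota` slots satisfy is_fair.

    candidates: (item, score) pairs sorted by score, highest first.
    """
    reserved = [c for c in candidates if is_fair(c[0])][:quota]
    chosen = {item for item, _ in reserved}
    rest = [c for c in candidates if c[0] not in chosen][: k - len(reserved)]
    return sorted(reserved + rest, key=lambda c: -c[1])
```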
arXiv Detail & Related papers (2020-06-03T05:04:38Z)