Intersectional Two-sided Fairness in Recommendation
- URL: http://arxiv.org/abs/2402.02816v2
- Date: Thu, 15 Feb 2024 09:19:41 GMT
- Title: Intersectional Two-sided Fairness in Recommendation
- Authors: Yifan Wang, Peijie Sun, Weizhi Ma, Min Zhang, Yuan Zhang, Peng Jiang,
Shaoping Ma
- Abstract summary: We propose a novel approach called Intersectional Two-sided Fairness Recommendation (ITFR).
Our method utilizes a sharpness-aware loss to perceive disadvantaged groups, and then uses collaborative loss balance to develop consistent distinguishing abilities for different intersectional groups.
Our proposed approach effectively alleviates the intersectional two-sided unfairness and consistently outperforms previous state-of-the-art methods.
- Score: 41.96733939002468
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness of recommender systems (RS) has attracted increasing attention
recently. Based on the involved stakeholders, the fairness of RS can be divided
into user fairness, item fairness, and two-sided fairness, which considers both
user and item fairness simultaneously. However, we argue that intersectional
two-sided unfairness may still exist even when an RS is two-sided fair; this
phenomenon, which we observe and demonstrate through empirical studies on
real-world data in this paper, has not been well studied previously. To
mitigate this problem,
we propose a novel approach called Intersectional Two-sided Fairness
Recommendation (ITFR). Our method utilizes a sharpness-aware loss to perceive
disadvantaged groups, and then uses collaborative loss balance to develop
consistent distinguishing abilities for different intersectional groups.
Additionally, predicted score normalization is leveraged to align positive
predicted scores to fairly treat positives in different intersectional groups.
Extensive experiments and analyses on three public datasets show that our
proposed approach effectively alleviates the intersectional two-sided
unfairness and consistently outperforms previous state-of-the-art methods.
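For intuition only, below is a minimal Python sketch of two of the ideas named in the abstract: balancing per-group losses across intersectional (user group, item group) pairs and normalizing positive predicted scores within each group. The weighting rule, the standardization, and all names are illustrative assumptions, not the authors' implementation; the sharpness-aware component is omitted.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): intersectional groups
# are (user group, item group) pairs, here indexed 0..3 for a 2x2 setting.
rng = np.random.default_rng(0)
group_losses = rng.uniform(0.2, 1.0, size=4)  # per-group ranking loss

# Assumed stand-in for collaborative loss balance: up-weight groups whose
# loss is currently higher, so disadvantaged groups drive more of the update.
weights = group_losses / group_losses.sum()
balanced_loss = float(weights @ group_losses)

# Assumed stand-in for predicted score normalization: standardize positive
# scores within each group so positives from different groups are ranked on
# a comparable scale.
pos_scores = [rng.normal(loc=mu, scale=0.5, size=100) for mu in (0.5, 1.0, 1.5, 2.0)]
pos_scores = [(s - s.mean()) / (s.std() + 1e-8) for s in pos_scores]

print(f"balanced loss: {balanced_loss:.3f}")
print("post-normalization group means:",
      [round(float(s.mean()), 3) for s in pos_scores])
```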
Related papers
- FairDgcl: Fairness-aware Recommendation with Dynamic Graph Contrastive Learning [48.38344934125999]
We study how to implement high-quality data augmentation to improve recommendation fairness.
Specifically, we propose FairDgcl, a dynamic graph adversarial contrastive learning framework.
We show that FairDgcl can simultaneously generate enhanced representations that possess both fairness and accuracy.
arXiv Detail & Related papers (2024-10-23T04:43:03Z)
- Finite-Sample and Distribution-Free Fair Classification: Optimal Trade-off Between Excess Risk and Fairness, and the Cost of Group-Blindness [14.421493372559762]
We quantify the impact of enforcing algorithmic fairness and group-blindness in binary classification under group fairness constraints.
We propose a unified framework for fair classification that provides distribution-free and finite-sample fairness guarantees with controlled excess risk.
arXiv Detail & Related papers (2024-10-21T20:04:17Z)
- Fair Reciprocal Recommendation in Matching Markets [0.8287206589886881]
We investigate reciprocal recommendation in two-sided matching markets, where agents are divided into two sides.
In our model, a match is considered successful only when both individuals express interest in each other.
We introduce its fairness criterion, envy-freeness, from the perspective of fair division theory.
arXiv Detail & Related papers (2024-09-01T13:33:41Z)
- An IPW-based Unbiased Ranking Metric in Two-sided Markets [3.845857580909374]
This paper addresses the complex interaction of biases between users in two-sided markets.
We propose a new estimator, named two-sided IPW, to address the position biases in two-sided markets.
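As a rough illustration of the estimator family this entry points to, here is a hedged sketch in which each observed interaction is reweighted by the inverse of a user-side and an item-side exposure propensity; the product form and every name are assumptions, not the paper's exact two-sided IPW estimator.

```python
import numpy as np

def two_sided_ipw_metric(clicks, user_propensity, item_propensity):
    """Assumed two-sided IPW correction: reweight each observed click by the
    inverse of both exposure propensities (the product form is an assumption)."""
    weights = 1.0 / (user_propensity * item_propensity)
    return float(np.sum(clicks * weights) / np.sum(weights))

# Toy usage: interactions shown lower on either side are observed less often.
clicks = np.array([1, 0, 1, 0])
user_prop = np.array([0.9, 0.7, 0.4, 0.2])  # P(user examines the slot)
item_prop = np.array([0.8, 0.6, 0.5, 0.3])  # P(item side examines back)
print(f"IPW-corrected rate: {two_sided_ipw_metric(clicks, user_prop, item_prop):.3f}")
```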
arXiv Detail & Related papers (2023-07-14T01:44:03Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Increasing Fairness via Combination with Learning Guarantees [8.314000998551865]
We propose a fairness quality measure named discriminative risk to reflect both individual and group fairness aspects.
We also propose first- and second-order oracle bounds to show that fairness can be boosted via ensemble combination with theoretical learning guarantees.
arXiv Detail & Related papers (2023-01-25T20:31:06Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Towards Equal Opportunity Fairness through Adversarial Learning [64.45845091719002]
Adversarial training is a common approach for bias mitigation in natural language processing.
We propose an augmented discriminator for adversarial training, which takes the target class as input to create richer features.
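A minimal sketch of what such a class-conditioned ("augmented") discriminator can look like, assuming the target class is one-hot encoded and concatenated to the learned representation before predicting the protected attribute; the architecture and all names are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AugmentedDiscriminator(nn.Module):
    """Hypothetical class-conditioned adversary: it sees both the learned
    representation and the (one-hot) target class, so it can pick up
    class-specific cues when predicting the protected attribute."""

    def __init__(self, hidden_dim: int, n_classes: int, n_protected: int):
        super().__init__()
        self.n_classes = n_classes
        self.net = nn.Sequential(
            nn.Linear(hidden_dim + n_classes, 64),
            nn.ReLU(),
            nn.Linear(64, n_protected),  # logits over protected-attribute values
        )

    def forward(self, hidden: torch.Tensor, target_class: torch.Tensor) -> torch.Tensor:
        onehot = F.one_hot(target_class, self.n_classes).float()
        return self.net(torch.cat([hidden, onehot], dim=-1))

# Toy usage: 8 examples, 32-dim representations, 3 classes, 2 protected groups.
disc = AugmentedDiscriminator(hidden_dim=32, n_classes=3, n_protected=2)
logits = disc(torch.randn(8, 32), torch.randint(0, 3, (8,)))
print(logits.shape)  # torch.Size([8, 2])
```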
arXiv Detail & Related papers (2022-03-12T02:22:58Z)
- Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the one-class nature of the problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z)
- "And the Winner Is...": Dynamic Lotteries for Multi-group Fairness-Aware Recommendation [37.35485045640196]
We argue that the previous literature has been based on simple, uniform, and often uni-dimensional notions of fairness.
We explicitly represent the design decisions that enter into the trade-off between accuracy and fairness across multiply-defined and intersecting protected groups.
We formulate lottery-based mechanisms for choosing between fairness concerns, and demonstrate their performance in two recommendation domains.
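As a toy illustration of a lottery between fairness concerns, the sketch below allocates each recommendation slot by drawing a concern with probability proportional to how far it lags an assumed target share; the allocation rule and all names are assumptions, not the paper's mechanism.

```python
import random

random.seed(0)
served = {"user_group_A": 0, "user_group_B": 0, "provider_X": 0}
target_share = {"user_group_A": 0.5, "user_group_B": 0.3, "provider_X": 0.2}

for slot in range(1, 21):
    # Deficit = how far each concern lags its target share so far (assumed rule).
    deficits = {c: max(target_share[c] * slot - served[c], 1e-9) for c in served}
    winner = random.choices(list(deficits), weights=list(deficits.values()))[0]
    served[winner] += 1  # this slot's recommendation is chosen to serve `winner`

print(served)  # long-run counts drift toward the target shares
```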
arXiv Detail & Related papers (2020-09-05T20:15:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.