FairRec: Two-Sided Fairness for Personalized Recommendations in
Two-Sided Platforms
- URL: http://arxiv.org/abs/2002.10764v2
- Date: Tue, 23 Jun 2020 12:54:52 GMT
- Authors: Gourab K Patro, Arpita Biswas, Niloy Ganguly, Krishna P. Gummadi,
Abhijnan Chakraborty
- Abstract summary: We investigate the problem of fair recommendation in the context of two-sided online platforms.
Our approach involves a novel mapping of the fair recommendation problem to a constrained version of the problem of fairly allocating indivisible goods.
Our proposed FairRec algorithm guarantees at least Maximin Share (MMS) of exposure for most of the producers and Envy-Free up to One item (EF1) fairness for every customer.
- Score: 36.35034531426411
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate the problem of fair recommendation in the context of two-sided
online platforms, comprising customers on one side and producers on the other.
Traditionally, recommendation services in these platforms have focused on
maximizing customer satisfaction by tailoring the results according to the
personalized preferences of individual customers. However, our investigation
reveals that such customer-centric design may lead to unfair distribution of
exposure among the producers, which may adversely impact their well-being. On
the other hand, a producer-centric design might become unfair to the customers.
Thus, we consider fairness issues that span both customers and producers. Our
approach involves a novel mapping of the fair recommendation problem to a
constrained version of the problem of fairly allocating indivisible goods. Our
proposed FairRec algorithm guarantees at least Maximin Share (MMS) of exposure
for most of the producers and Envy-Free up to One item (EF1) fairness for every
customer. Extensive evaluations over multiple real-world datasets show the
effectiveness of FairRec in ensuring two-sided fairness while incurring a
marginal loss in the overall recommendation quality.
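The abstract's core idea, allocating recommendation slots like indivisible goods so that producers reach a minimum exposure while customers keep near-personalized lists, can be illustrated with a small two-phase greedy sketch. This is a hedged illustration, not the authors' implementation: the function name `fairrec_sketch`, the relevance-matrix input, and the simple `min_share` cap (a stand-in for the MMS threshold) are all assumptions for demonstration purposes.

```python
import numpy as np

def fairrec_sketch(relevance, k):
    """Two-phase round-robin sketch inspired by FairRec (illustrative only).

    relevance: (num_customers, num_producers) score matrix (hypothetical input).
    k: recommendation list size per customer.
    Phase 1 caps each producer's exposure so every producer can reach a
    minimum share of slots; phase 2 fills remaining slots by pure relevance.
    """
    m, n = relevance.shape
    total_slots = m * k
    # Minimum exposure target per producer -- a crude stand-in for MMS.
    min_share = total_slots // n
    rec = [[] for _ in range(m)]
    exposure = np.zeros(n, dtype=int)

    # Phase 1: round-robin over customers; each picks its most relevant
    # producer that has not yet reached the minimum share (EF1-style turns).
    for _ in range(k):
        for u in range(m):
            for p in np.argsort(-relevance[u]):
                if p not in rec[u] and exposure[p] < min_share:
                    rec[u].append(int(p))
                    exposure[p] += 1
                    break

    # Phase 2: top up any customer list still shorter than k by relevance.
    for u in range(m):
        for p in np.argsort(-relevance[u]):
            if len(rec[u]) == k:
                break
            if p not in rec[u]:
                rec[u].append(int(p))
                exposure[p] += 1
    return rec, exposure
```

With four customers sharing identical preferences over four producers and k = 2, the cap forces exposure to spread evenly across producers instead of concentrating on the single most-preferred one, which mirrors the exposure guarantee described in the abstract at the cost of some per-customer relevance.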
Related papers
- FairDgcl: Fairness-aware Recommendation with Dynamic Graph Contrastive Learning [48.38344934125999]
We study how to implement high-quality data augmentation to improve recommendation fairness.
Specifically, we propose FairDgcl, a dynamic graph adversarial contrastive learning framework.
We show that FairDgcl can simultaneously generate enhanced representations that possess both fairness and accuracy.
arXiv Detail & Related papers (2024-10-23T04:43:03Z)
- Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation [52.62492168507781]
We propose a novel benchmark called Fairness of Recommendation via LLM (FaiRLLM)
This benchmark comprises carefully crafted metrics and a dataset that accounts for eight sensitive attributes.
By utilizing our FaiRLLM benchmark, we conducted an evaluation of ChatGPT and discovered that it still exhibits unfairness to some sensitive attributes when generating recommendations.
arXiv Detail & Related papers (2023-05-12T16:54:36Z)
- Improving Recommendation Fairness via Data Augmentation [66.4071365614835]
Collaborative filtering based recommendation learns users' preferences from all users' historical behavior data and is widely used to facilitate decision making.
A recommender system is considered unfair when it does not perform equally well for different user groups according to users' sensitive attributes.
In this paper, we study how to improve recommendation fairness from the data augmentation perspective.
arXiv Detail & Related papers (2023-02-13T13:11:46Z)
- Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
arXiv Detail & Related papers (2022-04-29T19:13:23Z)
- CPFair: Personalized Consumer and Producer Fairness Re-ranking for Recommender Systems [5.145741425164946]
We present an optimization-based re-ranking approach that seamlessly integrates fairness constraints from both the consumer and producer sides.
We demonstrate through large-scale experiments on 8 datasets that our proposed method is capable of improving both consumer and producer fairness without reducing overall recommendation quality.
arXiv Detail & Related papers (2022-04-17T20:38:02Z)
- The Unfairness of Active Users and Popularity Bias in Point-of-Interest Recommendation [4.578469978594752]
This paper studies the interplay between (i) the unfairness of active users, (ii) the unfairness of popular items, and (iii) the accuracy of recommendation as three angles of our study triangle.
For item fairness, we divide items into short-head, mid-tail, and long-tail groups and study the exposure of these item groups in the top-k recommendation lists of users.
Our study shows that most recommendation models cannot satisfy both consumer and producer fairness, indicating a trade-off between these variables possibly due to natural biases in data.
arXiv Detail & Related papers (2022-02-27T08:02:19Z)
- Towards Fair Recommendation in Two-Sided Platforms [36.35034531426411]
We map the fair personalized recommendation problem to a constrained version of the problem of fairly allocating indivisible goods.
Our proposed FairRec algorithm guarantees Maximin Share ($\alpha$-MMS) of exposure for the producers, and Envy-Free up to One Item (EF1) fairness for the customers.
arXiv Detail & Related papers (2021-12-26T05:14:56Z)
- A Graph-based Approach for Mitigating Multi-sided Exposure Bias in Recommender Systems [7.3129791870997085]
We introduce FairMatch, a graph-based algorithm that improves exposure fairness for items and suppliers.
A comprehensive set of experiments on two datasets and comparison with state-of-the-art baselines show that FairMatch, while significantly improving exposure fairness and aggregate diversity, maintains an acceptable level of relevance in the recommendations.
arXiv Detail & Related papers (2021-07-07T18:01:26Z)
- TFROM: A Two-sided Fairness-Aware Recommendation Model for Both Customers and Providers [10.112208859874618]
We design a two-sided fairness-aware recommendation model (TFROM) for both customers and providers.
Experiments show that TFROM provides better two-sided fairness while still maintaining a higher level of personalization than the baseline algorithms.
arXiv Detail & Related papers (2021-04-19T02:46:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.