TFROM: A Two-sided Fairness-Aware Recommendation Model for Both
Customers and Providers
- URL: http://arxiv.org/abs/2104.09024v1
- Date: Mon, 19 Apr 2021 02:46:54 GMT
- Title: TFROM: A Two-sided Fairness-Aware Recommendation Model for Both
Customers and Providers
- Authors: Yao Wu and Jian Cao and Guandong Xu and Yudong Tan
- Abstract summary: We design a two-sided fairness-aware recommendation model (TFROM) for both customers and providers.
Experiments show that TFROM provides better two-sided fairness while still maintaining a higher level of personalization than the baseline algorithms.
- Score: 10.112208859874618
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: At present, most research on the fairness of recommender systems is conducted
either from the perspective of customers or from the perspective of product (or
service) providers. However, such a practice ignores the fact that when
fairness is guaranteed to one side, the fairness and rights of the other side
are likely to be reduced. In this paper, we consider recommendation scenarios
from the perspective of both sides (customers and providers). On the provider
side, we consider the fairness of providers' exposure in the recommender
system. On the customer side, we consider how fairly the loss in recommendation
quality caused by introducing fairness measures is distributed across
customers. We theoretically analyze the relationship between recommendation
quality, customer fairness, and provider fairness, and design a two-sided
fairness-aware recommendation model (TFROM) for both customers and providers.
Specifically, we design two versions of TFROM for offline and online
recommendation. The effectiveness of the model is verified on three real-world
data sets. The experimental results show that TFROM provides better two-sided
fairness while still maintaining a higher level of personalization than the
baseline algorithms.
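As a rough illustration of the offline setting only, the sketch below greedily fills each customer's list while discounting items from already well-exposed providers. It is not the authors' algorithm; the function name and the trade-off weight `lam` are assumptions made for the example.

```python
import numpy as np

def two_sided_rerank(scores, item_provider, n_providers, k, lam=0.1):
    """Greedy sketch: build each customer's top-k list while penalizing
    items whose provider has already accumulated a lot of exposure.

    scores        : (n_customers, n_items) relevance matrix
    item_provider : (n_items,) int array mapping item -> provider id
    lam           : hypothetical trade-off weight (not from the paper)
    """
    n_customers, n_items = scores.shape
    exposure = np.zeros(n_providers)                 # provider exposure so far
    pos_weight = 1.0 / np.log2(np.arange(2, k + 2))  # top slots count most
    rec_lists = np.empty((n_customers, k), dtype=int)

    for u in range(n_customers):
        chosen = np.zeros(n_items, dtype=bool)
        for slot in range(k):
            adjusted = scores[u] - lam * exposure[item_provider]
            adjusted[chosen] = -np.inf               # never pick an item twice
            i = int(np.argmax(adjusted))
            rec_lists[u, slot] = i
            chosen[i] = True
            exposure[item_provider[i]] += pos_weight[slot]
    return rec_lists
```

The customer-side question the abstract raises is then how evenly the resulting quality loss (the drop in summed relevance relative to the unconstrained top-k) is spread across customers.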
Related papers
- FairDgcl: Fairness-aware Recommendation with Dynamic Graph Contrastive Learning [48.38344934125999]
We study how to implement high-quality data augmentation to improve recommendation fairness.
Specifically, we propose FairDgcl, a dynamic graph adversarial contrastive learning framework.
We show that FairDgcl can simultaneously generate enhanced representations that possess both fairness and accuracy.
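FairDgcl's learned augmentations and architecture are more involved; the following is only a generic sketch of the adversarial-contrastive ingredient, where all names and the `alpha` weight are assumptions:

```python
import torch
import torch.nn.functional as F

def adversarial_contrastive_loss(z1, z2, sens_logits, sens_labels,
                                 tau=0.2, alpha=1.0):
    """z1, z2      : (n, d) user embeddings under two augmented graph views
    sens_logits : adversary's predictions of the sensitive attribute
    The encoder minimizes this loss; the adversary is trained separately
    to minimize its own cross-entropy."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                 # pairwise view similarity
    targets = torch.arange(z1.size(0), device=z1.device)
    nce = F.cross_entropy(logits, targets)     # pull matching views together
    adv = F.cross_entropy(sens_logits, sens_labels)
    return nce - alpha * adv                   # reward fooling the adversary
```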
arXiv Detail & Related papers (2024-10-23T04:43:03Z)
- A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
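A minimal sketch of the two criteria as penalty terms (not DualFair's actual contrastive objective; names are assumptions):

```python
import torch
import torch.nn.functional as F

def dual_fairness_penalty(emb, emb_cf, group):
    """emb    : (n, d) learned representations
    emb_cf : representations of the same inputs with the sensitive
             attribute flipped (counterfactual twins)
    group  : (n,) binary sensitive attribute
    """
    # group fairness: align the mean representation of the two groups
    gap = emb[group == 0].mean(dim=0) - emb[group == 1].mean(dim=0)
    group_term = gap.pow(2).sum()
    # counterfactual fairness: an input and its twin should map to
    # (nearly) the same point
    cf_term = F.mse_loss(emb, emb_cf)
    return group_term + cf_term
```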
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Improving Recommendation Fairness via Data Augmentation [66.4071365614835]
Collaborative filtering based recommendation learns users' preferences from all users' historical behavior data and is widely used to facilitate decision making.
A recommender system is considered unfair when it does not perform equally well for different user groups according to users' sensitive attributes.
In this paper, we study how to improve recommendation fairness from the data augmentation perspective.
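The paper's augmentation is learned; as a naive stand-in only, the sketch below copies pseudo-positives from nearest neighbours to users of the group with less training signal (all names are assumptions):

```python
import numpy as np

def augment_for_fairness(inter, user_group, seed=0):
    """inter      : (n_users, n_items) binary implicit-feedback matrix
    user_group : (n_users,) binary sensitive attribute
    """
    rng = np.random.default_rng(seed)
    counts = np.array([inter[user_group == g].sum() for g in (0, 1)])
    sparse_users = np.flatnonzero(user_group == int(np.argmin(counts)))
    aug = inter.copy()
    for _ in range(int(abs(counts[0] - counts[1]))):
        u = rng.choice(sparse_users)
        sims = inter @ inter[u]           # co-interaction counts
        sims[u] = -1                      # exclude the user themself
        v = int(np.argmax(sims))
        new_items = np.flatnonzero((inter[v] == 1) & (aug[u] == 0))
        if new_items.size:
            aug[u, rng.choice(new_items)] = 1
    return aug
```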
arXiv Detail & Related papers (2023-02-13T13:11:46Z)
- CPFair: Personalized Consumer and Producer Fairness Re-ranking for Recommender Systems [5.145741425164946]
We present an optimization-based re-ranking approach that seamlessly integrates fairness constraints from both the consumer and producer sides.
We demonstrate through large-scale experiments on 8 datasets that our proposed method is capable of improving both consumer and producer fairness without reducing overall recommendation quality.
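CPFair formulates this as a constrained optimization; the sketch below only illustrates the general shape of such an objective (group definitions, names, and weights are assumptions, not the paper's formulation):

```python
import numpy as np

def two_sided_objective(rel, lists, user_group, item_group,
                        lam_c=0.5, lam_p=0.5):
    """rel        : (n_users, n_items) relevance matrix
    lists      : (n_users, k) candidate top-k item ids
    user_group : (n_users,) binary consumer groups
    item_group : (n_items,) binary producer groups (e.g. head vs tail)
    """
    n_users = rel.shape[0]
    per_user = rel[np.arange(n_users)[:, None], lists].sum(axis=1)
    # consumer unfairness: quality gap between the two user groups
    cf = abs(per_user[user_group == 0].mean() - per_user[user_group == 1].mean())
    # producer unfairness: exposure gap between the two item groups
    slots = item_group[lists]
    pf = abs((slots == 0).mean() - (slots == 1).mean())
    return per_user.sum() - lam_c * cf - lam_p * pf
```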
arXiv Detail & Related papers (2022-04-17T20:38:02Z)
- The Unfairness of Active Users and Popularity Bias in Point-of-Interest Recommendation [4.578469978594752]
This paper studies the interplay between (i) the unfairness of active users, (ii) the unfairness of popular items, and (iii) the accuracy of recommendation as three angles of our study triangle.
For item fairness, we divide items into short-head, mid-tail, and long-tail groups and study the exposure of these item groups in users' top-k recommendation lists.
Our study shows that most recommendation models cannot satisfy both consumer and producer fairness, indicating a trade-off between these variables possibly due to natural biases in data.
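Measuring this kind of exposure is straightforward; a minimal sketch follows, where the cut-off fractions are assumptions rather than the paper's exact splits:

```python
import numpy as np

def exposure_by_popularity_group(top_k, item_pop, head_frac=0.2, mid_frac=0.6):
    """Share of recommendation exposure received by short-head,
    mid-tail, and long-tail items.

    top_k    : (n_users, k) recommended item ids
    item_pop : (n_items,) interaction count per item
    """
    n = item_pop.size
    order = np.argsort(-item_pop)                   # most popular first
    group = np.empty(n, dtype=int)
    h, m = int(head_frac * n), int((head_frac + mid_frac) * n)
    group[order[:h]] = 0                            # short-head
    group[order[h:m]] = 1                           # mid-tail
    group[order[m:]] = 2                            # long-tail
    slots = group[top_k].ravel()
    names = ["short-head", "mid-tail", "long-tail"]
    return {names[g]: float((slots == g).mean()) for g in range(3)}
```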
arXiv Detail & Related papers (2022-02-27T08:02:19Z)
- Towards Fair Recommendation in Two-Sided Platforms [36.35034531426411]
We map the fair personalized recommendation problem to a constrained version of the problem of fairly allocating indivisible goods.
Our proposed FairRec algorithm guarantees Maxi-Min Share ($\alpha$-MMS) of exposure for the producers and Envy-Free up to One Item (EF1) fairness for the customers.
arXiv Detail & Related papers (2021-12-26T05:14:56Z)
- A Graph-based Approach for Mitigating Multi-sided Exposure Bias in Recommender Systems [7.3129791870997085]
We introduce FairMatch, a graph-based algorithm that improves exposure fairness for items and suppliers.
A comprehensive set of experiments on two datasets and comparison with state-of-the-art baselines show that FairMatch, while significantly improving exposure fairness and aggregate diversity, maintains an acceptable level of recommendation relevance.
arXiv Detail & Related papers (2021-07-07T18:01:26Z)
- Balancing Accuracy and Fairness for Interactive Recommendation with Reinforcement Learning [68.25805655688876]
Fairness in recommendation has attracted increasing attention due to bias and discrimination possibly caused by traditional recommenders.
We propose a reinforcement learning based framework, FairRec, to dynamically maintain a long-term balance between accuracy and fairness in interactive recommender systems (IRS).
Extensive experiments validate that FairRec can improve fairness, while preserving good recommendation quality.
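One way such a balance can enter an RL formulation is through the reward; the sketch below is an assumed design for illustration, not FairRec's actual reward:

```python
def fairness_aware_reward(click, exposure_counts, item_group, eta=0.3):
    """Per-step reward: the usual accuracy signal plus a bonus that
    grows when the recommended item's group is currently under-exposed.

    click           : 1.0 if the user engaged, else 0.0
    exposure_counts : dict group -> exposures given so far
    item_group      : group of the item just recommended
    """
    total = sum(exposure_counts.values()) or 1
    share = exposure_counts.get(item_group, 0) / total
    return click + eta * (1.0 - share)
```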
arXiv Detail & Related papers (2021-06-25T02:02:51Z)
- DeepFair: Deep Learning for Improving Fairness in Recommender Systems [63.732639864601914]
The lack of bias management in Recommender Systems leads to minority groups receiving unfair recommendations.
We propose a Deep Learning based Collaborative Filtering algorithm that provides recommendations with an optimum balance between fairness and accuracy without knowing demographic information about the users.
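A single-knob blend of accuracy and fairness in the spirit of this entry might look like the sketch below; `bias_score` is a hypothetical stand-in for whatever per-sample unfairness estimate is available without demographic data:

```python
import numpy as np

def beta_blended_loss(pred, target, bias_score, beta=0.5):
    """pred, target : (n,) predicted and true ratings
    bias_score   : (n,) signed unfairness estimate per prediction
    beta         : 0 = pure accuracy, 1 = pure fairness
    """
    acc = np.mean((pred - target) ** 2)       # accuracy term (MSE)
    fair = np.mean(np.abs(bias_score))        # 0 when predictions are unbiased
    return (1.0 - beta) * acc + beta * fair
```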
arXiv Detail & Related papers (2020-06-09T13:39:38Z)
- FairRec: Two-Sided Fairness for Personalized Recommendations in Two-Sided Platforms [36.35034531426411]
We investigate the problem of fair recommendation in the context of two-sided online platforms.
Our approach involves a novel mapping of the fair recommendation problem to a constrained version of the problem of fairly allocating indivisible goods.
Our proposed FairRec algorithm guarantees at least Maximin Share (MMS) of exposure for most of the producers and Envy-Free up to One item (EF1) fairness for every customer.
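FairRec itself is a two-phase algorithm; the sketch below shows only the first-phase round-robin idea under assumed names. The EF1 property comes from the fixed picking order, and capping copies per item is what bounds producer exposure:

```python
import numpy as np

def round_robin_allocation(scores, k, max_copies):
    """Customers take turns picking their best remaining item; an item
    leaves the pool once its exposure cap is reached (max_copies is
    roughly ceil(n_users * k / n_items))."""
    n_users, n_items = scores.shape
    remaining = np.full(n_items, max_copies)
    ranked = np.argsort(-scores, axis=1)     # each user's preference order
    ptr = np.zeros(n_users, dtype=int)       # next candidate per user
    lists = [[] for _ in range(n_users)]
    for _ in range(k):                       # one slot per round
        for u in range(n_users):             # round-robin over customers
            while ptr[u] < n_items and remaining[ranked[u, ptr[u]]] == 0:
                ptr[u] += 1                  # skip exhausted items
            if ptr[u] < n_items:
                i = int(ranked[u, ptr[u]])
                lists[u].append(i)
                remaining[i] -= 1
                ptr[u] += 1
    return lists
```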
arXiv Detail & Related papers (2020-02-25T09:43:48Z)