A Graph-based Approach for Mitigating Multi-sided Exposure Bias in
Recommender Systems
- URL: http://arxiv.org/abs/2107.03415v1
- Date: Wed, 7 Jul 2021 18:01:26 GMT
- Title: A Graph-based Approach for Mitigating Multi-sided Exposure Bias in
Recommender Systems
- Authors: Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad
Mobasher, Robin Burke
- Abstract summary: We introduce FairMatch, a graph-based algorithm that improves exposure fairness for items and suppliers.
A comprehensive set of experiments on two datasets and comparisons with state-of-the-art baselines show that FairMatch, while significantly improving exposure fairness and aggregate diversity, maintains an acceptable level of recommendation relevance.
- Score: 7.3129791870997085
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness is a critical system-level objective in recommender systems that has
been the subject of extensive recent research. A specific form of fairness is
supplier exposure fairness where the objective is to ensure equitable coverage
of items across all suppliers in recommendations provided to users. This is
especially important in multistakeholder recommendation scenarios where it may
be important to optimize utilities not just for the end-user, but also for
other stakeholders such as item sellers or producers who desire a fair
representation of their items. This type of supplier fairness is sometimes
accomplished by attempting to increase aggregate diversity in order to
mitigate popularity bias and improve the coverage of long-tail items in
recommendations. In this paper, we introduce FairMatch, a general graph-based
algorithm that works as a post-processing step after recommendation
generation to improve exposure fairness for items and suppliers. The
algorithm iteratively adds high-quality items that have low visibility, or
items from suppliers with low exposure, to the users' final recommendation
lists. A comprehensive set of experiments on two datasets and comparisons
with state-of-the-art baselines show that FairMatch, while significantly
improving exposure fairness and aggregate diversity, maintains an acceptable
level of recommendation relevance.
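
The abstract describes FairMatch only at a high level (a graph-based, iterative post-processing step that promotes high-quality, low-exposure items and suppliers); the paper's actual graph/maximum-flow formulation is not reproduced here. Below is a minimal greedy sketch of that idea only, under stated assumptions: a pre-computed candidate pool per user that is longer than k, a known item-to-supplier mapping, and a simple score that trades relevance against the exposure an item and its supplier have already accumulated. All names (rerank_for_exposure, boost, the sample data) are hypothetical.

```python
from collections import defaultdict

def rerank_for_exposure(candidates, supplier_of, k, boost=1.0):
    """Greedily rebuild each user's top-k from a larger candidate pool,
    rewarding items and suppliers that have received little exposure so far.

    candidates: dict user -> list of (item, relevance) pairs, longer than k.
    supplier_of: dict item -> supplier id.
    boost: weight of the exposure bonus relative to relevance (assumption).
    """
    item_exposure = defaultdict(int)      # recommendations given to each item
    supplier_exposure = defaultdict(int)  # aggregated per supplier
    final_lists = {}
    for user, pool in candidates.items():
        remaining = list(pool)
        chosen = []
        while len(chosen) < k and remaining:
            # Relevance plus a bonus that shrinks as exposure accumulates.
            def score(entry):
                item, rel = entry
                return (rel
                        + boost / (1 + item_exposure[item])
                        + boost / (1 + supplier_exposure[supplier_of[item]]))
            item, rel = max(remaining, key=score)
            remaining.remove((item, rel))
            chosen.append(item)
            item_exposure[item] += 1
            supplier_exposure[supplier_of[item]] += 1
        final_lists[user] = chosen
    return final_lists

# Tiny usage example with hypothetical data.
candidates = {
    "u1": [("i1", 0.9), ("i2", 0.8), ("i3", 0.3)],
    "u2": [("i1", 0.9), ("i3", 0.7), ("i4", 0.6)],
}
supplier_of = {"i1": "s1", "i2": "s1", "i3": "s2", "i4": "s3"}
print(rerank_for_exposure(candidates, supplier_of, k=2))
```

Because the exposure counters are shared across users, an item (or supplier) that dominates early lists sees its bonus decay, so later lists drift toward low-exposure alternatives; this mirrors the iterative promote-then-recount flavor of the described algorithm, not its exact mechanics.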
Related papers
- Bypassing the Popularity Bias: Repurposing Models for Better Long-Tail Recommendation [0.0]
We aim to achieve a more equitable distribution of exposure among publishers on an online content platform.
We propose a novel approach of repurposing existing components of an industrial recommender system to deliver valuable exposure to underrepresented publishers.
arXiv Detail & Related papers (2024-09-17T15:40:55Z) - A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z) - Managing multi-facet bias in collaborative filtering recommender systems [0.0]
Biased recommendations across groups of items can harm the interests of item providers and cause user dissatisfaction with the system.
This study aims to manage a new type of intersectional bias regarding the geographical origin and popularity of items in the output of state-of-the-art collaborative filtering recommender algorithms.
Extensive experiments on two real-world datasets of movies and books, enriched with the items' continents of production, show that the proposed algorithm strikes a reasonable balance between accuracy and both of the aforementioned bias types.
arXiv Detail & Related papers (2023-02-21T10:06:01Z) - Improving Recommendation Fairness via Data Augmentation [66.4071365614835]
Collaborative filtering based recommendation learns users' preferences from all users' historical behavior data and has become popular for facilitating decision making.
A recommender system is considered unfair when it does not perform equally well for different user groups according to users' sensitive attributes.
In this paper, we study how to improve recommendation fairness from the data augmentation perspective.
arXiv Detail & Related papers (2023-02-13T13:11:46Z) - Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
arXiv Detail & Related papers (2022-04-29T19:13:23Z) - The Unfairness of Active Users and Popularity Bias in Point-of-Interest
Recommendation [4.578469978594752]
This paper studies the interplay between (i) the unfairness of active users, (ii) the unfairness of popular items, and (iii) the accuracy of recommendation as three angles of our study triangle.
For item fairness, we divide items into short-head, mid-tail, and long-tail groups and study the exposure of these item groups in users' top-k recommendation lists (a simplified sketch of this group-exposure computation appears after this list).
Our study shows that most recommendation models cannot satisfy both consumer and producer fairness, indicating a trade-off between these variables possibly due to natural biases in data.
arXiv Detail & Related papers (2022-02-27T08:02:19Z) - Incentives for Item Duplication under Fair Ranking Policies [69.14168955766847]
We study the behaviour of different fair ranking policies in the presence of duplicates.
We find that fairness-aware ranking policies may conflict with diversity, due to their potential to incentivize duplication more than policies solely focused on relevance.
arXiv Detail & Related papers (2021-10-29T11:11:15Z) - Towards Personalized Fairness based on Causal Notion [18.5897206797918]
We introduce a framework for achieving counterfactually fair recommendations through adversary learning.
Our method can generate fairer recommendations for users with a desirable recommendation performance.
arXiv Detail & Related papers (2021-05-20T15:24:34Z) - DeepFair: Deep Learning for Improving Fairness in Recommender Systems [63.732639864601914]
The lack of bias management in Recommender Systems leads to minority groups receiving unfair recommendations.
We propose a Deep Learning based Collaborative Filtering algorithm that provides recommendations with an optimum balance between fairness and accuracy without knowing demographic information about the users.
arXiv Detail & Related papers (2020-06-09T13:39:38Z) - Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z) - Opportunistic Multi-aspect Fairness through Personalized Re-ranking [5.8562079474220665]
We present a re-ranking approach to fairness-aware recommendation that learns individual preferences across multiple fairness dimensions.
We show that our opportunistic and metric-agnostic approach achieves a better trade-off between accuracy and fairness than prior re-ranking approaches.
arXiv Detail & Related papers (2020-05-21T04:25:20Z)
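
Several of the papers above evaluate exposure at the level of item popularity groups; the point-of-interest study, for instance, splits the catalog into short-head, mid-tail, and long-tail groups and measures each group's share of top-k slots. The following is a minimal sketch of that measurement, assuming per-item popularity counts are available and using illustrative 20%/60%/20% cut-offs; the papers' actual thresholds may differ, and all names here are hypothetical.

```python
def group_exposure(rec_lists, popularity, head_frac=0.2, mid_frac=0.6):
    """Fraction of top-k recommendation slots given to each popularity group.

    rec_lists: iterable of per-user top-k item lists.
    popularity: dict item -> interaction count in the training data.
    """
    # Rank items by popularity and split into three groups.
    ranked = sorted(popularity, key=popularity.get, reverse=True)
    n_head = int(head_frac * len(ranked))
    n_mid = int((head_frac + mid_frac) * len(ranked))
    head, mid = set(ranked[:n_head]), set(ranked[n_head:n_mid])

    counts = {"short-head": 0, "mid-tail": 0, "long-tail": 0}
    total = 0
    for recs in rec_lists:
        for item in recs:
            total += 1
            if item in head:
                counts["short-head"] += 1
            elif item in mid:
                counts["mid-tail"] += 1
            else:
                counts["long-tail"] += 1
    return {g: c / total for g, c in counts.items()} if total else counts

# Hypothetical data: popularity = training interaction counts per item.
popularity = {"i1": 100, "i2": 90, "i3": 10, "i4": 5, "i5": 1}
rec_lists = [["i1", "i3"], ["i1", "i2"], ["i2", "i5"]]
print(group_exposure(rec_lists, popularity))
```

A heavily skewed output (most slots going to the short-head group) is the popularity-bias symptom that methods like FairMatch and the re-ranking approaches above aim to mitigate.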