The Unfairness of Active Users and Popularity Bias in Point-of-Interest
Recommendation
- URL: http://arxiv.org/abs/2202.13307v1
- Date: Sun, 27 Feb 2022 08:02:19 GMT
- Title: The Unfairness of Active Users and Popularity Bias in Point-of-Interest
Recommendation
- Authors: Hossein A. Rahmani, Yashar Deldjoo, Ali Tourani, Mohammadmehdi
Naghiaei
- Abstract summary: This paper studies the interplay between (i) the unfairness of active users, (ii) the unfairness of popular items, and (iii) the accuracy of recommendation as three angles of our study triangle.
For item fairness, we divide items into short-head, mid-tail, and long-tail groups and study the exposure of these item groups in users' top-k recommendation lists.
Our study shows that most recommendation models cannot satisfy both consumer and producer fairness, indicating a trade-off between these variables possibly due to natural biases in data.
- Score: 4.578469978594752
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Point-of-Interest (POI) recommender systems provide personalized
recommendations to users and help businesses attract potential customers.
Despite their success, recent studies suggest that highly data-driven
recommendations could be impacted by data biases, resulting in unfair outcomes
for different stakeholders, mainly consumers (users) and providers (items).
Most existing fairness-related research in recommender systems treats user
fairness and item fairness individually, disregarding that recommender systems
operate in a two-sided marketplace. This paper studies the interplay between (i) the
unfairness of active users, (ii) the unfairness of popular items, and (iii) the
accuracy (personalization) of recommendation as three angles of our study
triangle. We group users into advantaged and disadvantaged levels to measure
user fairness based on their activity level. For item fairness, we divide items
into short-head, mid-tail, and long-tail groups and study the exposure of these
item groups in users' top-k recommendation lists. Experimental
validation of eight different recommendation models commonly used for POI
recommendation (e.g., contextual, CF) on two publicly available POI
recommendation datasets, Gowalla and Yelp, indicates that most well-performing
models suffer seriously from the unfairness of popularity bias (provider
unfairness). Furthermore, our study shows that most recommendation models
cannot satisfy both consumer and producer fairness, indicating a trade-off
between these variables, possibly due to natural biases in the data. We choose
POI recommendation as our test scenario; however, the insights should readily
extend to other domains.
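The abstract's item-side analysis — ranking items by popularity, splitting them into short-head, mid-tail, and long-tail groups, and measuring each group's exposure in users' top-k lists — can be sketched as follows. The function names and the 20%/60%/20% split are illustrative assumptions, not the paper's exact thresholds:

```python
from collections import Counter

def item_groups(interactions, short_frac=0.2, mid_frac=0.6):
    """Partition items into short-head / mid-tail / long-tail by popularity.

    `interactions` is a list of (user, item) pairs. The 20%/60%/20% split is
    an illustrative choice, not necessarily the thresholds used in the paper.
    """
    counts = Counter(item for _, item in interactions)
    ranked = [item for item, _ in counts.most_common()]  # most popular first
    n = len(ranked)
    n_short = max(1, round(n * short_frac))
    n_mid = round(n * mid_frac)
    short = set(ranked[:n_short])
    mid = set(ranked[n_short:n_short + n_mid])
    long_tail = set(ranked[n_short + n_mid:])
    return short, mid, long_tail

def group_exposure(top_k_lists, group):
    """Fraction of all recommended slots occupied by items from `group`.

    `top_k_lists` maps each user to that user's top-k recommendation list.
    """
    total = sum(len(recs) for recs in top_k_lists.values())
    hits = sum(1 for recs in top_k_lists.values()
               for item in recs if item in group)
    return hits / total if total else 0.0
```

If short-head items (a small fraction of the catalog) absorb most of the exposure mass across users' lists, the model exhibits the popularity bias (provider unfairness) the paper reports; the consumer-side analysis is analogous, with users split by activity level instead of items by popularity.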
Related papers
- Improving Recommendation Fairness via Data Augmentation [66.4071365614835]
Collaborative-filtering-based recommendation learns users' preferences from all users' historical behavior data and is widely used to facilitate decision making.
A recommender system is considered unfair when it does not perform equally well for different user groups according to users' sensitive attributes.
In this paper, we study how to improve recommendation fairness from the data augmentation perspective.
arXiv Detail & Related papers (2023-02-13T13:11:46Z) - Experiments on Generalizability of User-Oriented Fairness in Recommender
Systems [2.0932879442844476]
A fairness-aware recommender system aims to treat different user groups similarly.
We propose a user-centered fairness re-ranking framework applied on top of a base ranking model.
We evaluate the final recommendations provided by the re-ranking framework from both user- (e.g., NDCG) and item-side (e.g., novelty, item-fairness) metrics.
arXiv Detail & Related papers (2022-05-17T12:36:30Z) - Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
arXiv Detail & Related papers (2022-04-29T19:13:23Z) - Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR)
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We theoretically prove that this offsets the influence of user/item propensity on learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z) - A Graph-based Approach for Mitigating Multi-sided Exposure Bias in
Recommender Systems [7.3129791870997085]
We introduce FairMatch, a graph-based algorithm that improves exposure fairness for items and suppliers.
A comprehensive set of experiments on two datasets and comparisons with state-of-the-art baselines show that FairMatch, while significantly improving exposure fairness and aggregate diversity, maintains an acceptable level of recommendation relevance.
arXiv Detail & Related papers (2021-07-07T18:01:26Z) - PURS: Personalized Unexpected Recommender System for Improving User
Satisfaction [76.98616102965023]
We describe a novel Personalized Unexpected Recommender System (PURS) model that incorporates unexpectedness into the recommendation process.
Extensive offline experiments on three real-world datasets illustrate that the proposed PURS model significantly outperforms the state-of-the-art baseline approaches.
arXiv Detail & Related papers (2021-06-05T01:33:21Z) - Towards Personalized Fairness based on Causal Notion [18.5897206797918]
We introduce a framework for achieving counterfactually fair recommendations through adversary learning.
Our method can generate fairer recommendations for users with a desirable recommendation performance.
arXiv Detail & Related papers (2021-05-20T15:24:34Z) - User-oriented Fairness in Recommendation [21.651482297198687]
We address the unfairness problem in recommender systems from the user perspective.
We group users into advantaged and disadvantaged groups according to their level of activity.
Our approach can not only improve group fairness of users in recommender systems, but also achieve better overall recommendation performance.
arXiv Detail & Related papers (2021-04-21T17:50:31Z) - DeepFair: Deep Learning for Improving Fairness in Recommender Systems [63.732639864601914]
The lack of bias management in Recommender Systems leads to minority groups receiving unfair recommendations.
We propose a Deep Learning based Collaborative Filtering algorithm that provides recommendations with an optimum balance between fairness and accuracy without knowing demographic information about the users.
arXiv Detail & Related papers (2020-06-09T13:39:38Z) - Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z) - FairRec: Two-Sided Fairness for Personalized Recommendations in
Two-Sided Platforms [36.35034531426411]
We investigate the problem of fair recommendation in the context of two-sided online platforms.
Our approach involves a novel mapping of the fair recommendation problem to a constrained version of the problem of fairly allocating indivisible goods.
Our proposed FairRec algorithm guarantees at least Maximin Share (MMS) of exposure for most of the producers and Envy-Free up to One item (EF1) fairness for every customer.
arXiv Detail & Related papers (2020-02-25T09:43:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.