Experiments on Generalizability of User-Oriented Fairness in Recommender
Systems
- URL: http://arxiv.org/abs/2205.08289v1
- Date: Tue, 17 May 2022 12:36:30 GMT
- Title: Experiments on Generalizability of User-Oriented Fairness in Recommender
Systems
- Authors: Hossein A. Rahmani, Mohammadmehdi Naghiaei, Mahdi Dehghan, Mohammad
Aliannejadi
- Abstract summary: A fairness-aware recommender system aims to treat different user groups similarly.
We propose a user-centered fairness re-ranking framework applied on top of a base ranking model.
We evaluate the final recommendations provided by the re-ranking framework from both user- (e.g., NDCG) and item-side (e.g., novelty, item-fairness) metrics.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work in recommender systems mainly focuses on fairness in
recommendations as an important aspect of measuring recommendation quality. A
fairness-aware recommender system aims to treat different user groups
similarly. Relevant work on user-oriented fairness highlights the
discriminative behavior of fairness-unaware recommendation algorithms towards a
certain user group, defined based on users' activity level. Typical solutions
include proposing a user-centered fairness re-ranking framework applied on top
of a base ranking model to mitigate its unfair behavior towards a certain user
group, i.e., the disadvantaged group. In this paper, we reproduce a user-oriented
fairness study and provide extensive experiments to analyze the dependence of
the proposed method on various fairness and recommendation aspects, including
the recommendation domain, nature of the base ranking model, and user grouping
method. Moreover, we evaluate the final recommendations provided by the
re-ranking framework from both user- (e.g., NDCG, user-fairness) and item-side
(e.g., novelty, item-fairness) metrics. We discover interesting trends and
trade-offs between the model's performance in terms of different evaluation
metrics. For instance, we see that the definition of the
advantaged/disadvantaged user groups plays a crucial role in the effectiveness
of the fairness algorithm and how it improves the performance of specific base
ranking models. Finally, we highlight some important open challenges and future
directions in this field. We release the data, evaluation pipeline, and the
trained models publicly on https://github.com/rahmanidashti/FairRecSys.
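The activity-based grouping and per-group NDCG evaluation described in the abstract can be sketched as follows. This is an illustrative sketch, not the authors' released pipeline (which lives in the linked repository): the top-5% activity cutoff, the relevance lists, and all function names here are assumptions made for the example.

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked items."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_rels, k):
    """NDCG: DCG normalized by the DCG of the ideal (sorted) ranking."""
    ideal_dcg = dcg_at_k(sorted(ranked_rels, reverse=True), k)
    return dcg_at_k(ranked_rels, k) / ideal_dcg if ideal_dcg > 0 else 0.0

def group_fairness_gap(user_rels, user_activity, k=10, top_frac=0.05):
    """Split users into an advantaged group (the most active `top_frac`
    of users) and a disadvantaged group (everyone else), then return the
    absolute gap in mean NDCG@k between the two groups."""
    users = sorted(user_activity, key=user_activity.get, reverse=True)
    cut = max(1, int(len(users) * top_frac))
    advantaged, disadvantaged = users[:cut], users[cut:]
    mean_ndcg = lambda group: sum(ndcg_at_k(user_rels[u], k)
                                  for u in group) / len(group)
    return abs(mean_ndcg(advantaged) - mean_ndcg(disadvantaged))
```

A smaller gap indicates a fairer system under this grouping; as the abstract notes, how the advantaged/disadvantaged boundary is drawn can strongly affect the measured gap.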
Related papers
- A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z)
- Improving Recommendation Fairness via Data Augmentation [66.4071365614835]
Collaborative filtering based recommendation learns users' preferences from all users' historical behavior data, and has been popular to facilitate decision making.
A recommender system is considered unfair when it does not perform equally well for different user groups according to users' sensitive attributes.
In this paper, we study how to improve recommendation fairness from the data augmentation perspective.
arXiv Detail & Related papers (2023-02-13T13:11:46Z)
- Equal Experience in Recommender Systems [21.298427869586686]
We introduce a novel fairness notion (that we call equal experience) to regulate unfairness in the presence of biased data.
We propose an optimization framework that incorporates the fairness notion as a regularization term, as well as introduce computationally-efficient algorithms that solve the optimization.
arXiv Detail & Related papers (2022-10-12T05:53:05Z)
- Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
arXiv Detail & Related papers (2022-04-29T19:13:23Z)
- The Unfairness of Active Users and Popularity Bias in Point-of-Interest Recommendation [4.578469978594752]
This paper studies the interplay between (i) the unfairness of active users, (ii) the unfairness of popular items, and (iii) the accuracy of recommendation as three angles of our study triangle.
For item fairness, we divide items into short-head, mid-tail, and long-tail groups and study the exposure of these item groups in users' top-k recommendation lists.
Our study shows that most recommendation models cannot satisfy both consumer and producer fairness, indicating a trade-off between these variables possibly due to natural biases in data.
arXiv Detail & Related papers (2022-02-27T08:02:19Z)
- FEBR: Expert-Based Recommendation Framework for beneficial and personalized content [77.86290991564829]
We propose FEBR (Expert-Based Recommendation Framework), an apprenticeship learning framework to assess the quality of the recommended content.
The framework exploits the demonstrated trajectories of an expert (assumed to be reliable) in a recommendation evaluation environment, to recover an unknown utility function.
We evaluate the performance of our solution through a user interest simulation environment (using RecSim).
arXiv Detail & Related papers (2021-07-17T18:21:31Z)
- Towards Personalized Fairness based on Causal Notion [18.5897206797918]
We introduce a framework for achieving counterfactually fair recommendations through adversarial learning.
Our method can generate fairer recommendations for users with a desirable recommendation performance.
arXiv Detail & Related papers (2021-05-20T15:24:34Z)
- User-oriented Fairness in Recommendation [21.651482297198687]
We address the unfairness problem in recommender systems from the user perspective.
We group users into advantaged and disadvantaged groups according to their level of activity.
Our approach can not only improve group fairness of users in recommender systems, but also achieve better overall recommendation performance.
arXiv Detail & Related papers (2021-04-21T17:50:31Z)
- DeepFair: Deep Learning for Improving Fairness in Recommender Systems [63.732639864601914]
The lack of bias management in Recommender Systems leads to minority groups receiving unfair recommendations.
We propose a Deep Learning based Collaborative Filtering algorithm that provides recommendations with an optimum balance between fairness and accuracy without knowing demographic information about the users.
arXiv Detail & Related papers (2020-06-09T13:39:38Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.