Equal Experience in Recommender Systems
- URL: http://arxiv.org/abs/2210.05936v1
- Date: Wed, 12 Oct 2022 05:53:05 GMT
- Title: Equal Experience in Recommender Systems
- Authors: Jaewoong Cho, Moonseok Choi, Changho Suh
- Abstract summary: We introduce a novel fairness notion (that we call equal experience) to regulate unfairness in the presence of biased data.
We propose an optimization framework that incorporates the fairness notion as a regularization term, and introduce computationally efficient algorithms that solve the optimization.
- Score: 21.298427869586686
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore the fairness issue that arises in recommender systems. Biased data
due to inherent stereotypes of particular groups (e.g., male students' average
rating on mathematics is often higher than that on humanities, and vice versa
for females) may yield a limited scope of suggested items to a certain group of
users. Our main contribution lies in the introduction of a novel fairness
notion (that we call equal experience), which can serve to regulate such
unfairness in the presence of biased data. The notion captures the degree of
the equal experience of item recommendations across distinct groups. We propose
an optimization framework that incorporates the fairness notion as a
regularization term, and introduce computationally efficient algorithms
that solve the optimization. Experiments on synthetic and benchmark real
datasets demonstrate that the proposed framework can indeed mitigate such
unfairness while incurring only a minor degradation in recommendation accuracy.
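As a concrete illustration of the recipe the abstract describes, below is a minimal sketch of a matrix-factorization loss with a fairness regularizer. It is a stand-in, not the authors' implementation: the abstract does not give the exact form of the equal-experience term, so a simple group-disparity penalty on predicted scores plays its role, and every name below (equal_experience_loss, lam, the group and category masks) is an illustrative assumption.

```python
# Illustrative sketch only: matrix-factorization fit plus a stand-in
# "equal experience" regularizer; not the paper's actual formulation.
import torch

def equal_experience_loss(R, observed, U, V, group_a, group_b, category, lam=0.1):
    """Squared error on observed ratings plus a group-disparity penalty.

    R: (users x items) ratings; observed: boolean mask of known entries;
    U, V: user/item factor matrices; group_a, group_b: boolean user masks;
    category: boolean item mask (e.g., "mathematics" items); lam: trade-off.
    """
    pred = U @ V.T                                 # predicted user-item scores
    rec_loss = ((pred - R)[observed] ** 2).mean()  # fit observed ratings only
    # Stand-in fairness term: the two groups' mean predicted score on the
    # item category should match, so neither group's exposure to it collapses.
    gap = pred[group_a][:, category].mean() - pred[group_b][:, category].mean()
    return rec_loss + lam * gap ** 2
```

Minimizing this with any gradient optimizer realizes the trade-off the abstract reports: a larger lam pulls the groups' experiences together at the cost of some reconstruction accuracy.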
Related papers
- Improving Recommendation Fairness via Data Augmentation [66.4071365614835]
Collaborative filtering based recommendation learns users' preferences from all users' historical behavior data and is widely used to facilitate decision making.
A recommender system is considered unfair when it does not perform equally well for user groups defined by users' sensitive attributes.
In this paper, we study how to improve recommendation fairness from the data augmentation perspective.
arXiv Detail & Related papers (2023-02-13T13:11:46Z)
- Experiments on Generalizability of User-Oriented Fairness in Recommender Systems [2.0932879442844476]
A fairness-aware recommender system aims to treat different user groups similarly.
We propose a user-centered fairness re-ranking framework applied on top of a base ranking model.
We evaluate the final recommendations provided by the re-ranking framework using both user-side (e.g., NDCG) and item-side (e.g., novelty, item fairness) metrics.
arXiv Detail & Related papers (2022-05-17T12:36:30Z)
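NDCG, the user-side metric named in the entry above, has a standard definition; a brief, generic reference implementation (not taken from the paper's code):

```python
import math

def ndcg_at_k(ranked_rels, k):
    """NDCG@k for relevance labels (binary or graded) given in ranked order."""
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(ranked_rels[:k]))
    ideal = sorted(ranked_rels, reverse=True)[:k]   # best possible ordering
    idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# e.g., ndcg_at_k([1, 0, 1, 0], 3) ~= 0.92: the two relevant items sit at
# ranks 1 and 3 instead of the ideal ranks 1 and 2.
```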
- Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
arXiv Detail & Related papers (2022-04-29T19:13:23Z)
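Exposure metrics of this kind typically discount an item's visibility by rank position. A small illustration under that common assumption; the 1/log2(rank+1) discount and all names are generic choices, not taken from the paper:

```python
import math
from collections import defaultdict

def group_exposure(ranking, item_group):
    """Total position-discounted exposure per producer group.

    ranking: item ids in ranked order; item_group: item id -> group label.
    """
    exposure = defaultdict(float)
    for pos, item in enumerate(ranking, start=1):
        exposure[item_group[item]] += 1.0 / math.log2(pos + 1)
    return dict(exposure)

# e.g., group_exposure(["a", "b", "c"], {"a": "G1", "b": "G2", "c": "G1"})
# -> {"G1": 1.5, "G2": 0.63...}; comparing such totals across producer
# groups (and across user groups on the consumer side) is the building
# block for joint multisided exposure metrics.
```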
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- User-oriented Fairness in Recommendation [21.651482297198687]
We address the unfairness problem in recommender systems from the user perspective.
We group users into advantaged and disadvantaged groups according to their level of activity.
Our approach not only improves group fairness for users in recommender systems but also achieves better overall recommendation performance.
arXiv Detail & Related papers (2021-04-21T17:50:31Z)
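The advantaged/disadvantaged split used in the entry above is typically a simple activity cutoff. A hedged sketch of one such split; the top-5% threshold and the function name are illustrative assumptions, not the paper's exact protocol:

```python
def split_by_activity(interaction_counts, top_frac=0.05):
    """Split users into (advantaged, disadvantaged) sets by interaction count.

    interaction_counts: dict mapping user id -> number of interactions.
    """
    ranked = sorted(interaction_counts, key=interaction_counts.get, reverse=True)
    cut = max(1, int(len(ranked) * top_frac))   # the most active users
    return set(ranked[:cut]), set(ranked[cut:])
```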
- Achieving Fairness via Post-Processing in Web-Scale Recommender Systems [6.5191290612443105]
We extend two fairness definitions, equality of opportunity and equalized odds, to recommender systems.
We propose scalable methods for achieving equality of opportunity and equalized odds in rankings in the presence of position bias.
arXiv Detail & Related papers (2020-06-19T20:12:13Z)
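Equality of opportunity in a top-k ranking is commonly read as: among items a user actually liked, each protected group should have the same chance of reaching the top k. A hedged sketch of that check; the names and the top-k criterion are generic illustrations, not this paper's exact formulation:

```python
def opportunity_gap(topk, relevant, item_group, groups=("G1", "G2")):
    """Gap in P(item in top-k | item is relevant) between two groups.

    topk: set of recommended item ids; relevant: ids the user actually liked;
    item_group: item id -> group label.
    """
    rates = {}
    for g in groups:
        qualified = [i for i in relevant if item_group[i] == g]
        if not qualified:
            return None   # group has no qualified items; the gap is undefined
        rates[g] = sum(i in topk for i in qualified) / len(qualified)
    return rates[groups[0]] - rates[groups[1]]
```

A post-processing method would then re-rank until this gap, suitably corrected for position bias, is near zero.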
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn, from labeled data, a scoring function that ranks positive individuals higher than negative ones.
There are rising concerns about whether the learned scoring function causes systematic disparity across different protected groups.
We propose a model post-processing framework for balancing ranking fairness and algorithm utility in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- DeepFair: Deep Learning for Improving Fairness in Recommender Systems [63.732639864601914]
The lack of bias management in recommender systems leads to minority groups receiving unfair recommendations.
We propose a Deep Learning based Collaborative Filtering algorithm that provides recommendations with an optimum balance between fairness and accuracy without knowing demographic information about the users.
arXiv Detail & Related papers (2020-06-09T13:39:38Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations due to insufficient training data for them.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.