Opportunistic Multi-aspect Fairness through Personalized Re-ranking
- URL: http://arxiv.org/abs/2005.12974v1
- Date: Thu, 21 May 2020 04:25:20 GMT
- Title: Opportunistic Multi-aspect Fairness through Personalized Re-ranking
- Authors: Nasim Sonboli, Farzad Eskandanian, Robin Burke, Weiwen Liu, Bamshad
Mobasher
- Abstract summary: We present a re-ranking approach to fairness-aware recommendation that learns individual preferences across multiple fairness dimensions.
We show that our opportunistic and metric-agnostic approach achieves a better trade-off between accuracy and fairness than prior re-ranking approaches.
- Score: 5.8562079474220665
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As recommender systems have become more widespread and moved into areas with
greater social impact, such as employment and housing, researchers have begun
to seek ways to ensure fairness in the results that such systems produce. This
work has primarily focused on developing recommendation approaches in which
fairness metrics are jointly optimized along with recommendation accuracy.
However, previous work has largely ignored how individual preferences may
limit the ability of an algorithm to produce fair recommendations. Furthermore,
with few exceptions, researchers have only considered scenarios in which
fairness is measured relative to a single sensitive feature or attribute (such
as race or gender). In this paper, we present a re-ranking approach to
fairness-aware recommendation that learns individual preferences across
multiple fairness dimensions and uses them to enhance provider fairness in
recommendation results. Specifically, we show that our opportunistic and
metric-agnostic approach achieves a better trade-off between accuracy and
fairness than prior re-ranking approaches and does so across multiple fairness
dimensions.
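The abstract does not spell out the re-ranking procedure itself, so the following is only a minimal sketch of the general idea under stated assumptions: a greedy re-ranker that re-scores candidates from a base recommender using per-user tolerance weights learned for each fairness dimension. All function names, data structures, and the specific bonus formula are illustrative, not the authors' implementation.

```python
from collections import defaultdict

def rerank(candidates, scores, item_groups, user_weights, k=10):
    """Greedy multi-aspect fairness-aware re-ranking (illustrative sketch).

    candidates   : list of item ids, pre-ranked by a base recommender
    scores       : dict item_id -> base relevance score
    item_groups  : dict item_id -> {dimension: True if the item belongs to the
                   protected provider group of that fairness dimension}
    user_weights : dict dimension -> learned per-user tolerance weight in [0, 1];
                   larger values mean this user can absorb more fairness promotion
    """
    candidates = list(candidates)          # avoid mutating the caller's list
    selected = []
    protected_count = defaultdict(int)     # protected items picked so far, per dimension

    while candidates and len(selected) < k:
        def combined(item):
            # Relevance plus a bonus for protected items in dimensions that are
            # still under-represented in the list built so far.
            bonus = 0.0
            for dim, weight in user_weights.items():
                if item_groups.get(item, {}).get(dim, False):
                    bonus += weight / (1 + protected_count[dim])
            return scores[item] + bonus

        best = max(candidates, key=combined)
        selected.append(best)
        candidates.remove(best)
        for dim in user_weights:
            if item_groups.get(best, {}).get(dim, False):
                protected_count[dim] += 1
    return selected
```

In a sketch like this, users whose profiles indicate openness to diverse providers would carry larger `user_weights` and therefore receive more fairness promotion, while users with narrow preferences are left mostly untouched; that opportunistic, per-user behavior is what the abstract describes, independently of any particular fairness metric.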
Related papers
- Fair Reciprocal Recommendation in Matching Markets [0.8287206589886881]
We investigate reciprocal recommendation in two-sided matching markets, where agents are divided into two sides.
In our model, a match is considered successful only when both individuals express interest in each other.
We introduce a fairness criterion for this setting, envy-freeness, drawn from fair division theory (a toy check of this criterion is sketched after this list).
arXiv Detail & Related papers (2024-09-01T13:33:41Z)
- A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z)
- Improving Recommendation Fairness via Data Augmentation [66.4071365614835]
Collaborative filtering based recommendation learns users' preferences from all users' historical behavior data and is widely used to facilitate decision making.
A recommender system is considered unfair when it does not perform equally well for different user groups according to users' sensitive attributes.
In this paper, we study how to improve recommendation fairness from the data augmentation perspective.
arXiv Detail & Related papers (2023-02-13T13:11:46Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Experiments on Generalizability of User-Oriented Fairness in Recommender Systems [2.0932879442844476]
A fairness-aware recommender system aims to treat different user groups similarly.
We propose a user-centered fairness re-ranking framework applied on top of a base ranking model.
We evaluate the final recommendations produced by the re-ranking framework with both user-side (e.g., NDCG) and item-side (e.g., novelty, item-fairness) metrics; a per-group NDCG computation of this kind is sketched after this list.
arXiv Detail & Related papers (2022-05-17T12:36:30Z)
- A Graph-based Approach for Mitigating Multi-sided Exposure Bias in Recommender Systems [7.3129791870997085]
We introduce FairMatch, a graph-based algorithm that improves exposure fairness for items and suppliers.
A comprehensive set of experiments on two datasets and comparison with state-of-the-art baselines show that FairMatch, while significantly improving exposure fairness and aggregate diversity, maintains an acceptable level of relevance in the recommendations.
arXiv Detail & Related papers (2021-07-07T18:01:26Z)
- Balancing Accuracy and Fairness for Interactive Recommendation with Reinforcement Learning [68.25805655688876]
Fairness in recommendation has attracted increasing attention due to bias and discrimination possibly caused by traditional recommenders.
We propose a reinforcement learning based framework, FairRec, to dynamically maintain a long-term balance between accuracy and fairness in interactive recommender systems (IRS).
Extensive experiments validate that FairRec can improve fairness, while preserving good recommendation quality.
arXiv Detail & Related papers (2021-06-25T02:02:51Z)
- "And the Winner Is...": Dynamic Lotteries for Multi-group Fairness-Aware Recommendation [37.35485045640196]
We argue that the previous literature has been based on simple, uniform, and often uni-dimensional notions of fairness.
We explicitly represent the design decisions that enter into the trade-off between accuracy and fairness across multiply-defined and intersecting protected groups.
We formulate lottery-based mechanisms for choosing between fairness concerns, and demonstrate their performance in two recommendation domains.
arXiv Detail & Related papers (2020-09-05T20:15:14Z)
- DeepFair: Deep Learning for Improving Fairness in Recommender Systems [63.732639864601914]
The lack of bias management in Recommender Systems leads to minority groups receiving unfair recommendations.
We propose a Deep Learning based Collaborative Filtering algorithm that provides recommendations with an optimum balance between fairness and accuracy without knowing demographic information about the users.
arXiv Detail & Related papers (2020-06-09T13:39:38Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations due to insufficient training data for these users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
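For the "Fair Reciprocal Recommendation in Matching Markets" entry above, the envy-freeness criterion it borrows from fair division theory can be illustrated with a toy check. This is only the classical definition applied to a deterministic allocation of items to agents; the paper itself studies fairness over reciprocal recommendation policies, which this sketch does not capture, and the utility-matrix and allocation formats here are assumptions.

```python
import numpy as np

def is_envy_free(utility, allocation):
    """Return True if no agent strictly prefers another agent's bundle to its own.

    utility    : (n_agents, n_items) array; utility[i, j] is agent i's value for item j
    allocation : list of length n_agents; allocation[i] holds the item indices
                 assigned to agent i
    """
    def bundle_value(agent, bundle):
        return sum(utility[agent, j] for j in bundle)

    n_agents = utility.shape[0]
    for i in range(n_agents):
        own_value = bundle_value(i, allocation[i])
        for other in range(n_agents):
            if other != i and bundle_value(i, allocation[other]) > own_value:
                return False   # agent i envies agent `other`
    return True

# Toy example: two agents, two items, each agent values a different item more.
utility = np.array([[3.0, 1.0],
                    [1.0, 3.0]])
print(is_envy_free(utility, [[0], [1]]))   # True: each agent got its favourite item
print(is_envy_free(utility, [[1], [0]]))   # False: both agents envy each other
```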
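For the "Experiments on Generalizability of User-Oriented Fairness" and "Fairness-Aware Explainable Recommendation over Knowledge Graphs" entries above, the kind of per-group accuracy comparison they describe (e.g., NDCG for active versus inactive users) can be sketched as below. This is a generic NDCG@k with binary relevance; the exact evaluation protocols and metric variants used in those papers may differ.

```python
import math
from collections import defaultdict

def ndcg_at_k(recommended, relevant, k=10):
    """NDCG@k with binary gains: `recommended` is a ranked list of item ids,
    `relevant` is the set of held-out items the user actually interacted with."""
    dcg = sum(1.0 / math.log2(rank + 2)
              for rank, item in enumerate(recommended[:k]) if item in relevant)
    idcg = sum(1.0 / math.log2(rank + 2) for rank in range(min(len(relevant), k)))
    return dcg / idcg if idcg > 0 else 0.0

def ndcg_by_group(recommendations, ground_truth, user_group, k=10):
    """Average NDCG@k per user group, e.g. 'active' vs. 'inactive' users.

    recommendations : dict user -> ranked list of recommended items
    ground_truth    : dict user -> set of relevant held-out items
    user_group      : dict user -> group label
    """
    per_group = defaultdict(list)
    for user, recs in recommendations.items():
        per_group[user_group[user]].append(ndcg_at_k(recs, ground_truth[user], k))
    return {group: sum(values) / len(values) for group, values in per_group.items()}
```

A large gap between the returned group averages is the kind of performance disparity these papers aim to reduce.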