Joint Multisided Exposure Fairness for Recommendation
- URL: http://arxiv.org/abs/2205.00048v1
- Date: Fri, 29 Apr 2022 19:13:23 GMT
- Title: Joint Multisided Exposure Fairness for Recommendation
- Authors: Haolun Wu, Bhaskar Mitra, Chen Ma, Fernando Diaz and Xue Liu
- Abstract summary: This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
- Score: 76.75990595228666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prior research on exposure fairness in the context of recommender systems has
focused mostly on disparities in the exposure of individual or groups of items
to individual users of the system. The problem of how individual or groups of
items may be systemically under- or over-exposed to groups of users, or even all
users, has received relatively less attention. However, such systemic
disparities in information exposure can result in observable social harms, such
as withholding economic opportunities from historically marginalized groups
(allocative harm) or amplifying gendered and racialized stereotypes
(representational harm). Previously, Diaz et al. developed the expected
exposure metric, which incorporates user browsing models from the information
retrieval literature, to study fairness of content exposure to individual
users. We extend their proposed framework to
formalize a family of exposure fairness metrics that model the problem jointly
from the perspective of both the consumers and producers. Specifically, we
consider group attributes for both types of stakeholders to identify and
mitigate fairness concerns that go beyond individual users and items towards
more systemic biases in recommendation. Furthermore, we study and discuss the
relationships between the different exposure fairness dimensions proposed in
this paper, as well as demonstrate how stochastic ranking policies can be
optimized towards said fairness goals.
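To make the expected exposure framing concrete, here is a minimal sketch in Python: it estimates per-user item exposure under a stochastic ranking policy with an RBP-style browsing model (patience parameter `gamma`), aggregates the user-item exposure matrix to the group level for both consumers and producers, and scores disparity against a target. The function names, the Monte-Carlo estimator, and the quadratic disparity are illustrative assumptions, not the paper's exact metrics.
```python
import numpy as np

def rbp_exposure(ranking, num_items, gamma=0.8):
    """Exposure each item receives from one ranking when the probability
    of examining rank k is gamma**(k-1) (RBP-style browsing model)."""
    exposure = np.zeros(num_items)
    for rank, item in enumerate(ranking):
        exposure[item] = gamma ** rank  # rank is 0-indexed here
    return exposure

def expected_exposure(sampled_rankings, num_items, gamma=0.8):
    """Monte-Carlo estimate of expected exposure under a stochastic
    ranking policy, averaging over rankings sampled from the policy."""
    return np.mean([rbp_exposure(r, num_items, gamma)
                    for r in sampled_rankings], axis=0)

def group_exposure(E, user_groups, item_groups):
    """Aggregate a (users x items) exposure matrix E into a
    (user-groups x item-groups) matrix using one-hot membership
    matrices, averaging within each group."""
    A = user_groups / user_groups.sum(axis=0, keepdims=True)
    B = item_groups / item_groups.sum(axis=0, keepdims=True)
    return A.T @ E @ B

def joint_disparity(E_system, E_target, user_groups, item_groups):
    """Squared deviation between the system's and a target policy's
    group-level exposure: one illustrative joint multisided measure."""
    D = (group_exposure(E_system, user_groups, item_groups)
         - group_exposure(E_target, user_groups, item_groups))
    return float((D ** 2).sum())
```
Here `E_target` would come from an ideal policy, e.g., one that ranks by true relevance and randomizes ties; the paper's consumer-side, producer-side, and joint metrics differ in how they slice and compare this group-level matrix.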
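The abstract also states that stochastic ranking policies can be optimized towards these fairness goals. The PyTorch sketch below makes one deliberate simplification: the policy is a per-user softmax over item scores, and each item's expected exposure is approximated as the user's exposure budget times its selection probability. This stands in for a full Plackett-Luce expected-exposure estimator and is not the paper's method; all tensor names are hypothetical.
```python
import torch

def optimize_policy(scores, E_target, user_groups, item_groups,
                    budget, lr=0.1, steps=200):
    """Gradient-descend per-user item scores so that group-level
    expected exposure approaches a target (illustrative sketch)."""
    theta = scores.clone().requires_grad_(True)
    A = user_groups / user_groups.sum(0, keepdim=True)  # users x user-groups
    B = item_groups / item_groups.sum(0, keepdim=True)  # items x item-groups
    target = A.T @ E_target @ B                         # group-level target
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        # Simplification: each user's browsing budget is spread over
        # items according to the softmax policy probabilities.
        E = budget.unsqueeze(1) * torch.softmax(theta, dim=1)
        loss = ((A.T @ E @ B - target) ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return theta.detach()
```
A plausible choice for `budget` is the total examination probability a user supplies under the browsing model (e.g., the sum of gamma**k over list positions).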
Related papers
- FairDgcl: Fairness-aware Recommendation with Dynamic Graph Contrastive Learning [48.38344934125999]
We study how to implement high-quality data augmentation to improve recommendation fairness.
Specifically, we propose FairDgcl, a dynamic graph adversarial contrastive learning framework.
We show that FairDgcl can simultaneously generate enhanced representations that are both fair and accurate.
arXiv Detail & Related papers (2024-10-23T04:43:03Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Counterpart Fairness -- Addressing Systematic between-group Differences in Fairness Evaluation [17.495053606192375]
When using machine learning to aid decision-making, it is critical to ensure that an algorithmic decision is fair and does not discriminate against specific individuals/groups.
Existing group fairness methods aim to ensure equal outcomes across groups delineated by protected variables like race or gender.
In cases where systematic differences between groups play a significant role in outcomes, these methods may overlook the influence of non-protected variables.
arXiv Detail & Related papers (2023-05-29T15:41:12Z)
- Causal Disentanglement with Network Information for Debiased Recommendations [34.698181166037564]
Recent research proposes to debias by modeling a recommender system from a causal perspective.
The critical challenge in this setting is accounting for the hidden confounders.
We propose to leverage network information (i.e., user-social and user-item networks) to better approximate hidden confounders.
arXiv Detail & Related papers (2022-04-14T20:55:11Z)
- Deep Causal Reasoning for Recommendations [47.83224399498504]
A new trend in recommender system research is to negate the influence of confounders from a causal perspective.
We model the recommendation as a multi-cause multi-outcome (MCMO) inference problem.
We show that MCMO modeling may lead to high variance due to scarce observations associated with the high-dimensional causal space.
arXiv Detail & Related papers (2022-01-06T15:00:01Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- A Graph-based Approach for Mitigating Multi-sided Exposure Bias in Recommender Systems [7.3129791870997085]
We introduce FairMatch, a graph-based algorithm that improves exposure fairness for items and suppliers.
A comprehensive set of experiments on two datasets and comparison with state-of-the-art baselines show that FairMatch, while significantly improving exposure fairness and aggregate diversity, maintains an acceptable level of relevance of the recommendations.
arXiv Detail & Related papers (2021-07-07T18:01:26Z)
- Towards Fair Personalization by Avoiding Feedback Loops [3.180077164673223]
Self-reinforcing feedback loops are both cause and effect of the over- and/or under-presentation of some content in interactive recommender systems.
We consider two models that either explicitly incorporate or ignore the systematic and limited exposure to alternatives.
arXiv Detail & Related papers (2020-12-20T19:28:57Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness-constrained re-ranking approach to mitigate this problem in the context of explainable recommendation over knowledge graphs; a generic sketch of such fairness-aware re-ranking appears after this list.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
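Several of the related papers above mitigate exposure bias by re-ranking. As a generic illustration of that idea (not any one paper's algorithm; `lam`, `item_group`, and the even-share target are all assumptions), a greedy re-ranker can trade item relevance against balanced item-group exposure:
```python
import numpy as np

def fair_rerank(relevance, item_group, num_groups, k, lam=0.5):
    """Greedy fairness-aware re-ranking: at each position, pick the item
    maximizing relevance minus a penalty on how far its group would
    exceed an even share of the positions filled so far. `lam` trades
    off relevance against exposure balance. Illustrative only."""
    chosen, counts = [], np.zeros(num_groups)
    candidates = set(range(len(relevance)))
    for pos in range(k):
        target = (pos + 1) / num_groups  # even-share count per group
        best, best_score = None, -np.inf
        for i in candidates:
            over = max(0.0, counts[item_group[i]] + 1 - target)
            score = relevance[i] - lam * over
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
        counts[item_group[best]] += 1
        candidates.remove(best)
    return chosen
```
With `lam=0` this reduces to ranking by relevance; larger values push the list towards an even distribution of positions across item groups.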
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.