Towards Fair Personalization by Avoiding Feedback Loops
- URL: http://arxiv.org/abs/2012.12862v1
- Date: Sun, 20 Dec 2020 19:28:57 GMT
- Title: Towards Fair Personalization by Avoiding Feedback Loops
- Authors: Gökhan Çapan, Özge Bozal, İlker Gündoğdu, Ali Taylan Cemgil
- Abstract summary: Self-reinforcing feedback loops are both cause and effect of over- and/or under-presentation of some content in interactive recommender systems.
We consider two models that explicitly incorporate, or ignore, the systematic and limited exposure to alternatives.
- Score: 3.180077164673223
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-reinforcing feedback loops are both cause and effect of over- and/or
under-presentation of some content in interactive recommender systems. This
leads to erroneous user preference estimates, namely, overestimation of
over-presented content, and violates each alternative's right to be presented;
we define a fair system as one that upholds this right. We consider two
models that explicitly incorporate, or ignore, the systematic and limited
exposure to alternatives. By simulations, we demonstrate that ignoring the
systematic presentations overestimates promoted options and underestimates
censored alternatives. Simply conditioning on the limited exposure is a remedy
for these biases.
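The bias the abstract describes can be illustrated with a small simulation. This is a sketch under assumed settings, not the paper's models: the item count, the true preference vector, and the exposure probabilities below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

true_pref = np.array([0.2, 0.4, 0.5, 0.6, 0.8])  # true acceptance probabilities
# Systematic, limited exposure: item 0 is censored, item 4 is promoted.
exposure = np.array([0.05, 0.2, 0.2, 0.2, 0.9])

n_rounds = 20000
shown = rng.random((n_rounds, 5)) < exposure
clicked = shown & (rng.random((n_rounds, 5)) < true_pref)

# Naive estimate: ignores which items were actually presented.
naive = clicked.sum(axis=0) / n_rounds
# Conditioning on the limited exposure: divide clicks by presentation counts.
conditioned = clicked.sum(axis=0) / shown.sum(axis=0)

print("naive:      ", naive.round(2))        # censored item looks unpopular
print("conditioned:", conditioned.round(2))  # tracks true_pref
```

The naive estimator inflates the promoted item relative to the censored one, while simply conditioning on exposure recovers the true preferences.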
Related papers
- Mitigating Exposure Bias in Online Learning to Rank Recommendation: A Novel Reward Model for Cascading Bandits [23.15042648884445]
We study exposure bias in a class of well-known contextual bandit algorithms known as Linear Cascading Bandits.
We propose an Exposure-Aware reward model that updates the model parameters based on two factors: 1) implicit user feedback and 2) the position of the item in the recommendation list.
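A rough sketch of such a position-aware update is below. The `gamma ** position` weight and the incremental weighted mean are illustrative assumptions, not the paper's exact reward model.

```python
def exposure_aware_update(theta, weight_sums, item, position, clicked, gamma=0.9):
    """Incrementally update the attraction estimate for `item`, weighting
    the binary feedback by a position-based exposure weight so that clicks
    (or skips) observed low in the list move the estimate less."""
    w = gamma ** position                 # assumed examination weight
    weight_sums[item] += w
    # Weighted incremental mean of the observed click indicators.
    theta[item] += w * (float(clicked) - theta[item]) / weight_sums[item]

# Example: an always-clicked item at the top converges to an estimate of 1.0,
# while a single low-position observation carries a smaller weight.
theta, weight_sums = [0.0, 0.0], [0.0, 0.0]
for _ in range(10):
    exposure_aware_update(theta, weight_sums, item=0, position=0, clicked=True)
exposure_aware_update(theta, weight_sums, item=1, position=3, clicked=True)
```

Combining the feedback signal with the item's position this way keeps rarely-examined positions from dragging estimates down.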
arXiv Detail & Related papers (2024-08-08T09:35:01Z)
- Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation [67.88747330066049]
Fine-grained feedback captures nuanced distinctions in image quality and prompt-alignment.
We show that demonstrating its superiority over coarse-grained feedback is not automatic.
We identify key challenges in eliciting and utilizing fine-grained feedback.
arXiv Detail & Related papers (2024-06-24T17:19:34Z)
- A First Look at Selection Bias in Preference Elicitation for Recommendation [64.44255178199846]
We study the effect of selection bias in preference elicitation on the resulting recommendations.
A big hurdle is the lack of any publicly available dataset that has preference elicitation interactions.
We propose a simulation of a topic-based preference elicitation process.
arXiv Detail & Related papers (2024-05-01T14:56:56Z)
- Be Aware of the Neighborhood Effect: Modeling Selection Bias under Interference [50.95521705711802]
Previous studies have focused on addressing selection bias to achieve unbiased learning of the prediction model.
This paper formally formulates the neighborhood effect as an interference problem from the perspective of causal inference.
We propose a novel ideal loss that can be used to deal with selection bias in the presence of neighborhood effect.
arXiv Detail & Related papers (2024-04-30T15:20:41Z)
- Debiasing Recommendation by Learning Identifiable Latent Confounders [49.16119112336605]
Confounding bias arises due to the presence of unmeasured variables that can affect both a user's exposure and feedback.
Existing methods either (1) make untenable assumptions about these unmeasured variables or (2) directly infer latent confounders from users' exposure.
We propose a novel method, i.e., identifiable deconfounder (iDCF), which leverages a set of proxy variables to resolve the aforementioned non-identification issue.
arXiv Detail & Related papers (2023-02-10T05:10:26Z)
- Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
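A minimal version of such a group-level exposure computation might look like the following sketch. The DCG-style `gamma ** rank` position discount and the group labels are illustrative assumptions, not the paper's exact metric family.

```python
def group_exposure(rankings, item_group, gamma=0.85):
    """Average position-discounted exposure each producer group receives
    across a set of rankings (one ranking per user or request)."""
    totals = {}
    for ranking in rankings:
        for rank, item in enumerate(ranking):
            g = item_group[item]
            totals[g] = totals.get(g, 0.0) + gamma ** rank
    return {g: v / len(rankings) for g, v in totals.items()}

# Two users see the same two items in opposite orders, so the two
# producer groups end up with equal average exposure.
exp = group_exposure([[0, 1], [1, 0]], item_group={0: "A", 1: "B"}, gamma=0.5)
```

Comparing such per-group exposure values against per-group relevance or utility is one way to surface the systemic imbalances the paper targets.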
arXiv Detail & Related papers (2022-04-29T19:13:23Z)
- Long-term Dynamics of Fairness Intervention in Connection Recommender Systems [5.048563042541915]
We study a connection recommender system patterned after the systems employed by web-scale social networks.
We find that, although seemingly fair in aggregate, common exposure and utility parity interventions fail to mitigate amplification of biases in the long term.
arXiv Detail & Related papers (2022-03-30T16:27:48Z)
- Correcting the User Feedback-Loop Bias for Recommendation Systems [34.44834423714441]
We propose a systematic and dynamic way to correct user feedback-loop bias in recommendation systems.
Our method includes a deep-learning component to learn each user's dynamic rating history embedding.
We empirically validated the existence of such user feedback-loop bias in real-world recommendation systems.
arXiv Detail & Related papers (2021-09-13T15:02:55Z)
- Adversarial Counterfactual Learning and Evaluation for Recommender System [33.44276155380476]
We show in theory that applying supervised learning to detect user preferences may end up with inconsistent results in the absence of exposure information.
We propose a principled solution by introducing a minimax empirical risk formulation.
arXiv Detail & Related papers (2020-11-08T00:40:51Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.