Towards Fair Personalization by Avoiding Feedback Loops
- URL: http://arxiv.org/abs/2012.12862v1
- Date: Sun, 20 Dec 2020 19:28:57 GMT
- Title: Towards Fair Personalization by Avoiding Feedback Loops
- Authors: G\"okhan \c{C}apan, \"Ozge Bozal, \.Ilker G\"undo\u{g}du, Ali Taylan
Cemgil
- Abstract summary: Self-reinforcing feedback loops are both cause and effect of the over- and/or under-presentation of some content in interactive recommender systems.
We consider two models that either explicitly incorporate or ignore the systematic and limited exposure to alternatives.
- Score: 3.180077164673223
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-reinforcing feedback loops are both cause and effect of the over-
and/or under-presentation of some content in interactive recommender systems.
This leads to erroneous user preference estimates, namely, overestimation of
over-presented content, and violates each alternative's right to be presented;
we define a system that upholds this right as fair. We consider two models
that either explicitly incorporate or ignore the systematic and limited
exposure to alternatives. By simulations, we demonstrate that ignoring the
systematic presentations overestimates promoted options and underestimates
censored alternatives. Simply conditioning on the limited exposure is a remedy
for these biases.
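Below is a minimal simulation sketch of the phenomenon the abstract describes. It illustrates the general effect under assumed dynamics rather than the paper's actual models: clicks are Bernoulli with fixed per-item rates, the presentation policy greedily promotes the items with the most clicks so far (with a little random exploration), and all names (`true_pref`, `n_rounds`, `k`) are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_rounds, k = 20, 5000, 3
true_pref = rng.uniform(0.2, 0.8, n_items)   # true per-item click rates

clicks = np.zeros(n_items)
shows = np.zeros(n_items)

for _ in range(n_rounds):
    if rng.random() < 0.05:                  # occasional exploration round
        shown = rng.choice(n_items, size=k, replace=False)
    else:                                    # self-reinforcing policy:
        shown = np.argsort(clicks)[-k:]      # promote the current "winners"
    shows[shown] += 1
    clicks[shown] += rng.random(k) < true_pref[shown]

# Ignoring exposure: every round is treated as an impression for every
# item, so censored items are dragged toward zero and the promoted few
# dominate the ranking regardless of their true quality.
naive = clicks / n_rounds
# Conditioning on the limited exposure: clicks are divided by the number
# of times each item was actually presented.
conditioned = clicks / np.maximum(shows, 1)

print("mean |error|, ignoring exposure       :", np.abs(naive - true_pref).mean())
print("mean |error|, conditioning on exposure:", np.abs(conditioned - true_pref).mean())
```

Under these assumptions the exposure-agnostic estimate ranks whatever the policy happened to promote far above the censored alternatives, while dividing by the actual presentation counts recovers the true preferences up to sampling noise.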
Related papers
- Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation [67.88747330066049]
Fine-grained feedback captures nuanced distinctions in image quality and prompt alignment.
We show that demonstrating its superiority to coarse-grained feedback is not automatic.
We identify key challenges in eliciting and utilizing fine-grained feedback.
arXiv Detail & Related papers (2024-06-24T17:19:34Z) - Content-Agnostic Moderation for Stance-Neutral Recommendation [13.210645250173997]
Content-agnostic moderation does not rely on the actual content being moderated, arguably making it less prone to forms of censorship.
We introduce two novel content-agnostic moderation methods that modify the recommendations from the content recommender to disperse user-item co-clusters without relying on content features.
Our results indicate that achieving stance neutrality without direct content information is not only feasible but can also help in developing more balanced and informative recommendation systems without substantially degrading user engagement.
arXiv Detail & Related papers (2024-05-29T09:50:39Z) - Be Aware of the Neighborhood Effect: Modeling Selection Bias under Interference [50.95521705711802]
Previous studies have focused on addressing selection bias to achieve unbiased learning of the prediction model.
This paper formally formulates the neighborhood effect as an interference problem from the perspective of causal inference.
We propose a novel ideal loss that can be used to deal with selection bias in the presence of the neighborhood effect.
arXiv Detail & Related papers (2024-04-30T15:20:41Z) - Debiasing Recommendation by Learning Identifiable Latent Confounders [49.16119112336605]
Confounding bias arises due to the presence of unmeasured variables that can affect both a user's exposure and feedback.
Existing methods either (1) make untenable assumptions about these unmeasured variables or (2) directly infer latent confounders from users' exposure.
We propose a novel method, the identifiable deconfounder (iDCF), which leverages a set of proxy variables to resolve the resulting non-identification issue.
arXiv Detail & Related papers (2023-02-10T05:10:26Z) - Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation (a toy group-exposure computation in this spirit is sketched after this list).
arXiv Detail & Related papers (2022-04-29T19:13:23Z) - Long-term Dynamics of Fairness Intervention in Connection Recommender
Systems [5.048563042541915]
We study a connection recommender system patterned after the systems employed by web-scale social networks.
We find that, although seemingly fair in aggregate, common exposure and utility parity interventions fail to mitigate amplification of biases in the long term.
arXiv Detail & Related papers (2022-03-30T16:27:48Z) - Correcting the User Feedback-Loop Bias for Recommendation Systems [34.44834423714441]
We propose a systematic and dynamic way to correct user feedback-loop bias in recommendation systems.
Our method includes a deep-learning component to learn each user's dynamic rating history embedding.
We empirically validate the existence of such user feedback-loop bias in real-world recommendation systems.
arXiv Detail & Related papers (2021-09-13T15:02:55Z) - PURS: Personalized Unexpected Recommender System for Improving User
Satisfaction [76.98616102965023]
We describe a novel Personalized Unexpected Recommender System (PURS) model that incorporates unexpectedness into the recommendation process.
Extensive offline experiments on three real-world datasets illustrate that the proposed PURS model significantly outperforms the state-of-the-art baseline approaches.
arXiv Detail & Related papers (2021-06-05T01:33:21Z) - Adversarial Counterfactual Learning and Evaluation for Recommender
System [33.44276155380476]
We show in theory that applying supervised learning to detect user preferences may yield inconsistent results in the absence of exposure information.
We propose a principled solution by introducing a minimax empirical risk formulation.
arXiv Detail & Related papers (2020-11-08T00:40:51Z) - Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)