Correcting Exposure Bias for Link Recommendation
- URL: http://arxiv.org/abs/2106.07041v1
- Date: Sun, 13 Jun 2021 16:51:41 GMT
- Title: Correcting Exposure Bias for Link Recommendation
- Authors: Shantanu Gupta, Hao Wang, Zachary C. Lipton, Yuyang Wang
- Abstract summary: Exposure bias can arise when users are systematically underexposed to certain relevant items.
We propose estimators that leverage known exposure probabilities to mitigate this bias.
Our methods lead to greater diversity in the recommended papers' fields of study.
- Score: 31.799185352323807
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Link prediction methods are frequently applied in recommender systems, e.g.,
to suggest citations for academic papers or friends in social networks.
However, exposure bias can arise when users are systematically underexposed to
certain relevant items. For example, in citation networks, authors might be
more likely to encounter papers from their own field and thus cite them
preferentially. This bias can propagate through naively trained link
predictors, leading to both biased evaluation and high generalization error (as
assessed by true relevance). Moreover, this bias can be exacerbated by feedback
loops. We propose estimators that leverage known exposure probabilities to
mitigate this bias and consequent feedback loops. Next, we provide a loss
function for learning the exposure probabilities from data. Finally,
experiments on semi-synthetic data based on real-world citation networks show
that our methods reliably identify (truly) relevant citations. Additionally,
our methods lead to greater diversity in the recommended papers' fields of
study. The code is available at
https://github.com/shantanu95/exposure-bias-link-rec.
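The core correction described in the abstract can be illustrated with a minimal inverse-propensity-weighted estimate, assuming exposure probabilities are known. This is a sketch of the general technique, not the paper's implementation; all names below are illustrative.

```python
import numpy as np

def ips_relevance_estimate(clicks, exposure_probs, eps=1e-6):
    """Estimate true relevance via inverse-propensity scoring:
    observed interactions are up-weighted by 1 / P(exposure),
    so items a user was unlikely to see are not unfairly
    penalized relative to heavily exposed items."""
    exposure_probs = np.clip(exposure_probs, eps, 1.0)  # avoid division by zero
    return clicks / exposure_probs

# Toy example: two equally relevant papers, but paper B was shown
# to only 20% of users while paper A was shown to 80%.
clicks = np.array([0.8, 0.2])    # observed click rates
exposure = np.array([0.8, 0.2])  # known exposure probabilities
print(ips_relevance_estimate(clicks, exposure))  # -> [1. 1.]
```

After reweighting, both papers receive the same relevance estimate, which is the behavior that prevents the feedback loops the abstract describes.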
Related papers
- Language-guided Detection and Mitigation of Unknown Dataset Bias [23.299264313976213]
We propose a framework to identify potential biases as keywords without prior knowledge based on the partial occurrence in the captions.
Our framework not only outperforms existing methods without prior knowledge, but also is even comparable with a method that assumes prior knowledge.
arXiv Detail & Related papers (2024-06-05T03:11:33Z)
- Take Care of Your Prompt Bias! Investigating and Mitigating Prompt Bias in Factual Knowledge Extraction [56.17020601803071]
Recent research shows that pre-trained language models (PLMs) suffer from "prompt bias" in factual knowledge extraction.
This paper aims to improve the reliability of existing benchmarks by thoroughly investigating and mitigating prompt bias.
arXiv Detail & Related papers (2024-03-15T02:04:35Z)
- Robustly Improving Bandit Algorithms with Confounded and Selection Biased Offline Data: A Causal Approach [18.13887411913371]
This paper studies bandit problems where an agent has access to offline data that might be utilized to potentially improve the estimation of each arm's reward distribution.
We categorize the biases into confounding bias and selection bias based on the causal structure they imply.
We extract the causal bound for each arm that is robust towards compound biases from biased observational data.
arXiv Detail & Related papers (2023-12-20T03:03:06Z)
- Self-supervised debiasing using low rank regularization [59.84695042540525]
Spurious correlations can cause strong biases in deep neural networks, impairing generalization ability.
We propose a self-supervised debiasing framework potentially compatible with unlabeled samples.
Remarkably, the proposed debiasing framework significantly improves the generalization performance of self-supervised learning baselines.
arXiv Detail & Related papers (2022-10-11T08:26:19Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
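The summary does not reproduce CPR's objective; as background, the standard pairwise ranking loss (BPR) that such methods build on can be sketched as follows. This is an illustrative baseline, not the CPR loss itself, and all names are hypothetical.

```python
import numpy as np

def bpr_loss(user_vec, pos_item_vec, neg_item_vec):
    """Standard pairwise ranking loss: push the score of an
    observed (positive) item above that of an unobserved
    (negative) item for the same user."""
    pos_score = user_vec @ pos_item_vec
    neg_score = user_vec @ neg_item_vec
    # -log sigmoid(pos - neg); small when pos_score >> neg_score
    return -np.log(1.0 / (1.0 + np.exp(-(pos_score - neg_score))))

u = np.array([0.5, 1.0])
good = np.array([0.6, 0.9])   # item the user interacted with
bad = np.array([-0.2, 0.1])   # sampled negative item
print(bpr_loss(u, good, bad))  # small loss: ranking is already correct
```

Vanilla BPR of this form inherits exposure bias from the observed positives, which is exactly the issue CPR targets without requiring the exposure mechanism.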
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
- The SAME score: Improved cosine based bias score for word embeddings [49.75878234192369]
We introduce SAME, a novel bias score for semantic bias in embeddings.
We show that SAME is capable of measuring semantic bias and identify potential causes for social bias in downstream tasks.
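SAME's exact formula is not given in the summary; a minimal cosine-based bias score of the kind it improves upon (difference of cosine similarities to two attribute directions) might look like the following sketch. Vectors and names here are toy stand-ins, not real embeddings.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def direct_bias(word_vec, attr_a, attr_b):
    """Simple cosine-based bias score: positive values mean the
    word leans toward attribute A, negative toward attribute B,
    and zero means it is equidistant from both."""
    return cosine(word_vec, attr_a) - cosine(word_vec, attr_b)

# Toy vectors standing in for a target word and two
# attribute-group centroids.
word = np.array([0.9, 0.1])
attr_a = np.array([1.0, 0.0])
attr_b = np.array([0.0, 1.0])
print(direct_bias(word, attr_a, attr_b))  # > 0: leans toward A
```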
arXiv Detail & Related papers (2022-03-28T09:28:13Z)
- Towards Automatic Bias Detection in Knowledge Graphs [5.402498799294428]
We describe a framework for identifying biases in knowledge graph embeddings, based on numerical bias metrics.
We illustrate the framework with three different bias measures on the task of profession prediction.
The relations flagged as biased can then be handed to decision makers for judgement upon subsequent debiasing.
arXiv Detail & Related papers (2021-09-19T03:58:25Z)
- Uncovering Latent Biases in Text: Method and Application to Peer Review [38.726731935235584]
We introduce a novel framework to quantify bias in text caused by the visibility of subgroup membership indicators.
We apply our framework to quantify biases in the text of peer reviews from a reputed machine learning conference.
arXiv Detail & Related papers (2020-10-29T01:24:19Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.