Balancing Unobserved Confounding with a Few Unbiased Ratings in Debiased Recommendations
- URL: http://arxiv.org/abs/2304.09085v1
- Date: Mon, 17 Apr 2023 08:56:55 GMT
- Title: Balancing Unobserved Confounding with a Few Unbiased Ratings in Debiased Recommendations
- Authors: Haoxuan Li, Yanghao Xiao, Chunyuan Zheng, Peng Wu
- Abstract summary: We propose a theoretically guaranteed model-agnostic balancing approach that can be applied to any existing debiasing method.
The proposed approach makes full use of unbiased data by alternately correcting model parameters learned with biased data and adaptively learning balance coefficients of biased samples for further debiasing.
- Score: 4.960902915238239
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Recommender systems are seen as an effective tool to address information
overload, but it is widely known that various biases cause direct training on
large-scale observational data to result in sub-optimal prediction performance.
In contrast, unbiased ratings obtained from randomized controlled trials or A/B
tests are considered the gold standard, but in reality they are costly to collect
and small in scale. To exploit both types of data, recent
works proposed to use unbiased ratings to correct the parameters of the
propensity or imputation models trained on the biased dataset. However, the
existing methods fail to obtain accurate predictions in the presence of
unobserved confounding or model misspecification. In this paper, we propose a
theoretically guaranteed model-agnostic balancing approach that can be applied
to any existing debiasing method with the aim of combating unobserved
confounding and model misspecification. The proposed approach makes full use of
unbiased data by alternately correcting model parameters learned with biased
data and adaptively learning balance coefficients of biased samples for further
debiasing. Extensive experiments on real-world datasets, together with the
deployment of our approach on four representative debiasing methods, demonstrate
its effectiveness.
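To make the alternating scheme concrete, below is a minimal PyTorch sketch (not the authors' code): the prediction model is updated on biased data reweighted by learnable balance coefficients, and the coefficients are then adjusted so that the reweighted biased loss tracks the loss on the small unbiased sample. The function name `alternating_balancing_step`, the `balance_logits` parameterization, the squared-difference balancing objective, and the full-batch setup are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of the alternating balancing idea; all names and losses
# are assumptions for exposition, not the paper's actual objective.
import torch


def alternating_balancing_step(model, biased_batch, unbiased_batch,
                               balance_logits, opt_model, opt_balance):
    """One alternation: (1) update the model on reweighted biased data,
    (2) adapt the balance coefficients against the unbiased data."""
    x_b, y_b = biased_batch      # ratings collected under exposure/selection bias
    x_u, y_u = unbiased_batch    # small sample from a randomized experiment

    # Step 1: model update with the balance coefficients held fixed.
    w = torch.softmax(balance_logits, dim=0).detach() * y_b.numel()
    loss_model = (w * (model(x_b).squeeze(-1) - y_b).pow(2)).mean()
    opt_model.zero_grad()
    loss_model.backward()
    opt_model.step()

    # Step 2: adapt the balance coefficients so that the reweighted biased
    # loss matches the loss measured on the unbiased sample.
    w = torch.softmax(balance_logits, dim=0) * y_b.numel()
    loss_biased = (w * (model(x_b).squeeze(-1) - y_b).pow(2)).mean()
    loss_unbiased = (model(x_u).squeeze(-1) - y_u).pow(2).mean().detach()
    loss_balance = (loss_biased - loss_unbiased).pow(2)
    opt_balance.zero_grad()
    loss_balance.backward()
    opt_balance.step()
    return loss_model.item(), loss_balance.item()


# Example (full-batch for simplicity): one learnable coefficient per biased sample.
model = torch.nn.Linear(8, 1)
x_b, y_b = torch.randn(256, 8), torch.randn(256)   # biased observational data
x_u, y_u = torch.randn(32, 8), torch.randn(32)     # small unbiased sample
balance_logits = torch.nn.Parameter(torch.zeros(256))
opt_model = torch.optim.Adam(model.parameters(), lr=1e-2)
opt_balance = torch.optim.Adam([balance_logits], lr=1e-2)
for _ in range(10):
    alternating_balancing_step(model, (x_b, y_b), (x_u, y_u),
                               balance_logits, opt_model, opt_balance)
```

The same reweighting can sit on top of any base debiasing loss (e.g., an IPS or doubly robust objective) in place of the plain squared error used here, which is what makes the approach model-agnostic in spirit.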
Related papers
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- Inference-Time Selective Debiasing [27.578390085427156]
We propose selective debiasing -- an inference-time safety mechanism that aims to increase the overall quality of models.
We identify the potentially biased model predictions and, instead of discarding them, we debias them using LEACE -- a post-processing debiasing method.
Experiments with text classification datasets demonstrate that selective debiasing helps to close the performance gap between post-processing methods and at-training and pre-processing debiasing techniques.
arXiv Detail & Related papers (2024-07-27T21:56:23Z)
- Looking at Model Debiasing through the Lens of Anomaly Detection [11.113718994341733]
Deep neural networks are sensitive to bias in the data.
We propose a new bias identification method based on anomaly detection.
We reach state-of-the-art performance on synthetic and real benchmark datasets.
arXiv Detail & Related papers (2024-07-24T17:30:21Z)
- Debiased Recommendation with Noisy Feedback [41.38490962524047]
We study the intersectional threats to unbiased learning of the prediction model arising from data missing not at random (MNAR) and outcome measurement errors (OME) in the collected data.
First, we design OME-EIB, OME-IPS, and OME-DR estimators, which largely extend the existing estimators to combat OME in real-world recommendation scenarios.
arXiv Detail & Related papers (2024-06-24T23:42:18Z)
- Improving Bias Mitigation through Bias Experts in Natural Language Understanding [10.363406065066538]
We propose a new debiasing framework that introduces binary classifiers between the auxiliary model and the main model.
Our proposed strategy improves the bias identification ability of the auxiliary model.
arXiv Detail & Related papers (2023-12-06T16:15:00Z)
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Delving into Identify-Emphasize Paradigm for Combating Unknown Bias [52.76758938921129]
We propose an effective bias-conflicting scoring method (ECS) to boost the identification accuracy.
We also propose gradient alignment (GA) to balance the contributions of the mined bias-aligned and bias-conflicting samples.
Experiments are conducted on multiple datasets in various settings, demonstrating that the proposed solution can mitigate the impact of unknown biases.
arXiv Detail & Related papers (2023-02-22T14:50:24Z)
- Feature-Level Debiased Natural Language Understanding [86.8751772146264]
Existing natural language understanding (NLU) models often rely on dataset biases to achieve high performance on specific datasets.
We propose debiasing contrastive learning (DCT) to mitigate biased latent features and account for the dynamic nature of bias, which existing methods neglect.
DCT outperforms state-of-the-art baselines on out-of-distribution datasets while maintaining in-distribution performance.
arXiv Detail & Related papers (2022-12-11T06:16:14Z)
- Information-Theoretic Bias Reduction via Causal View of Spurious Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against the algorithmic bias, which incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
arXiv Detail & Related papers (2022-01-10T01:19:31Z)
- Uncertainty Calibration for Ensemble-Based Debiasing Methods [27.800387167841972]
In this paper, we focus on the bias-only model in ensemble-based debiasing methods.
We show that the debiasing performance can be damaged by inaccurate uncertainty estimations of the bias-only model.
Motivated by these findings, we propose to conduct calibration on the bias-only model, thus achieving a three-stage ensemble-based debiasing framework.
arXiv Detail & Related papers (2021-11-07T15:13:32Z)
- Towards Debiasing NLU Models from Unknown Biases [70.31427277842239]
NLU models often exploit biases to achieve high dataset-specific performance without properly learning the intended task.
We present a self-debiasing framework that prevents models from mainly utilizing biases without knowing them in advance.
arXiv Detail & Related papers (2020-09-25T15:49:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.