Debiasing Recommendation by Learning Identifiable Latent Confounders
- URL: http://arxiv.org/abs/2302.05052v2
- Date: Thu, 15 Jun 2023 08:21:32 GMT
- Title: Debiasing Recommendation by Learning Identifiable Latent Confounders
- Authors: Qing Zhang, Xiaoying Zhang, Yang Liu, Hongning Wang, Min Gao, Jiheng
Zhang, Ruocheng Guo
- Abstract summary: Confounding bias arises due to the presence of unmeasured variables that can affect both a user's exposure and feedback.
Existing methods either (1) make untenable assumptions about these unmeasured variables or (2) directly infer latent confounders from users' exposure.
We propose a novel method, i.e., identifiable deconfounder (iDCF), which leverages a set of proxy variables to resolve the aforementioned non-identification issue.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recommendation systems aim to predict users' feedback on items not exposed to
them.
Confounding bias arises due to the presence of unmeasured variables (e.g.,
the socio-economic status of a user) that can affect both a user's exposure and
feedback. Existing methods either (1) make untenable assumptions about these
unmeasured variables or (2) directly infer latent confounders from users'
exposure. However, they cannot guarantee the identification of counterfactual
feedback, which can lead to biased predictions. In this work, we propose a
novel method, i.e., identifiable deconfounder (iDCF), which leverages a set of
proxy variables (e.g., observed user features) to resolve the aforementioned
non-identification issue. The proposed iDCF is a general deconfounded
recommendation framework that applies proximal causal inference to infer the
unmeasured confounders and identify the counterfactual feedback with
theoretical guarantees. Extensive experiments on various real-world and
synthetic datasets verify the proposed method's effectiveness and robustness.
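To make the confounding problem concrete, here is a small numeric sketch. This is not the authors' iDCF implementation (iDCF applies proximal causal inference to infer the latent confounder with identifiability guarantees); it only illustrates, with made-up parameters, why a proxy of the unmeasured confounder helps: a latent variable drives both exposure and feedback, the naive contrast is biased, and stratifying on an observed noisy proxy shrinks that bias.

```python
# Toy illustration of confounding bias and crude proxy-based adjustment.
# NOT the iDCF method; all variables and probabilities are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

z = rng.random(n) < 0.5            # unmeasured confounder (e.g. socio-economic status)
a = rng.random(n) < 0.2 + 0.6 * z  # exposure depends on z
y = rng.random(n) < 0.3 + 0.4 * z  # feedback depends on z only: true effect of a is 0
w = rng.random(n) < 0.1 + 0.8 * z  # observed noisy proxy of z

# Naive estimate E[Y|A=1] - E[Y|A=0]: pulled away from 0 by the confounder.
naive = y[a].mean() - y[~a].mean()

# Crude adjustment: stratify on the proxy w, average the within-stratum
# contrasts weighted by P(w). Bias shrinks but does not vanish, because w
# is only a noisy measurement of z -- hence the need for methods like iDCF
# that identify the latent confounder itself.
adjusted = sum(
    (y[a & (w == v)].mean() - y[~a & (w == v)].mean()) * (w == v).mean()
    for v in (False, True)
)

print(f"naive: {naive:.3f}, proxy-adjusted: {adjusted:.3f}")  # true effect is 0
```

With these parameters the naive contrast is far from the true (zero) effect, while the proxy-stratified estimate is noticeably closer.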
Related papers
- Confidence Aware Learning for Reliable Face Anti-spoofing [52.23271636362843]
We propose a Confidence Aware Face Anti-spoofing model, which is aware of its capability boundary.
We estimate its confidence during the prediction of each sample.
Experiments show that the proposed CA-FAS can effectively recognize samples with low prediction confidence.
arXiv Detail & Related papers (2024-11-02T14:29:02Z)
- Accounting for Sycophancy in Language Model Uncertainty Estimation [28.08509288774144]
We study the relationship between sycophancy and uncertainty estimation for the first time.
We show that user confidence plays a critical role in modulating the effects of sycophancy.
We argue that externalizing both model and user uncertainty can help to mitigate the impacts of sycophancy bias.
arXiv Detail & Related papers (2024-10-17T18:00:25Z)
- Debiased Recommendation with Noisy Feedback [41.38490962524047]
We study the joint threats to unbiased learning of the prediction model posed by data missing not at random (MNAR) and outcome measurement errors (OME) in the collected data.
First, we design OME-EIB, OME-IPS, and OME-DR estimators, which largely extend the existing estimators to combat OME in real-world recommendation scenarios.
arXiv Detail & Related papers (2024-06-24T23:42:18Z)
- Efficient Conformal Prediction under Data Heterogeneity [79.35418041861327]
Conformal Prediction (CP) stands out as a robust framework for uncertainty quantification.
Existing approaches for tackling non-exchangeability lead to methods that are not computable beyond the simplest examples.
This work introduces a new efficient approach to CP that produces provably valid confidence sets for fairly general non-exchangeable data distributions.
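For intuition about what a "provably valid confidence set" means, the following is a minimal split conformal regression sketch under the standard exchangeability assumption. The paper's contribution is extending such guarantees to non-exchangeable data, which this toy example (with assumed synthetic data and a simple least-squares model) does not attempt.

```python
# Split conformal prediction for regression under exchangeability -- a
# baseline illustration only; data and model choices here are assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: y = 2x + Gaussian noise.
x = rng.uniform(0, 1, 2000)
y = 2 * x + rng.normal(0, 0.3, 2000)

# Split into a fit set and a calibration set.
x_fit, y_fit = x[:1000], y[:1000]
x_cal, y_cal = x[1000:], y[1000:]

# Fit a simple least-squares line on the fit split.
slope, intercept = np.polyfit(x_fit, y_fit, 1)

def predict(t):
    return slope * t + intercept

# Calibration: absolute residuals, then the ceil((n+1)(1-alpha))-th smallest
# score. Under exchangeability the resulting interval covers a new y with
# probability at least 1 - alpha.
alpha = 0.1
scores = np.sort(np.abs(y_cal - predict(x_cal)))
k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
q = scores[k - 1]

# Prediction set for a new point: [prediction - q, prediction + q].
x_new = 0.5
lo, hi = predict(x_new) - q, predict(x_new) + q
print(f"90% prediction interval at x={x_new}: [{lo:.2f}, {hi:.2f}]")
```

The guarantee is distribution-free but leans on exchangeability of calibration and test points; relaxing that assumption is exactly where methods like the one above become necessary.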
arXiv Detail & Related papers (2023-12-25T20:02:51Z)
- Separating and Learning Latent Confounders to Enhancing User Preferences Modeling [6.0853798070913845]
We propose a novel framework, Separating and Learning Latent Confounders For Recommendation (SLFR).
SLFR obtains the representation of unmeasured confounders to identify the counterfactual feedback by disentangling user preferences and unmeasured confounders.
Experiments in five real-world datasets validate the advantages of our method.
arXiv Detail & Related papers (2023-11-02T08:42:50Z)
- Uncertainty-Aware Instance Reweighting for Off-Policy Learning [63.31923483172859]
We propose an Uncertainty-aware Inverse Propensity Score estimator (UIPS) for improved off-policy learning.
Experiment results on synthetic and three real-world recommendation datasets demonstrate the advantageous sample efficiency of the proposed UIPS estimator.
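As background for UIPS, here is a sketch of the vanilla inverse propensity score (IPS) estimator it builds on, with simple propensity clipping as a stand-in for variance control. The actual UIPS estimator reweights samples by the uncertainty of the estimated propensities and is not reproduced here; all numbers below are illustrative assumptions.

```python
# Vanilla IPS for off-policy reward estimation, plus propensity clipping as
# a simple variance-control baseline. NOT the UIPS estimator itself.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Logging policy exposes items with known propensities p; rewards are
# observed only for exposed items.
p = rng.uniform(0.05, 0.9, n)            # propensity of exposure
exposed = rng.random(n) < p
r = (rng.random(n) < 0.4).astype(float)  # true mean reward 0.4

# IPS: reweight observed rewards by 1/p. Unbiased, but small propensities
# inflate the variance of individual terms.
ips = np.mean(exposed * r / p)

# Clipped IPS: cap the weights, trading a little bias for lower variance.
ips_clipped = np.mean(exposed * r / np.maximum(p, 0.2))

print(f"IPS: {ips:.3f}, clipped IPS: {ips_clipped:.3f}")  # target: 0.400
```

Clipping is the crudest form of the bias-variance trade-off that uncertainty-aware reweighting schemes like UIPS manage in a more principled way.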
arXiv Detail & Related papers (2023-03-11T11:42:26Z)
- Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
Because the chance of mislabeling reflects a user-item pair's potential, AUR makes recommendations according to the estimated uncertainty.
arXiv Detail & Related papers (2022-09-22T04:32:51Z)
- Deep Causal Reasoning for Recommendations [47.83224399498504]
A new trend in recommender system research is to mitigate the influence of confounders from a causal perspective.
We model the recommendation as a multi-cause multi-outcome (MCMO) inference problem.
We show that MCMO modeling may lead to high variance due to scarce observations associated with the high-dimensional causal space.
arXiv Detail & Related papers (2022-01-06T15:00:01Z)
- Adversarial Counterfactual Learning and Evaluation for Recommender System [33.44276155380476]
We show in theory that applying supervised learning to detect user preferences may end up with inconsistent results in the absence of exposure information.
We propose a principled solution by introducing a minimax empirical risk formulation.
arXiv Detail & Related papers (2020-11-08T00:40:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.