Causality-aware counterfactual confounding adjustment as an alternative
to linear residualization in anticausal prediction tasks based on linear
learners
- URL: http://arxiv.org/abs/2011.04605v1
- Date: Mon, 9 Nov 2020 17:59:57 GMT
- Title: Causality-aware counterfactual confounding adjustment as an alternative
to linear residualization in anticausal prediction tasks based on linear
learners
- Authors: Elias Chaibub Neto
- Abstract summary: We compare the linear residualization approach against the causality-aware confounding adjustment in anticausal prediction tasks.
We show that the causality-aware approach tends to (asymptotically) outperform the residualization adjustment in terms of predictive performance in linear learners.
- Score: 14.554818659491644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Linear residualization is a common practice for confounding adjustment in
machine learning (ML) applications. Recently, causality-aware predictive
modeling has been proposed as an alternative causality-inspired approach for
adjusting for confounders. The basic idea is to simulate counterfactual data
that is free from the spurious associations generated by the observed
confounders. In this paper, we compare the linear residualization approach
against the causality-aware confounding adjustment in anticausal prediction
tasks, and show that the causality-aware approach tends to (asymptotically)
outperform the residualization adjustment in terms of predictive performance in
linear learners. Importantly, our results still hold even when the true model
is not linear. We illustrate our results in both regression and classification
tasks, comparing the causality-aware and residualization approaches using
mean squared error and classification accuracy in synthetic data experiments
where the linear regression model is misspecified, as well as when the linear
model is correctly specified. Furthermore, we illustrate how the
causality-aware approach is more stable than residualization with respect to
dataset shifts in the joint distribution of the confounders and outcome
variables.
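The comparison described in the abstract can be illustrated with a toy simulation. The sketch below assumes a simple linear anticausal model (a confounder C influences the outcome Y, and both Y and C generate the feature X); it is an illustrative simplification, not the paper's exact estimation procedure. Residualization regresses X on C and keeps the residuals, discarding all C-related variation, while the causality-aware-style adjustment removes only the estimated direct C-to-X effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Assumed anticausal data-generating model: the confounder C influences the
# outcome Y, and both Y and C generate the feature X (Y -> X, C -> X).
c = rng.normal(size=n)                       # observed confounder
y = 2.0 * c + rng.normal(size=n)             # outcome, confounded by C
x = 1.5 * y + 0.8 * c + rng.normal(size=n)   # anticausal feature

# Linear residualization: regress X on C and keep the residuals, which
# discards *all* C-related variation in X, including the part that flows
# through Y and therefore carries predictive signal.
slope_int = np.polyfit(c, x, 1)
x_resid = x - np.polyval(slope_int, c)
w_resid = np.polyfit(x_resid, y, 1)
pred_resid = np.polyval(w_resid, x_resid)

# Causality-aware-style adjustment (sketch): estimate the direct C -> X
# effect from a regression of X on (Y, C) and subtract only that effect,
# yielding "counterfactual" features that keep the Y-related signal.
A = np.column_stack([y, c, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, x, rcond=None)
x_cf = x - coef[1] * c
w_cf = np.polyfit(x_cf, y, 1)
pred_cf = np.polyval(w_cf, x_cf)

mse = lambda p: float(np.mean((y - p) ** 2))
print(f"MSE, residualization: {mse(pred_resid):.3f}")
print(f"MSE, causality-aware: {mse(pred_cf):.3f}")
```

In this setup the residualized features lose the confounder-mediated part of the outcome signal, so the counterfactual features yield a noticeably lower mean squared error, consistent with the asymptotic result stated above.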
Related papers
- Bayesian Inference for Consistent Predictions in Overparameterized Nonlinear Regression [0.0]
This study explores the predictive properties of overparameterized nonlinear regression within the Bayesian framework.
Posterior contraction is established for generalized linear and single-neuron models with Lipschitz continuous activation functions.
The proposed method was validated via numerical simulations and a real data application.
arXiv Detail & Related papers (2024-04-06T04:22:48Z) - Adaptive Optimization for Prediction with Missing Data [6.800113478497425]
We show that some adaptive linear regression models are equivalent to learning an imputation rule and a downstream linear regression model simultaneously.
In settings where data is strongly not missing at random, our methods achieve a 2-10% improvement in out-of-sample accuracy.
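The stated equivalence can be seen in a small linear sketch: appending a missingness indicator to the design matrix lets ordinary least squares learn an imputation constant and the regression weights jointly. This is an illustrative toy under random missingness, not the paper's method, and all names and constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)
miss = rng.random(n) < 0.3             # 30% of the feature is missing

# Zero-fill the missing entries, then add the missingness indicator as a
# feature: fitting y ~ b*x_obs + a*miss + c is the same as imputing the
# constant a/b for the missing x values and fitting y ~ b*x + c, so the
# imputation rule is learned jointly with the downstream model.
x_obs = np.where(miss, 0.0, x)
A = np.column_stack([x_obs, miss.astype(float), np.ones(n)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
slope, a, intercept = coef
print(f"slope={slope:.2f}, implied imputation value={a / slope:.2f}")
```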
arXiv Detail & Related papers (2024-02-02T16:35:51Z) - Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed procedure accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
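The abstention idea can be sketched with a plug-in variance threshold. Note that the paper's contribution is a hypothesis test that also accounts for the variance estimator's own uncertainty, which this simpler sketch omits; the k-NN estimator, threshold, and data below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(-2, 2, size=n)
# Heteroscedastic regression data: the noise level grows with |x|.
y = np.sin(x) + rng.normal(scale=0.1 + 0.5 * np.abs(x), size=n)

def knn_stats(x0, k=50):
    """k-NN plug-in estimates of the conditional mean and variance at x0."""
    idx = np.argsort(np.abs(x - x0))[:k]
    return y[idx].mean(), y[idx].var()

tau = 0.15                     # variance threshold for abstention
accepted, abstained = [], 0
for x0 in np.linspace(-2, 2, 41):
    mu, var = knn_stats(x0)
    if var > tau:
        abstained += 1         # reject: conditional variance too high here
    else:
        accepted.append((x0, mu))

print(f"predicted at {len(accepted)} points, abstained at {abstained}")
```

On this data the procedure predicts only in the low-noise region around zero and abstains where the conditional variance is large.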
arXiv Detail & Related papers (2023-09-28T13:04:11Z) - Kalman Filter for Online Classification of Non-Stationary Data [101.26838049872651]
In Online Continual Learning (OCL) a learning system receives a stream of data and sequentially performs prediction and training steps.
We introduce a probabilistic Bayesian online learning model by using a neural representation and a state space model over the linear predictor weights.
In experiments in multi-class classification we demonstrate the predictive ability of the model and its flexibility to capture non-stationarity.
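The state-space idea over the linear predictor weights can be sketched with a plain random-walk Kalman filter on a regression stream containing an abrupt shift; this is a generic illustration of filtering over weights, not the paper's neural-representation model, and the noise constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3
# State-space model over the linear predictor weights:
#   w_t = w_{t-1} + process noise,   y_t = x_t . w_t + observation noise
w_true = rng.normal(size=d)
mu = np.zeros(d)               # posterior mean of the weights
P = np.eye(d)                  # posterior covariance
q, r = 1e-2, 0.25              # process / observation noise variances

errs = []
for t in range(500):
    x = rng.normal(size=d)
    if t == 250:
        w_true = rng.normal(size=d)     # abrupt non-stationary shift
    y = x @ w_true + rng.normal(scale=np.sqrt(r))

    P = P + q * np.eye(d)               # predict: random-walk drift
    s = x @ P @ x + r                   # innovation variance (scalar obs.)
    k = P @ x / s                       # Kalman gain
    err = y - x @ mu
    mu = mu + k * err                   # update the posterior mean
    P = P - np.outer(k, x @ P)          # update the posterior covariance
    errs.append(err ** 2)

print(f"mean squared one-step error, last 100 steps: {np.mean(errs[-100:]):.3f}")
```

The process-noise term keeps the posterior covariance from collapsing, so the filter re-adapts after the shift at step 250 instead of freezing on the stale weights.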
arXiv Detail & Related papers (2023-06-14T11:41:42Z) - Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
arXiv Detail & Related papers (2023-02-23T18:57:14Z) - Variation-Incentive Loss Re-weighting for Regression Analysis on Biased
Data [8.115323786541078]
We aim to improve the accuracy of regression analysis by addressing data skewness/bias during model training.
We propose a Variation-Incentive Loss re-weighting method (VILoss) to optimize the gradient descent-based model training for regression analysis.
arXiv Detail & Related papers (2021-09-14T10:22:21Z) - Estimation of Bivariate Structural Causal Models by Variational Gaussian
Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z) - CASTLE: Regularization via Auxiliary Causal Graph Discovery [89.74800176981842]
We introduce Causal Structure Learning (CASTLE) regularization and propose to regularize a neural network by jointly learning the causal relationships between variables.
CASTLE efficiently reconstructs only the features in the causal DAG that have a causal neighbor, whereas reconstruction-based regularizers suboptimally reconstruct all input features.
arXiv Detail & Related papers (2020-09-28T09:49:38Z) - Semi-Supervised Empirical Risk Minimization: Using unlabeled data to
improve prediction [4.860671253873579]
We present a general methodology for using unlabeled data to design semi-supervised learning (SSL) variants of the Empirical Risk Minimization (ERM) learning process.
We analyze the effectiveness of our SSL approach in improving prediction performance.
arXiv Detail & Related papers (2020-09-01T17:55:51Z) - Accounting for Unobserved Confounding in Domain Generalization [107.0464488046289]
This paper investigates the problem of learning robust, generalizable prediction models from a combination of datasets.
Part of the challenge of learning robust models lies in the influence of unobserved confounders.
We demonstrate the empirical performance of our approach on healthcare data from different modalities.
arXiv Detail & Related papers (2020-07-21T08:18:06Z) - A Locally Adaptive Interpretable Regression [7.4267694612331905]
Linear regression is one of the most interpretable prediction models.
In this work, we introduce a locally adaptive interpretable regression (LoAIR)
Our model achieves comparable or better predictive performance than the other state-of-the-art baselines.
arXiv Detail & Related papers (2020-05-07T09:26:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.