A Consistent Estimator for Confounding Strength
- URL: http://arxiv.org/abs/2211.01903v1
- Date: Thu, 3 Nov 2022 15:34:33 GMT
- Title: A Consistent Estimator for Confounding Strength
- Authors: Luca Rendsburg, Leena Chennuru Vankadara, Debarghya Ghoshdastidar,
Ulrike von Luxburg
- Abstract summary: We derive the behavior of the confounding strength estimator by Janzing and Schölkopf.
We then use tools from random matrix theory to derive an adapted, consistent estimator.
- Score: 21.443297599122058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Regression on observational data can fail to capture a causal relationship in
the presence of unobserved confounding. Confounding strength measures this
mismatch, but estimating it itself requires additional assumptions. A common
assumption is the independence of causal mechanisms, which relies on
concentration phenomena in high dimensions. While high dimensions enable the
estimation of confounding strength, they also necessitate adapted estimators.
In this paper, we derive the asymptotic behavior of the confounding strength
estimator by Janzing and Schölkopf (2018) and show that it is generally not
consistent. We then use tools from random matrix theory to derive an adapted,
consistent estimator.
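The failure mode the abstract describes can be illustrated with a minimal simulation (a generic sketch of linear confounding, not the paper's model or estimator): when an unobserved confounder Z drives both X and Y, regressing Y on X alone does not recover the causal coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear model with an unobserved confounder Z driving both X and Y:
#   X = Z @ M.T + E,   Y = X @ b + Z @ c + noise
# Regressing Y on X alone mixes the causal effect b with a confounding
# term, so ordinary least squares is biased away from b.
d, k, n = 5, 3, 100_000          # dims of X and Z, sample size
M = rng.normal(size=(d, k))      # confounder-to-cause mixing (hypothetical)
b = rng.normal(size=d)           # true causal effect of X on Y
c = rng.normal(size=k)           # direct effect of the confounder on Y

Z = rng.normal(size=(n, k))                      # unobserved confounder
X = Z @ M.T + rng.normal(size=(n, d))            # observed cause
Y = X @ b + Z @ c + 0.1 * rng.normal(size=n)     # observed effect

# OLS fit of Y on X, ignoring the unobserved Z.
b_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)

print("true b :", np.round(b, 2))
print("OLS    :", np.round(b_ols, 2))
print("bias   :", np.round(np.linalg.norm(b_ols - b), 2))
```

The gap between `b_ols` and `b` is the mismatch that confounding strength quantifies; estimating it from (X, Y) alone is what requires the additional assumptions discussed in the abstract.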
Related papers
- On the Calibration of Epistemic Uncertainty: Principles, Paradoxes and Conflictual Loss [3.8248583585487155]
Evidential uncertainty is produced by Deep Ensembles, Bayesian Deep Networks, or Evidential Deep Networks.
Although measurable, this form of uncertainty is difficult to calibrate on an objective basis.
We propose a regularization function for deep ensembles, called the conflictual loss, designed to meet the above calibration requirements.
arXiv Detail & Related papers (2024-07-16T23:21:28Z) - Seeing is not Believing: Robust Reinforcement Learning against Spurious
Correlation [57.351098530477124]
We consider one critical type of robustness against spurious correlation, where different portions of the state are not causally related but exhibit correlations induced by unobserved confounders.
A model that learns such useless or even harmful correlation could catastrophically fail when the confounder in the test case deviates from the training one.
Existing robust algorithms that assume simple and unstructured uncertainty sets are therefore inadequate to address this challenge.
arXiv Detail & Related papers (2023-07-15T23:53:37Z) - Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z) - Monotonicity and Double Descent in Uncertainty Estimation with Gaussian
Processes [52.92110730286403]
It is commonly believed that the marginal likelihood should be reminiscent of cross-validation metrics and that both should deteriorate with larger input dimensions.
We prove that by tuning hyperparameters, the performance, as measured by the marginal likelihood, improves monotonically with the input dimension.
We also prove that cross-validation metrics exhibit qualitatively different behavior that is characteristic of double descent.
arXiv Detail & Related papers (2022-10-14T08:09:33Z) - Discovering Latent Causal Variables via Mechanism Sparsity: A New
Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods which formalize the goal of recovering latent variables and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z) - On Feature Decorrelation in Self-Supervised Learning [15.555208840500086]
We study a framework containing the most common components from recent approaches.
We connect dimensional collapse with strong correlations between axes and consider such connection as a strong motivation for feature decorrelation.
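The decorrelation idea above can be sketched as an off-diagonal covariance penalty (a generic sketch of feature decorrelation, not the paper's exact objective):

```python
import numpy as np

def decorrelation_penalty(features):
    """Sum of squared off-diagonal entries of the feature covariance.

    A collapsed representation has strongly correlated axes, so driving
    the off-diagonal covariance toward zero discourages dimensional
    collapse. Generic sketch; not the paper's exact formulation.
    """
    f = features - features.mean(axis=0)       # center each dimension
    cov = (f.T @ f) / (len(f) - 1)             # sample covariance matrix
    off_diag = cov - np.diag(np.diag(cov))     # zero out the diagonal
    return np.sum(off_diag ** 2)

rng = np.random.default_rng(0)
# Independent axes incur a near-zero penalty ...
independent = rng.normal(size=(1000, 8))
# ... while duplicated axes (dimensional collapse) are penalized heavily.
collapsed = np.repeat(rng.normal(size=(1000, 1)), 8, axis=1)
print(decorrelation_penalty(independent), decorrelation_penalty(collapsed))
```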
arXiv Detail & Related papers (2021-05-02T13:28:18Z) - Don't Just Blame Over-parametrization for Over-confidence: Theoretical
Analysis of Calibration in Binary Classification [58.03725169462616]
We show theoretically that over-parametrization is not the only reason for over-confidence.
We prove that logistic regression is inherently over-confident, in the realizable, under-parametrized setting.
Perhaps surprisingly, we also show that over-confidence is not always the case.
arXiv Detail & Related papers (2021-02-15T21:38:09Z) - The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z) - Improving Nonparametric Density Estimation with Tensor Decompositions [14.917420021212912]
Nonparametric density estimators often perform well on low dimensional data, but suffer when applied to higher dimensional data.
We prove that restricting estimation to low-rank nonnegative PARAFAC or Tucker decompositions removes the dimensionality exponent on bin width rates for multidimensional histograms.
This paper also investigates whether these improvements can be extended to other simplified dependence assumptions.
arXiv Detail & Related papers (2020-10-06T01:39:09Z) - A Universal Formulation of Uncertainty Relation for Error and
Disturbance [0.9479435599284545]
We present a universal formulation of the uncertainty relation valid for any conceivable quantum measurement.
Owing to its simplicity and operational tangibility, our general relation is also experimentally verifiable.
arXiv Detail & Related papers (2020-04-13T17:57:41Z) - Geometric Formulation of Universally Valid Uncertainty Relation for
Error [1.696974372855528]
We present a new geometric formulation of the uncertainty relation valid for any quantum measurement of statistical nature.
Owing to its simplicity and tangibility, our relation is universally valid and experimentally viable.
arXiv Detail & Related papers (2020-02-10T18:31:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.