Uncovering Bias Mechanisms in Observational Studies
- URL: http://arxiv.org/abs/2506.01191v1
- Date: Sun, 01 Jun 2025 21:58:09 GMT
- Title: Uncovering Bias Mechanisms in Observational Studies
- Authors: Ilker Demirel, Zeshan Hussain, Piersilvio De Bartolomeis, David Sontag,
- Abstract summary: We show that the relationship between bias magnitude and the predictive performance of nuisance function estimators can help distinguish among common sources of causal bias. Our framework offers a new lens for understanding and characterizing bias in observational studies, with practical implications for improving causal inference.
- Score: 6.085935341047458
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Observational studies are a key resource for causal inference but are often affected by systematic biases. Prior work has focused mainly on detecting these biases, via sensitivity analyses and comparisons with randomized controlled trials, or mitigating them through debiasing techniques. However, there remains a lack of methodology for uncovering the underlying mechanisms driving these biases, e.g., whether due to hidden confounding or selection of participants. In this work, we show that the relationship between bias magnitude and the predictive performance of nuisance function estimators (in the observational study) can help distinguish among common sources of causal bias. We validate our methodology through extensive synthetic experiments and a real-world case study, demonstrating its effectiveness in revealing the mechanisms behind observed biases. Our framework offers a new lens for understanding and characterizing bias in observational studies, with practical implications for improving causal inference.
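The abstract does not spell out an algorithm, but the quantities it relates can be illustrated with a short sketch: a doubly robust (AIPW) estimate of the average treatment effect from the observational study, its gap to an external RCT benchmark as a proxy for bias magnitude, and standard predictive metrics for the nuisance function estimators. The function name `bias_vs_nuisance_fit`, the choice of AIPW, and the specific metrics below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.metrics import roc_auc_score, r2_score

def bias_vs_nuisance_fit(X, a, y, rct_ate):
    """X: covariates (n, d); a: binary treatment (0/1); y: outcome; rct_ate: benchmark ATE.

    Hypothetical illustration, not the paper's algorithm: report the bias of a
    doubly robust observational estimate alongside how well the nuisance
    functions (propensity and outcome models) predict on the same data.
    """
    # Nuisance functions: propensity e(x) = P(A=1 | X) and outcome models m_a(x) = E[Y | A=a, X].
    # A full implementation would cross-fit these; this sketch fits them in-sample.
    e_hat = LogisticRegression(max_iter=1000).fit(X, a).predict_proba(X)[:, 1]
    e_hat = np.clip(e_hat, 1e-3, 1 - 1e-3)
    m1 = LinearRegression().fit(X[a == 1], y[a == 1]).predict(X)
    m0 = LinearRegression().fit(X[a == 0], y[a == 0]).predict(X)

    # AIPW (doubly robust) per-unit scores and ATE estimate on the observational data.
    aipw = m1 - m0 + a * (y - m1) / e_hat - (1 - a) * (y - m0) / (1 - e_hat)
    obs_ate = aipw.mean()

    return {
        "bias_magnitude": abs(obs_ate - rct_ate),            # gap to the RCT benchmark
        "propensity_auc": roc_auc_score(a, e_hat),           # predictive fit of the treatment model
        "outcome_r2": r2_score(y, np.where(a == 1, m1, m0)), # predictive fit of the outcome model
    }
```

Tracking how the first quantity moves as the other two improve, for example across richer covariate sets or model classes, is the kind of bias-versus-nuisance-fit relationship the abstract refers to.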
Related papers
- When Selection Meets Intervention: Additional Complexities in Causal Discovery [16.629408366459575]
We address the common yet often-overlooked selection bias in interventional studies, where subjects are selectively enrolled into experiments.
We introduce a graphical model that explicitly accounts for both the observed world (where interventions are applied) and the counterfactual world (where selection occurs while interventions have not been applied).
We propose a provably sound algorithm to identify causal relations as well as selection mechanisms up to the equivalence class.
arXiv Detail & Related papers (2025-03-10T13:22:38Z)
- How far can bias go? -- Tracing bias from pretraining data to alignment [54.51310112013655]
This study examines the correlation between gender-occupation bias in pre-training data and its manifestation in LLMs.
Our findings reveal that biases present in pre-training data are amplified in model outputs.
arXiv Detail & Related papers (2024-11-28T16:20:25Z)
- Learning sources of variability from high-dimensional observational studies [41.06757602546625]
Causal inference studies whether the presence of a variable influences an observed outcome.
Our work generalizes causal estimands to outcomes with any number of dimensions or any measurable space.
We propose a simple technique for adjusting universally consistent conditional independence tests.
arXiv Detail & Related papers (2023-07-26T00:01:16Z)
- A Double Machine Learning Approach to Combining Experimental and Observational Data [59.29868677652324]
We propose a double machine learning approach to combine experimental and observational studies.
Our framework tests for violations of external validity and ignorability under milder assumptions.
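As a rough illustration of what such a falsification test can look like (a simplified stand-in, not the paper's double machine learning test statistic): compare the observational doubly robust ATE, e.g. the per-unit AIPW scores from the sketch above, with the RCT difference in means, and test whether the gap is distinguishable from zero.

```python
import numpy as np
from scipy import stats

def ate_gap_test(aipw_scores_obs, y_rct, a_rct):
    """Two-sample z-test on the gap between observational and RCT ATE estimates.

    aipw_scores_obs: per-unit doubly robust scores from the observational study
    (e.g. the `aipw` array in the sketch above); y_rct, a_rct: RCT outcomes/arms.
    """
    obs_ate = aipw_scores_obs.mean()
    obs_se = aipw_scores_obs.std(ddof=1) / np.sqrt(len(aipw_scores_obs))
    y1, y0 = y_rct[a_rct == 1], y_rct[a_rct == 0]
    rct_ate = y1.mean() - y0.mean()
    rct_se = np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))
    z = (obs_ate - rct_ate) / np.hypot(obs_se, rct_se)
    # A small p-value flags a violation of ignorability and/or external validity.
    return {"gap": obs_ate - rct_ate, "p_value": 2 * stats.norm.sf(abs(z))}
```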
arXiv Detail & Related papers (2023-07-04T02:53:11Z)
- Falsification of Internal and External Validity in Observational Studies via Conditional Moment Restrictions [6.9347431938654465]
Given data from both an RCT and an observational study, assumptions on internal and external validity have an observable, testable implication in the form of conditional moment restrictions (CMRs).
We show that expressing these CMRs with respect to the causal effect, or "causal contrast", as opposed to individual counterfactual means, provides a more reliable falsification test.
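A hedged sketch of the distinction this entry draws, assuming each study supplies an estimate and standard error for E[Y(1)] and E[Y(0)] on a comparable population (the paper's CMR machinery is not reproduced here): testing the two counterfactual means separately can reject for offsets that cancel in the contrast, whereas the contrast-based test targets the causal effect directly.

```python
import numpy as np
from scipy import stats

def mean_based_pvalues(obs, rct):
    """obs/rct: dicts mapping arm "Y(1)"/"Y(0)" to (estimate, standard_error)."""
    out = {}
    for arm in ("Y(1)", "Y(0)"):
        (o, o_se), (r, r_se) = obs[arm], rct[arm]
        z = (o - r) / np.hypot(o_se, r_se)
        out[arm] = 2 * stats.norm.sf(abs(z))
    return out  # either arm can reject for discrepancies that cancel in the contrast

def contrast_based_pvalue(obs, rct):
    """Single test on the causal contrast E[Y(1)] - E[Y(0)] (treats the four estimates as independent)."""
    gap = (obs["Y(1)"][0] - obs["Y(0)"][0]) - (rct["Y(1)"][0] - rct["Y(0)"][0])
    se = np.sqrt(sum(v[1] ** 2 for d in (obs, rct) for v in d.values()))
    return 2 * stats.norm.sf(abs(gap / se))
```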
arXiv Detail & Related papers (2023-01-30T18:16:16Z)
- Unsupervised Learning of Unbiased Visual Representations [12.690228982893]
Deep neural networks often struggle to learn robust representations in the presence of dataset biases.
Existing approaches to address this problem typically involve explicit supervision of bias attributes or reliance on prior knowledge about the biases.
We present a fully unsupervised debiasing framework with three key steps.
arXiv Detail & Related papers (2022-04-26T10:51:50Z)
- Epistemic Uncertainty-Weighted Loss for Visual Bias Mitigation [6.85474615630103]
We argue for the relevance of exploring methods that are completely ignorant of the presence of any bias.
We propose using Bayesian neural networks with a predictive uncertainty-weighted loss function to identify potential bias.
We show the method has potential to mitigate visual bias on a bias benchmark dataset and on a real-world face detection problem.
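A minimal PyTorch sketch of the general idea, with two loudly flagged assumptions: MC dropout stands in for the Bayesian neural network, and the per-sample weights are simply proportional to a BALD-style epistemic-uncertainty estimate; the paper's actual weighting scheme may differ.

```python
import torch
import torch.nn.functional as F

def uncertainty_weighted_loss(model, x, y, n_mc=10):
    """Cross-entropy weighted by per-sample epistemic uncertainty (MC-dropout proxy)."""
    model.train()  # keep dropout layers active so repeated forward passes differ
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_mc)])  # (n_mc, B, C)
    mean_p = probs.mean(dim=0)

    # BALD-style decomposition: epistemic = total predictive entropy - expected entropy.
    total = -(mean_p * mean_p.clamp_min(1e-8).log()).sum(dim=-1)
    expected = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean(dim=0)
    epistemic = (total - expected).clamp_min(0.0)

    # Up-weight samples the model is epistemically uncertain about (assumed direction:
    # such samples are more likely to conflict with the dataset bias).
    weights = 1.0 + epistemic / epistemic.mean().clamp_min(1e-8)
    ce = F.nll_loss(mean_p.clamp_min(1e-8).log(), y, reduction="none")
    return (weights.detach() * ce).mean()
```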
arXiv Detail & Related papers (2022-04-20T11:01:51Z)
- Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities [80.37857025201036]
A key challenge for robotic systems is to figure out the behavior of another agent.
Drawing correct inferences is especially challenging when (confounding) factors are not controlled experimentally.
We propose equipping robots with the necessary tools to conduct observational studies on people.
arXiv Detail & Related papers (2022-01-27T22:15:56Z)
- Information-Theoretic Bias Reduction via Causal View of Spurious Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against the algorithmic bias, which incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
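The entry names an information-theoretic bias regularizer without further detail; the sketch below substitutes a common proxy, a gradient-reversal head that penalizes how well a bias attribute can be decoded from the representation, as a rough stand-in for shrinking the dependence between representation and bias attribute, not the paper's actual measurement or loss.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) the gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def debiased_loss(encoder, task_head, bias_head, x, y, b, lam=1.0):
    """Task loss plus an adversarial penalty on decoding the bias attribute b from z."""
    z = encoder(x)
    task_loss = F.cross_entropy(task_head(z), y)
    # The bias head learns to predict b from z; the reversed gradient pushes the
    # encoder to strip that information (a rough proxy for reducing I(z; b)).
    bias_loss = F.cross_entropy(bias_head(GradReverse.apply(z, lam)), b)
    return task_loss + bias_loss
```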
arXiv Detail & Related papers (2022-01-10T01:19:31Z)
- An introduction to causal reasoning in health analytics [2.199093822766999]
We highlight some of the drawbacks that may arise when traditional machine learning and statistical approaches are used to analyze observational data.
We demonstrate applications of causal inference in tackling some common machine learning issues.
arXiv Detail & Related papers (2021-05-10T20:25:56Z)
- ACRE: Abstract Causal REasoning Beyond Covariation [90.99059920286484]
We introduce the Abstract Causal REasoning dataset for systematic evaluation of current vision systems in causal induction.
Motivated by the stream of research on causal discovery in Blicket experiments, we query a visual reasoning system with four types of questions in either an independent scenario or an interventional scenario.
We notice that pure neural models tend towards an associative strategy under their chance-level performance, whereas neuro-symbolic combinations struggle in backward-blocking reasoning.
arXiv Detail & Related papers (2021-03-26T02:42:38Z)
- Towards causal benchmarking of bias in face analysis algorithms [54.19499274513654]
We develop an experimental method for measuring algorithmic bias of face analysis algorithms.
Our proposed method is based on generating synthetic "transects" of matched sample images.
We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms.
arXiv Detail & Related papers (2020-07-13T17:10:34Z)