Active and Passive Causal Inference Learning
- URL: http://arxiv.org/abs/2308.09248v1
- Date: Fri, 18 Aug 2023 02:23:48 GMT
- Title: Active and Passive Causal Inference Learning
- Authors: Daniel Jiwoong Im, Kyunghyun Cho
- Abstract summary: This paper serves as a starting point for machine learning researchers, engineers and students who are interested in causal inference.
We start by laying out an important set of assumptions that are collectively needed for causal identification.
We build out a set of important causal inference techniques by categorizing them into two buckets: active and passive approaches.
- Score: 51.91564516458894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper serves as a starting point for machine learning researchers,
engineers and students who are interested in but not yet familiar with causal
inference. We start by laying out an important set of assumptions that are
collectively needed for causal identification, such as exchangeability,
positivity, consistency and the absence of interference. From these
assumptions, we build out a set of important causal inference techniques by
categorizing them into two buckets: active and passive approaches.
We describe and discuss randomized controlled trials and bandit-based
approaches from the active category. We then describe classical approaches,
such as matching and inverse probability weighting, in the passive category,
followed by more recent deep learning based algorithms. We finish the paper
with a discussion of aspects of causal inference not covered here, such as
collider bias, and expect this paper to provide readers with a diverse set of
starting points for further reading and research in causal inference and
discovery.
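The passive techniques named in the abstract can be made concrete with a small simulation. The sketch below, a hypothetical illustration not taken from the paper, builds a confounded observational dataset and compares a naive difference-in-means estimate with an inverse probability weighting (IPW) estimate; the true propensity is used for simplicity, whereas in practice it would be estimated, e.g. with logistic regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulate a confounded dataset: x drives both treatment assignment t
# and outcome y; the true treatment effect is 2.0.
x = rng.normal(size=n)
p_t = 1 / (1 + np.exp(-x))          # propensity score depends on x
t = rng.binomial(1, p_t)
y = 2.0 * t + 3.0 * x + rng.normal(size=n)

# Naive difference in means is biased upward, since treated units
# tend to have larger x.
naive = y[t == 1].mean() - y[t == 0].mean()

# Inverse probability weighting: reweight each unit by the inverse of
# the probability of the treatment it actually received.
ipw = np.mean(t * y / p_t) - np.mean((1 - t) * y / (1 - p_t))

print(f"naive: {naive:.2f}, IPW: {ipw:.2f}")
```

Here the IPW estimate recovers a value close to the true effect of 2.0, while the naive estimate absorbs the confounding through x; this relies on the positivity and exchangeability assumptions the abstract lists.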
Related papers
- "Why Should I Review This Paper?" Unifying Semantic, Topic, and Citation Factors for Paper-Reviewer Matching [31.658757187200603]
We propose a unified model for paper-reviewer matching that jointly captures semantic, topic, and citation factors.
Experiments on four datasets consistently validate our proposed UniPR model in comparison with state-of-the-art paper-reviewer matching methods.
arXiv Detail & Related papers (2023-10-23T01:29:18Z)
- Causal Discovery and Prediction: Methods and Algorithms [0.0]
In this thesis we introduce a generic a priori assessment of each possible intervention.
We propose an active learning algorithm that identifies the causal relations in any given causal model.
arXiv Detail & Related papers (2023-09-18T01:19:37Z)
- Fairness and robustness in anti-causal prediction [73.693135253335]
Robustness to distribution shift and fairness have independently emerged as two important desiderata required of machine learning models.
While these two desiderata seem related, the connection between them is often unclear in practice.
By taking this perspective, we draw explicit connections between a common fairness criterion - separation - and a common notion of robustness.
arXiv Detail & Related papers (2022-09-20T02:41:17Z)
- Active Bayesian Causal Inference [72.70593653185078]
We propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning.
ABCI jointly infers a posterior over causal models and queries of interest.
We show that our approach is more data-efficient than several baselines that only focus on learning the full causal graph.
arXiv Detail & Related papers (2022-06-04T22:38:57Z)
- Causality, Causal Discovery, and Causal Inference in Structural Engineering [1.827510863075184]
This paper builds a case for causal discovery and causal inference from a civil and structural engineering perspective.
More specifically, this paper outlines the key principles of causality and the most commonly used algorithms and packages for causal discovery and causal inference.
arXiv Detail & Related papers (2022-04-04T14:49:47Z)
- The Familiarity Hypothesis: Explaining the Behavior of Deep Open Set Methods [86.39044549664189]
Anomaly detection algorithms for feature-vector data identify anomalies as outliers, but outlier detection has not worked well in deep learning.
This paper proposes the Familiarity Hypothesis that these methods succeed because they are detecting the absence of familiar learned features rather than the presence of novelty.
The paper concludes with a discussion of whether familiarity detection is an inevitable consequence of representation learning.
arXiv Detail & Related papers (2022-03-04T18:32:58Z)
- Ensemble-based Uncertainty Quantification: Bayesian versus Credal Inference [0.0]
We consider ensemble-based approaches to uncertainty quantification.
We specifically focus on Bayesian methods and approaches based on so-called credal sets.
The effectiveness of corresponding measures is evaluated and compared in an empirical study on classification with a reject option.
arXiv Detail & Related papers (2021-07-21T22:47:24Z)
- Towards Causal Representation Learning [96.110881654479]
The two fields of machine learning and graphical causality arose and developed separately.
There is now cross-pollination and increasing interest in both fields to benefit from the advances of the other.
arXiv Detail & Related papers (2021-02-22T15:26:57Z)
- Pairwise Supervision Can Provably Elicit a Decision Boundary [84.58020117487898]
Similarity learning is the problem of eliciting useful representations by predicting the relationship between a pair of patterns.
We show that similarity learning is capable of solving binary classification by directly eliciting a decision boundary.
arXiv Detail & Related papers (2020-06-11T05:35:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.