A Topological Perspective on Causal Inference
- URL: http://arxiv.org/abs/2107.08558v1
- Date: Sun, 18 Jul 2021 23:09:03 GMT
- Title: A Topological Perspective on Causal Inference
- Authors: Duligur Ibeling, Thomas Icard
- Abstract summary: We show that substantive assumption-free causal inference is possible only in a meager set of structural causal models.
Our results show that inductive assumptions sufficient to license valid causal inferences are statistically unverifiable in principle.
An additional benefit of our topological approach is that it easily accommodates SCMs with infinitely many variables.
- Score: 10.965065178451104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a topological learning-theoretic perspective on causal
inference by introducing a series of topologies defined on general spaces of
structural causal models (SCMs). As an illustration of the framework we prove a
topological causal hierarchy theorem, showing that substantive assumption-free
causal inference is possible only in a meager set of SCMs. Thanks to a known
correspondence between open sets in the weak topology and statistically
verifiable hypotheses, our results show that inductive assumptions sufficient
to license valid causal inferences are statistically unverifiable in principle.
Similar to no-free-lunch theorems for statistical inference, the present
results clarify the inevitability of substantial assumptions for causal
inference. An additional benefit of our topological approach is that it easily
accommodates SCMs with infinitely many variables. We finally suggest that the
framework may be helpful for the positive project of exploring and assessing
alternative causal-inductive assumptions.
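The central correspondence the abstract invokes can be sketched informally as follows (a reconstruction in standard notation, not quoted from the paper; the symbols $\mathcal{M}$, $H$, and $P_M$ are illustrative):

```latex
% A hypothesis H (a set of SCMs) is statistically verifiable exactly when
% the observational distributions it induces form an open set in the weak
% topology on probability measures:
%   H verifiable  <=>  { P_M : M in H } is weakly open.
% The hierarchy theorem then asserts that the set of SCMs whose causal
% quantities are pinned down by their observational distribution is meager,
% i.e. a countable union of nowhere dense sets:
\[
  \{\, M \in \mathcal{M} : \text{the causal queries about } M
      \text{ are determined by } P_M \,\}
  \;=\; \bigcup_{n \in \mathbb{N}} A_n,
  \qquad \text{each } A_n \text{ nowhere dense},
\]
```

Read together, the two statements yield the paper's negative conclusion: any inductive assumption strong enough to single out a non-meager set of "well-behaved" SCMs fails to correspond to a weakly open set, and so cannot be statistically verified.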
Related papers
- Measurability in the Fundamental Theorem of Statistical Learning [0.0]
The Fundamental Theorem of Statistical Learning states that a hypothesis space is PAC learnable if and only if its VC dimension is finite.
This paper presents sufficient conditions for the PAC learnability of hypothesis spaces defined over o-minimal expansions of the reals.
arXiv Detail & Related papers (2024-10-14T08:03:06Z)
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Relating Wigner's Friend Scenarios to Nonclassical Causal Compatibility, Monogamy Relations, and Fine Tuning [0.7421845364041001]
We show that the LF no-go theorem poses formidable challenges for the field of causal modeling.
We prove that no nonclassical causal model can explain violations of LF inequalities without violating the No Fine-Tuning principle.
arXiv Detail & Related papers (2023-09-22T16:32:39Z)
- Answering Causal Queries at Layer 3 with DiscoSCMs-Embracing Heterogeneity [0.0]
This paper advocates for the Distribution-consistency Structural Causal Models (DiscoSCM) framework as a pioneering approach to counterfactual inference.
arXiv Detail & Related papers (2023-09-17T17:01:05Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Experiments conducted on multiple datasets offer compelling support for our theoretical claims.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Learning a Structural Causal Model for Intuition Reasoning in Conversation [20.243323155177766]
Reasoning, a crucial aspect of NLP research, has not been adequately addressed by prevailing models.
We develop a conversation cognitive model (CCM) that explains how each utterance receives and activates channels of information.
By leveraging variational inference, it explores substitutes for implicit causes, addresses the issue of their unobservability, and reconstructs the causal representations of utterances through the evidence lower bounds.
arXiv Detail & Related papers (2023-05-28T13:54:09Z)
- A Measure-Theoretic Axiomatisation of Causality [55.6970314129444]
We argue in favour of taking Kolmogorov's measure-theoretic axiomatisation of probability as the starting point towards an axiomatisation of causality.
Our proposed framework is rigorously grounded in measure theory, but it also sheds light on long-standing limitations of existing frameworks.
arXiv Detail & Related papers (2023-05-19T13:15:48Z)
- Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z)
- Learning Causal Semantic Representation for Out-of-Distribution Prediction [125.38836464226092]
We propose a Causal Semantic Generative model (CSG) based on causal reasoning so that the two factors are modeled separately.
We show that CSG can identify the semantic factor by fitting training data, and this semantic-identification guarantees the boundedness of OOD generalization error.
arXiv Detail & Related papers (2020-11-03T13:16:05Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
- A structure theorem for generalized-noncontextual ontological models [0.0]
We use a process-theoretic framework to prove that every generalized-noncontextual ontological model of a tomographically local operational theory has a surprisingly rigid and simple mathematical structure.
We extend known results concerning the equivalence of different notions of classicality from prepare-measure scenarios to arbitrary compositional scenarios.
arXiv Detail & Related papers (2020-05-14T17:28:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.