Neural Causal Abstractions
- URL: http://arxiv.org/abs/2401.02602v2
- Date: Fri, 23 Feb 2024 02:22:42 GMT
- Title: Neural Causal Abstractions
- Authors: Kevin Xia, Elias Bareinboim
- Abstract summary: We develop a new family of causal abstractions by clustering variables and their domains.
We show that such abstractions are learnable in practical settings through Neural Causal Models.
Our experiments support the theory and illustrate how to scale causal inferences to high-dimensional settings involving image data.
- Score: 63.21695740637627
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The abilities of humans to understand the world in terms of cause and effect
relationships, as well as to compress information into abstract concepts, are
two hallmark features of human intelligence. These two topics have been studied
in tandem in the literature under the rubric of causal abstractions theory. In
practice, it remains an open problem how to best leverage abstraction theory in
real-world causal inference tasks, where the true mechanisms are unknown and
only limited data is available. In this paper, we develop a new family of
causal abstractions by clustering variables and their domains. This approach
refines and generalizes previous notions of abstractions to better accommodate
individual causal distributions that are spawned by Pearl's causal hierarchy.
We show that such abstractions are learnable in practical settings through
Neural Causal Models (Xia et al., 2021), enabling the use of the deep learning
toolkit to solve various challenging causal inference tasks -- identification,
estimation, sampling -- at different levels of granularity. Finally, we
integrate these results with representation learning to create more flexible
abstractions, moving these results closer to practical applications. Our
experiments support the theory and illustrate how to scale causal inferences to
high-dimensional settings involving image data.
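To make the clustering construction concrete, the following is a minimal sketch (the four-variable toy SCM, the hand-picked cluster map, and summation as the domain aggregator are all assumptions for illustration, not the paper's construction): low-level variables are grouped into clusters, each cluster's values are aggregated into a single high-level variable, and interventional queries can then be posed at the coarser level.

```python
# Minimal sketch of a clustering-based causal abstraction over a toy SCM.
# Variable names, mechanisms, and the cluster map are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def sample_low_level(n, do_x=None):
    """Sample the low-level SCM; `do_x` optionally intervenes on X1 and X2."""
    u1, u2, u3, u4 = rng.normal(size=(4, n))
    x1 = u1 if do_x is None else np.full(n, do_x)
    x2 = u2 if do_x is None else np.full(n, do_x)
    y1 = 0.8 * x1 + 0.3 * x2 + u3   # two low-level outcome variables
    y2 = 0.5 * x1 + 0.5 * x2 + u4
    return np.stack([x1, x2, y1, y2], axis=1)

# Cluster map: group low-level variables into high-level ones, then
# aggregate each cluster's domain (here: by summation) into one value.
clusters = {"X": [0, 1], "Y": [2, 3]}

def abstract(samples):
    return {name: samples[:, idx].sum(axis=1) for name, idx in clusters.items()}

# Interventional query posed at the abstract level: E[Y | do(X-cluster = 1)].
high = abstract(sample_low_level(100_000, do_x=1.0))
print("E[Y | do(x)] ~", high["Y"].mean())
```

In the paper's setting the mechanisms are not written down by hand as above; they are learned from data with Neural Causal Models, which is what allows identification, estimation, and sampling to be carried out at the abstract level.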
Related papers
- Learning Discrete Concepts in Latent Hierarchical Models [73.01229236386148]
Learning concepts from natural high-dimensional data holds promise for building human-aligned and interpretable machine learning models.
We formalize concepts as discrete latent causal variables that are related via a hierarchical causal model.
We substantiate our theoretical claims with synthetic data experiments.
arXiv Detail & Related papers (2024-06-01T18:01:03Z)
- Learning Causal Abstractions of Linear Structural Causal Models [18.132607344833925]
Causal Abstraction provides a framework for relating two Structural Causal Models at different levels of detail.
We tackle both issues for linear causal models with linear abstraction functions.
In particular, we introduce Abs-LiNGAM, a method that leverages the constraints induced by the learned high-level model and the abstraction function to speed up the recovery of the larger low-level model.
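As a toy illustration of this linear setting (the matrices, noise distribution, and variable grouping below are made up for the sketch; this is not the Abs-LiNGAM procedure itself), a low-level linear SCM can be pushed through a linear abstraction function, and the induced high-level model is again linear:

```python
# Hedged sketch: a linear SCM plus a linear abstraction function.
import numpy as np

rng = np.random.default_rng(1)

# Low-level linear SCM over 4 variables: x = B x + u, with B strictly
# lower-triangular, so the system is acyclic and solves as x = (I - B)^-1 u.
B = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.7, 0.0, 0.0, 0.0],
              [0.2, 0.4, 0.0, 0.0],
              [0.0, 0.3, 0.5, 0.0]])
n = 50_000
u = rng.laplace(size=(n, 4))             # non-Gaussian noise, as in LiNGAM
x = u @ np.linalg.inv(np.eye(4) - B).T   # solve the structural equations

# Linear abstraction tau: R^4 -> R^2, merging (x1, x2) and (x3, x4).
T = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
z = x @ T.T                              # high-level samples

# The induced high-level covariance is exactly T S T^T.
S = np.cov(x, rowvar=False)
print(np.allclose(np.cov(z, rowvar=False), T @ S @ T.T))
```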
arXiv Detail & Related papers (2024-06-01T10:42:52Z)
- Skews in the Phenomenon Space Hinder Generalization in Text-to-Image Generation [59.138470433237615]
We introduce statistical metrics that quantify both the linguistic and visual skew of a dataset for relational learning.
We show that systematically controlled metrics are strongly predictive of generalization performance.
This work points to enhancing data diversity or balance, rather than simply scaling up absolute dataset size, as an important direction.
arXiv Detail & Related papers (2024-03-25T03:18:39Z)
- Emergence and Causality in Complex Systems: A Survey on Causal Emergence and Related Quantitative Studies [12.78006421209864]
Causal emergence theory employs measures of causality to quantify emergence.
Two key problems are addressed: quantifying causal emergence and identifying it in data.
We highlight that the architectures used for identifying causal emergence are shared by causal representation learning, causal model abstraction, and world model-based reinforcement learning.
arXiv Detail & Related papers (2023-12-28T04:20:46Z)
- Does Deep Learning Learn to Abstract? A Systematic Probing Framework [69.2366890742283]
Abstraction is a desirable capability for deep learning models, meaning the ability to induce abstract concepts from concrete instances and flexibly apply them beyond the learning context.
We introduce a systematic probing framework to explore the abstraction capability of deep learning models from a transferability perspective.
arXiv Detail & Related papers (2023-02-23T12:50:02Z)
- Causal Triplet: An Open Challenge for Intervention-centric Causal Representation Learning [98.78136504619539]
Causal Triplet is a causal representation learning benchmark featuring visually more complex scenes.
We show that models built with the knowledge of disentangled or object-centric representations significantly outperform their distributed counterparts.
arXiv Detail & Related papers (2023-01-12T17:43:38Z)
- Towards Computing an Optimal Abstraction for Structural Causal Models [16.17846886492361]
We focus on the problem of learning abstractions.
We suggest a concrete measure of information loss, and we illustrate its contribution to learning new abstractions.
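One generic way to make such a measure concrete (a hedged sketch of a consistency-style loss, not necessarily the paper's exact proposal) is to compare "intervene in the low-level model, then abstract" against "intervene directly in a candidate high-level model," and take a distributional distance between the two as the information loss:

```python
# Sketch of a consistency-style abstraction loss; the toy models and the
# total-variation estimate are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(2)

def low_level_then_abstract(n, do=None):
    a = rng.normal(size=n) if do is None else np.full(n, do)
    b = 0.5 * a + rng.normal(size=n)
    return a + b                          # abstracted outcome tau(a, b)

def high_level(n, do=None):
    A = rng.normal(size=n) if do is None else np.full(n, do)
    return 1.5 * A + rng.normal(size=n)   # candidate abstract SCM

def loss(n=100_000, do=1.0, bins=np.linspace(-6.0, 6.0, 61)):
    p, _ = np.histogram(low_level_then_abstract(n, do), bins=bins, density=True)
    q, _ = np.histogram(high_level(n, do), bins=bins, density=True)
    return 0.5 * np.abs(p - q).sum() * (bins[1] - bins[0])  # total variation

print("abstraction loss ~", loss())       # near zero: the abstraction is faithful
```

A faithful abstraction drives this loss toward zero, and a lossy one leaves a gap, so a learner can search over candidate high-level models by minimizing it.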
arXiv Detail & Related papers (2022-08-01T14:35:57Z)
- Causal Reasoning Meets Visual Representation Learning: A Prospective Study [117.08431221482638]
Lack of interpretability, robustness, and out-of-distribution generalization are growing challenges for existing visual models.
Inspired by the strong inference ability of human-level agents, researchers have devoted great effort in recent years to developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussions, and bring to the forefront the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z)
- A Theory of Abstraction in Reinforcement Learning [18.976500531441346]
In this dissertation, I present a theory of abstraction in reinforcement learning.
I first offer three desiderata for functions that carry out the process of abstraction.
I then present a suite of new algorithms and analysis that clarify how agents can learn to abstract according to these desiderata.
arXiv Detail & Related papers (2022-03-01T12:46:28Z)
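To ground the notion of abstraction discussed in that last entry, the following is a minimal sketch of classic state aggregation in a tabular MDP (the toy MDP, the aggregation map `phi`, and the uniform state weighting are assumptions for illustration, not the dissertation's algorithms):

```python
# Sketch of state abstraction in RL: ground states merged by phi share one
# abstract state; abstract rewards/transitions are weighted block averages.
import numpy as np

n_s, n_a = 6, 2
rng = np.random.default_rng(3)
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))  # P[s, a] = next-state dist
R = rng.uniform(size=(n_s, n_a))                  # R[s, a] = expected reward

phi = np.array([0, 0, 1, 1, 2, 2])                # aggregate 6 states into 3
n_abs = phi.max() + 1
w = np.ones(n_s) / n_s                            # assumed ground-state weights

R_abs = np.zeros((n_abs, n_a))
P_abs = np.zeros((n_abs, n_a, n_abs))
for s_abs in range(n_abs):
    block = np.flatnonzero(phi == s_abs)
    wb = w[block] / w[block].sum()                # weights within the block
    R_abs[s_abs] = wb @ R[block]                  # weighted mean reward
    for t_abs in range(n_abs):
        cols = np.flatnonzero(phi == t_abs)
        P_abs[s_abs, :, t_abs] = wb @ P[block][:, :, cols].sum(axis=2)

print(P_abs.sum(axis=2))                          # every row sums to 1
```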
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.