Causal Abstraction Inference under Lossy Representations
- URL: http://arxiv.org/abs/2509.21607v1
- Date: Thu, 25 Sep 2025 21:20:42 GMT
- Title: Causal Abstraction Inference under Lossy Representations
- Authors: Kevin Xia, Elias Bareinboim
- Abstract summary: We introduce a new type of abstraction, called projected abstractions, that generalizes existing definitions to accommodate lossy representations. We show how to construct a projected abstraction from the low-level model and how it translates equivalent observational, interventional, and counterfactual causal queries from the low level to the high level.
- Score: 53.18851962820361
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The study of causal abstractions bridges two integral components of human intelligence: the ability to determine cause and effect, and the ability to interpret complex patterns into abstract concepts. Formally, causal abstraction frameworks define connections between complicated low-level causal models and simple high-level ones. One major limitation of most existing definitions is that they are not well-defined for lossy abstraction functions, in which multiple low-level interventions can have different effects while mapping to the same high-level intervention (violating an assumption called the abstract invariance condition). In this paper, we introduce a new type of abstraction, called projected abstractions, that generalizes existing definitions to accommodate lossy representations. We show how to construct a projected abstraction from the low-level model and how it translates equivalent observational, interventional, and counterfactual causal queries from the low level to the high level. Given that the true model is rarely available in practice, we prove a new graphical criterion for identifying and estimating high-level causal queries from limited low-level data. Finally, we experimentally show the effectiveness of projected abstraction models in high-dimensional image settings.
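The failure of abstract invariance described in the abstract can be illustrated with a toy example. The sketch below is not the paper's construction: the variable names, structural equations, and abstraction function `tau` are all invented for illustration. It builds a three-variable low-level SCM and a lossy abstraction that collapses X1 and X2 into a single high-level variable, so that two distinct low-level interventions map to the same high-level intervention do(X=1) while producing different effects on Y.

```python
# Hypothetical toy example (not from the paper): a lossy abstraction tau over a
# 3-variable low-level SCM, showing two low-level interventions that collapse
# to the same high-level intervention but have different effects.
import random

def low_level_scm(do_x1=None, do_x2=None, seed=0):
    """Low-level SCM with X1, X2 -> Y; do_x1/do_x2 fix variables by intervention."""
    rng = random.Random(seed)
    u1, u2, uy = rng.random(), rng.random(), rng.random()
    x1 = do_x1 if do_x1 is not None else int(u1 < 0.5)
    x2 = do_x2 if do_x2 is not None else int(u2 < 0.5)
    y = int((x1 + 2 * x2 + uy) > 1.5)   # X2 has a stronger effect on Y than X1
    return x1, x2, y

def tau(x1, x2, y):
    """Lossy abstraction: X1 and X2 collapse to one high-level X = X1 OR X2."""
    return (int(x1 or x2), y)

def high_level_effect(do_x1, do_x2, n=10_000):
    """Monte-Carlo estimate of E[Y] under a low-level intervention, seen at the high level."""
    return sum(tau(*low_level_scm(do_x1, do_x2, seed=i))[1] for i in range(n)) / n

# Both interventions map to the same high-level do(X=1), yet differ in effect:
e_a = high_level_effect(do_x1=1, do_x2=0)   # do(X1=1, X2=0) -> tau: do(X=1)
e_b = high_level_effect(do_x1=0, do_x2=1)   # do(X1=0, X2=1) -> tau: do(X=1)
print(e_a, e_b)  # e_a is about 0.5, e_b is exactly 1.0
```

Because `tau` sends both interventions to do(X=1), a single high-level model satisfying abstract invariance cannot represent both effects at once; this many-to-one situation is what projected abstractions are meant to accommodate.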
Related papers
- Causal and Compositional Abstraction [0.0]
We present a general account of abstractions between low- and high-level models as natural transformations. This provides a new formalisation of causal abstraction, unifying several notions in the literature. We show that abstraction can be generalised to further compositional models.
arXiv Detail & Related papers (2026-02-18T17:06:09Z) - Causal Abstractions, Categorically Unified [9.341830361844337]
We present a categorical framework for relating causal models that represent the same system at different levels of abstraction. Our approach unifies and generalizes previously considered causal abstractions. We discuss how methods from mechanistic interpretability, such as circuit analysis and sparse autoencoders, fit within our framework.
arXiv Detail & Related papers (2025-10-06T17:09:30Z) - Learning Causal Abstractions of Linear Structural Causal Models [18.132607344833925]
Causal Abstraction provides a framework for formally relating two Structural Causal Models at different levels of detail.
We tackle both issues for linear causal models with linear abstraction functions.
In particular, we introduce Abs-LiNGAM, a method that leverages the constraints induced by the learned high-level model and the abstraction function to speed up the recovery of the larger low-level model.
arXiv Detail & Related papers (2024-06-01T10:42:52Z) - How to Handle Sketch-Abstraction in Sketch-Based Image Retrieval? [120.49126407479717]
We propose a sketch-based image retrieval framework capable of handling sketch abstraction at varied levels.
For granularity-level abstraction understanding, we dictate that the retrieval model should not treat all abstraction-levels equally.
Our Acc.@q loss uniquely allows a sketch to narrow/broaden its focus in terms of how stringent the evaluation should be.
arXiv Detail & Related papers (2024-03-11T23:08:29Z) - Building Minimal and Reusable Causal State Abstractions for
Reinforcement Learning [63.58935783293342]
Causal Bisimulation Modeling (CBM) is a method that learns the causal relationships in the dynamics and reward functions for each task to derive a minimal, task-specific abstraction.
CBM's learned implicit dynamics models identify the underlying causal relationships and state abstractions more accurately than explicit ones.
arXiv Detail & Related papers (2024-01-23T05:43:15Z) - Neural Causal Abstractions [63.21695740637627]
We develop a new family of causal abstractions by clustering variables and their domains.
We show that such abstractions are learnable in practical settings through Neural Causal Models.
Our experiments support the theory and illustrate how to scale causal inferences to high-dimensional settings involving image data.
arXiv Detail & Related papers (2024-01-05T02:00:27Z) - Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations [62.65877150123775]
Causal abstraction is a promising theoretical framework for explainable artificial intelligence.
Existing causal abstraction methods require a brute-force search over alignments between the high-level model and the low-level one.
We present distributed alignment search (DAS), which overcomes these limitations.
arXiv Detail & Related papers (2023-03-05T00:57:49Z) - Does Deep Learning Learn to Abstract? A Systematic Probing Framework [69.2366890742283]
Abstraction is a desirable capability for deep learning models: the ability to induce abstract concepts from concrete instances and to flexibly apply them beyond the learning context.
We introduce a systematic probing framework to explore the abstraction capability of deep learning models from a transferability perspective.
arXiv Detail & Related papers (2023-02-23T12:50:02Z) - Causal Abstraction: A Theoretical Foundation for Mechanistic Interpretability [30.76910454663951]
Causal abstraction provides a theoretical foundation for mechanistic interpretability. Our contributions generalize the theory of causal abstraction from mechanism replacement to arbitrary mechanism transformation.
arXiv Detail & Related papers (2023-01-11T20:42:41Z) - Towards Computing an Optimal Abstraction for Structural Causal Models [16.17846886492361]
We focus on the problem of learning abstractions.
We suggest a concrete measure of information loss, and we illustrate its contribution to learning new abstractions.
arXiv Detail & Related papers (2022-08-01T14:35:57Z)
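The "concrete measure of information loss" mentioned in the last entry is not spelled out in its summary. As a hedged illustration only (the function name, data layout, and measure below are invented for this sketch, not taken from the paper), one simple proxy is the spread of low-level interventional effects that collapse onto the same high-level intervention: zero spread means abstract invariance holds, positive spread means the abstraction is lossy.

```python
# Hypothetical sketch (not the paper's measure): quantify abstraction loss as
# the worst-case spread of low-level effects mapped to one high-level do().
def abstraction_loss(effects_by_hl_intervention):
    """effects_by_hl_intervention maps each high-level intervention to the
    list of E[Y] values of the low-level interventions it represents."""
    return max(max(v) - min(v) for v in effects_by_hl_intervention.values())

loss = abstraction_loss({
    "do(X=1)": [0.5, 1.0],   # two low-level do's with different effects -> lossy
    "do(X=0)": [0.2, 0.2],   # consistent effects -> no loss here
})
print(loss)  # 0.5
```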
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.