Towards Computing an Optimal Abstraction for Structural Causal Models
- URL: http://arxiv.org/abs/2208.00894v1
- Date: Mon, 1 Aug 2022 14:35:57 GMT
- Title: Towards Computing an Optimal Abstraction for Structural Causal Models
- Authors: Fabio Massimo Zennaro, Paolo Turrini, Theodoros Damoulas
- Abstract summary: We focus on the problem of learning abstractions.
We suggest a concrete measure of information loss, and we illustrate its contribution to learning new abstractions.
- Score: 16.17846886492361
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Working with causal models at different levels of abstraction is an important
feature of science. Existing work has already considered the problem of
expressing formally the relation of abstraction between causal models. In this
paper, we focus on the problem of learning abstractions. We start by defining
the learning problem formally in terms of the optimization of a standard
measure of consistency. We then point out the limitation of this approach, and
we suggest extending the objective function with a term accounting for
information loss. We suggest a concrete measure of information loss, and we
illustrate its contribution to learning new abstractions.
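The abstract describes learning an abstraction by optimizing a consistency measure extended with an information-loss term. A minimal sketch of that idea in Python, assuming a toy setup (the state distribution, the particular loss definitions, and the weight `lam` below are illustrative stand-ins, not the paper's actual formulation): we brute-force over surjective abstraction maps from four low-level states to two high-level states and pick the map minimizing the combined objective.

```python
from itertools import product
import math

# Hypothetical low-level state distribution (illustrative numbers only).
p_low = [0.4, 0.1, 0.1, 0.4]

def info_loss(alpha):
    # Entropy lost by coarse-graining: H(low-level) - H(abstracted).
    def H(p):
        return -sum(q * math.log2(q) for q in p if q > 0)
    p_high = [0.0, 0.0]
    for s, q in enumerate(p_low):
        p_high[alpha[s]] += q
    return H(p_low) - H(p_high)

def consistency_loss(alpha):
    # Stand-in for an interventional-consistency measure: penalize maps
    # that merge low-level states with very different probabilities.
    cost = 0.0
    for i in range(4):
        for j in range(i + 1, 4):
            if alpha[i] == alpha[j]:
                cost += abs(p_low[i] - p_low[j])
    return cost

def best_abstraction(lam=0.5):
    # J(alpha) = consistency_loss(alpha) + lam * info_loss(alpha),
    # minimized over all surjective maps {0,1,2,3} -> {0,1}.
    candidates = [a for a in product([0, 1], repeat=4) if len(set(a)) == 2]
    return min(candidates,
               key=lambda a: consistency_loss(a) + lam * info_loss(a))

alpha_star = best_abstraction()
```

With these toy numbers the winning map merges the two 0.4-probability states together and the two 0.1-probability states together, which keeps consistency cost at zero while sacrificing some entropy; raising `lam` shifts the trade-off toward maps that preserve more information.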
Related papers
- Causal Abstraction in Model Interpretability: A Compact Survey [5.963324728136442]
Causal abstraction provides a principled approach to understanding and explaining the causal mechanisms underlying model behavior.
This survey paper delves into the realm of causal abstraction, examining its theoretical foundations, practical applications, and implications for the field of model interpretability.
arXiv Detail & Related papers (2024-10-26T12:24:28Z)
- Abstraction Alignment: Comparing Model and Human Conceptual Relationships [26.503178592074757]
We introduce abstraction alignment, a methodology to measure the agreement between a model's learned abstraction and the expected human abstraction.
In evaluation tasks, abstraction alignment provides a deeper understanding of model behavior and dataset content.
arXiv Detail & Related papers (2024-07-17T13:27:26Z)
- Learning Causal Abstractions of Linear Structural Causal Models [18.132607344833925]
Causal Abstraction provides a framework for formalizing the relationship between two Structural Causal Models defined at different levels of detail.
We tackle both issues for linear causal models with linear abstraction functions.
In particular, we introduce Abs-LiNGAM, a method that leverages the constraints induced by the learned high-level model and the abstraction function to speed up the recovery of the larger low-level model.
arXiv Detail & Related papers (2024-06-01T10:42:52Z)
- Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning [63.58935783293342]
Causal Bisimulation Modeling (CBM) is a method that learns the causal relationships in the dynamics and reward functions for each task to derive a minimal, task-specific abstraction.
CBM's learned implicit dynamics models identify the underlying causal relationships and state abstractions more accurately than explicit ones.
arXiv Detail & Related papers (2024-01-23T05:43:15Z)
- Neural Causal Abstractions [63.21695740637627]
We develop a new family of causal abstractions by clustering variables and their domains.
We show that such abstractions are learnable in practical settings through Neural Causal Models.
Our experiments support the theory and illustrate how to scale causal inferences to high-dimensional settings involving image data.
arXiv Detail & Related papers (2024-01-05T02:00:27Z)
- Does Deep Learning Learn to Abstract? A Systematic Probing Framework [69.2366890742283]
Abstraction is a desirable capability for deep learning models: the ability to induce abstract concepts from concrete instances and to apply them flexibly beyond the learning context.
We introduce a systematic probing framework to explore the abstraction capability of deep learning models from a transferability perspective.
arXiv Detail & Related papers (2023-02-23T12:50:02Z)
- Causal Dynamics Learning for Task-Independent State Abstraction [61.707048209272884]
We introduce Causal Dynamics Learning for Task-Independent State Abstraction (CDL).
CDL learns a causal dynamics model, with theoretical guarantees, that removes unnecessary dependencies between state variables and the action.
A state abstraction can then be derived from the learned dynamics.
arXiv Detail & Related papers (2022-06-27T17:02:53Z)
- Towards a Mathematical Theory of Abstraction [0.0]
We provide a precise characterisation of what an abstraction is and, perhaps more importantly, suggest how abstractions can be learnt directly from data.
Our results have deep implications for statistical inference and machine learning and could be used to develop explicit methods for learning precise kinds of abstractions directly from data.
arXiv Detail & Related papers (2021-06-03T13:23:49Z)
- Leveraging Unlabeled Data for Entity-Relation Extraction through Probabilistic Constraint Satisfaction [54.06292969184476]
We study the problem of entity-relation extraction in the presence of symbolic domain knowledge.
Our approach employs semantic loss which captures the precise meaning of a logical sentence.
With a focus on low-data regimes, we show that semantic loss outperforms the baselines by a wide margin.
arXiv Detail & Related papers (2021-03-20T00:16:29Z)
- Towards Interpretable Reasoning over Paragraph Effects in Situation [126.65672196760345]
We focus on the task of reasoning over paragraph effects in a situation, which requires a model to understand cause and effect.
We propose a sequential approach for this task which explicitly models each step of the reasoning process with neural network modules.
In particular, five reasoning modules are designed and learned in an end-to-end manner, which leads to a more interpretable model.
arXiv Detail & Related papers (2020-10-03T04:03:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.