Abstraction between Structural Causal Models: A Review of Definitions and Properties
- URL: http://arxiv.org/abs/2207.08603v1
- Date: Mon, 18 Jul 2022 13:47:20 GMT
- Title: Abstraction between Structural Causal Models: A Review of Definitions and Properties
- Authors: Fabio Massimo Zennaro
- Abstract summary: Structural causal models (SCMs) are a widespread formalism to deal with causal systems.
This paper focuses on the formal properties of a map between SCMs, highlighting the different layers (structural, distributional) at which these properties may be enforced.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Structural causal models (SCMs) are a widespread formalism to deal with causal systems. A recent direction of research has considered the problem of formally relating SCMs at different levels of abstraction, by defining maps between SCMs and imposing a requirement of interventional consistency. This paper offers a review of the solutions proposed so far, focusing on the formal properties of a map between SCMs and highlighting the different layers (structural, distributional) at which these properties may be enforced. This allows us to distinguish families of abstractions that may or may not be permitted by choosing to guarantee certain properties instead of others. Such an understanding not only allows one to distinguish among proposals for causal abstraction with greater awareness, but also to tailor the definition of abstraction to the forms of abstraction relevant to specific applications.
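As a concrete illustration of the setting described in the abstract, the following minimal Python sketch pairs a toy low-level SCM with a coarser high-level SCM, defines an abstraction map, and checks interventional consistency for one intervention. The variables (X1, X2, Y at the low level; Z, W at the high level) and the maps `tau` and `omega` are illustrative assumptions, not notation taken from the paper or from any specific proposal it reviews.

```python
# Minimal sketch of an abstraction map between two SCMs and the
# interventional-consistency requirement. All names and mechanisms
# are illustrative assumptions, not taken from the paper.

def low_level_scm(do=None):
    """Low-level SCM: X1 and X2 are fixed exogenously, Y = X1 + X2."""
    do = do or {}
    state = {"X1": do.get("X1", 1.0), "X2": do.get("X2", 2.0)}
    state["Y"] = do.get("Y", state["X1"] + state["X2"])
    return state

def high_level_scm(do=None):
    """High-level SCM: Z aggregates (X1, X2), and W = Z."""
    do = do or {}
    state = {"Z": do.get("Z", 3.0)}
    state["W"] = do.get("W", state["Z"])
    return state

def tau(low_state):
    """Abstraction on states: (X1, X2) |-> Z, Y |-> W."""
    return {"Z": low_state["X1"] + low_state["X2"], "W": low_state["Y"]}

def omega(low_do):
    """Abstraction on interventions: do(X1=a, X2=b) |-> do(Z=a+b)."""
    return {"Z": low_do["X1"] + low_do["X2"]}

# Interventional consistency (commutativity): intervening at the low level
# and then abstracting should agree with abstracting the intervention and
# applying it to the high-level model.
low_do = {"X1": 4.0, "X2": 5.0}
assert tau(low_level_scm(low_do)) == high_level_scm(omega(low_do))
```

In this deterministic toy case the consistency check is exact equality of states; the proposals reviewed in the paper may instead state it at the distributional layer, comparing the pushforward of interventional distributions.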
Related papers
- Athanor: Local Search over Abstract Constraint Specifications [2.3383199519492455]
We focus on general-purpose local search solvers that accept as input a constraint model.
The Athanor solver we describe herein differs in that it begins from a specification of a problem in the abstract constraint specification language Essence.
arXiv Detail & Related papers (2024-10-08T11:41:38Z)
- Sequential Representation Learning via Static-Dynamic Conditional Disentanglement [58.19137637859017]
This paper explores self-supervised disentangled representation learning within sequential data, focusing on separating time-independent and time-varying factors in videos.
We propose a new model that breaks the usual independence assumption between those factors by explicitly accounting for the causal relationship between the static/dynamic variables.
Experiments show that the proposed approach outperforms previous complex state-of-the-art techniques in scenarios where the dynamics of a scene are influenced by its content.
arXiv Detail & Related papers (2024-08-10T17:04:39Z)
- Inducing Causal Structure for Abstractive Text Summarization [76.1000380429553]
We introduce a Structural Causal Model (SCM) to induce the underlying causal structure of the summarization data.
We propose a Causality Inspired Sequence-to-Sequence model (CI-Seq2Seq) to learn the causal representations that can mimic the causal factors.
Experimental results on two widely used text summarization datasets demonstrate the advantages of our approach.
arXiv Detail & Related papers (2023-08-24T16:06:36Z)
- A Semantic Approach to Decidability in Epistemic Planning (Extended Version) [72.77805489645604]
We use a novel semantic approach to achieve decidability.
Specifically, we augment the logic of knowledge S5$_n$ with an interaction axiom called (knowledge) commutativity.
We prove that our framework admits a finitary non-fixpoint characterization of common knowledge, which is of independent interest.
arXiv Detail & Related papers (2023-07-28T11:26:26Z)
- Quantifying Consistency and Information Loss for Causal Abstraction Learning [16.17846886492361]
We introduce a family of interventional measures that an agent may use to evaluate the trade-off between consistency and information loss.
We consider four measures suited for different tasks, analyze their properties, and propose algorithms to evaluate and learn causal abstractions.
arXiv Detail & Related papers (2023-05-07T19:10:28Z)
- Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations [62.65877150123775]
Causal abstraction is a promising theoretical framework for explainable artificial intelligence.
Existing causal abstraction methods require a brute-force search over alignments between the high-level model and the low-level one.
We present distributed alignment search (DAS), which overcomes these limitations.
arXiv Detail & Related papers (2023-03-05T00:57:49Z)
- Jointly Learning Consistent Causal Abstractions Over Multiple Interventional Distributions [8.767175335575386]
An abstraction can be used to relate two structural causal models representing the same system at different levels of resolution.
We introduce a first framework for causal abstraction learning between SCMs based on the formalization of abstraction recently proposed by Rischel.
arXiv Detail & Related papers (2023-01-14T11:22:16Z)
- Causal Abstraction with Soft Interventions [15.143508016472184]
Causal abstraction provides a theory describing how several causal models can represent the same system at different levels of detail.
We extend causal abstraction to "soft" interventions, which assign possibly non-constant functions to variables without adding new causal connections; a minimal illustration is sketched after this list.
arXiv Detail & Related papers (2022-11-22T13:42:43Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
- Invariant Causal Prediction for Block MDPs [106.63346115341862]
Generalization across environments is critical to the successful application of reinforcement learning algorithms to real-world challenges.
We propose a method of invariant prediction to learn model-irrelevance state abstractions (MISA) that generalize to novel observations in the multi-environment setting.
arXiv Detail & Related papers (2020-03-12T21:03:01Z)
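As referenced in the Causal Abstraction with Soft Interventions entry above, a soft intervention replaces a variable's mechanism with a possibly non-constant function of its existing parents, rather than clamping the variable to a constant. The sketch below contrasts the two; the two-variable model and the particular mechanisms are illustrative assumptions, not taken from that paper.

```python
# Minimal sketch contrasting a hard intervention with a "soft" intervention.
# The two-variable model and its mechanisms are illustrative assumptions.

def simulate(mechanism_for_y):
    """Tiny SCM: X is exogenous, Y is produced by the supplied mechanism."""
    x = 3.0
    return {"X": x, "Y": mechanism_for_y(x)}

observational = simulate(lambda x: 2.0 * x)  # original mechanism: Y = 2X
hard = simulate(lambda x: 7.0)               # hard intervention do(Y=7): the parent is ignored
soft = simulate(lambda x: x + 1.0)           # soft intervention: a new, non-constant function of
                                             # the same parent, so no causal connections are added
print(observational, hard, soft)
```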