Causal Abstraction with Soft Interventions
- URL: http://arxiv.org/abs/2211.12270v1
- Date: Tue, 22 Nov 2022 13:42:43 GMT
- Title: Causal Abstraction with Soft Interventions
- Authors: Riccardo Massidda, Atticus Geiger, Thomas Icard, Davide Bacciu
- Abstract summary: Causal abstraction provides a theory describing how several causal models can represent the same system at different levels of detail.
We extend causal abstraction to "soft" interventions, which assign possibly non-constant functions to variables without adding new causal connections.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Causal abstraction provides a theory describing how several causal models can
represent the same system at different levels of detail. Existing theoretical
proposals limit the analysis of abstract models to "hard" interventions fixing
causal variables to be constant values. In this work, we extend causal
abstraction to "soft" interventions, which assign possibly non-constant
functions to variables without adding new causal connections. Specifically, (i)
we generalize $\tau$-abstraction from Beckers and Halpern (2019) to soft
interventions, (ii) we propose a further definition of soft abstraction to
ensure a unique map $\omega$ between soft interventions, and (iii) we prove
that our constructive definition of soft abstraction guarantees the
intervention map $\omega$ has a specific and necessary explicit form.
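To make the hard/soft distinction concrete, here is a minimal sketch on a toy three-variable SCM. The model, variable names, and numbers are illustrative assumptions of ours, not taken from the paper: a hard intervention replaces a variable's mechanism with a constant, while a soft intervention replaces it with a non-constant function of the variable's existing parents, adding no new causal connections.

```python
# Toy SCM over Z -> X -> Y (with Z -> Y as well).
# Hard intervention: replace a mechanism with a constant.
# Soft intervention: replace it with a non-constant function of the
# variable's existing parents only (no new causal arrows).

def simulate(mechanisms, order):
    """Evaluate an SCM given per-variable mechanisms and a topological order."""
    values = {}
    for var in order:
        values[var] = mechanisms[var](values)
    return values

# Base model (illustrative): Z = 2, X = Z + 2, Y = 2X + Z.
base = {
    "Z": lambda v: 2.0,
    "X": lambda v: v["Z"] + 2.0,
    "Y": lambda v: 2.0 * v["X"] + v["Z"],
}
order = ["Z", "X", "Y"]

# Hard intervention do(X = 5): X no longer depends on its parent Z.
hard = dict(base, X=lambda v: 5.0)

# Soft intervention on X: a new, non-constant function of X's existing
# parent Z. The graph is unchanged; only the mechanism is replaced.
soft = dict(base, X=lambda v: 3.0 * v["Z"])

print(simulate(base, order)["Y"])  # 10.0
print(simulate(hard, order)["Y"])  # 12.0
print(simulate(soft, order)["Y"])  # 14.0
```

Note that the soft intervention keeps X causally responsive to Z, which is exactly what a hard intervention removes; the paper's intervention map $\omega$ relates soft interventions of this kind across levels of abstraction.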
Related papers
- Neural Causal Abstractions [63.21695740637627]
We develop a new family of causal abstractions by clustering variables and their domains.
We show that such abstractions are learnable in practical settings through Neural Causal Models.
Our experiments support the theory and illustrate how to scale causal inferences to high-dimensional settings involving image data.
arXiv Detail & Related papers (2024-01-05T02:00:27Z) - Quantifying Consistency and Information Loss for Causal Abstraction Learning [16.17846886492361]
We introduce a family of interventional measures that an agent may use to evaluate such a trade-off.
We consider four measures suited for different tasks, analyze their properties, and propose algorithms to evaluate and learn causal abstractions.
arXiv Detail & Related papers (2023-05-07T19:10:28Z) - A Robustness Analysis of Blind Source Separation [91.3755431537592]
Blind source separation (BSS) aims to recover an unobserved signal from its mixture $X=f(S)$ under the condition that the transformation $f$ is invertible but unknown.
We present a general framework for analysing such violations and quantifying their impact on the blind recovery of $S$ from $X$.
We show that the behaviour of a generic BSS solution under general deviations from its defining structural assumptions can be profitably analysed in the form of explicit continuity guarantees.
arXiv Detail & Related papers (2023-03-17T16:30:51Z) - Towards Computing an Optimal Abstraction for Structural Causal Models [16.17846886492361]
We focus on the problem of learning abstractions.
We suggest a concrete measure of information loss, and we illustrate its contribution to learning new abstractions.
arXiv Detail & Related papers (2022-08-01T14:35:57Z) - Abstraction between Structural Causal Models: A Review of Definitions and Properties [0.0]
Structural causal models (SCMs) are a widespread formalism to deal with causal systems.
This paper focuses on the formal properties of a map between SCMs, highlighting the different layers (structural, distributional) at which these properties may be enforced.
arXiv Detail & Related papers (2022-07-18T13:47:20Z) - Towards a Grounded Theory of Causation for Embodied AI [12.259552039796027]
Existing frameworks give no indication as to which behaviour policies or physical transformations of state space should count as interventions.
The framework sketched in this paper describes actions as transformations of state space, for instance induced by an agent running a policy.
This makes it possible to describe in a uniform way both transformations of the micro-state space and abstract models thereof.
arXiv Detail & Related papers (2022-06-28T12:56:43Z) - Diffusion models as plug-and-play priors [98.16404662526101]
We consider the problem of inferring high-dimensional data $\mathbf{x}$ in a model that consists of a prior $p(\mathbf{x})$ and an auxiliary constraint $c(\mathbf{x},\mathbf{y})$.
The structure of diffusion models allows us to perform approximate inference by iterating differentiation through the fixed denoising network enriched with different amounts of noise.
arXiv Detail & Related papers (2022-06-17T21:11:36Z) - Leveraging Unlabeled Data for Entity-Relation Extraction through Probabilistic Constraint Satisfaction [54.06292969184476]
We study the problem of entity-relation extraction in the presence of symbolic domain knowledge.
Our approach employs semantic loss which captures the precise meaning of a logical sentence.
With a focus on low-data regimes, we show that semantic loss outperforms the baselines by a wide margin.
arXiv Detail & Related papers (2021-03-20T00:16:29Z) - Constrained Abstractive Summarization: Preserving Factual Consistency with Constrained Generation [93.87095877617968]
We propose Constrained Abstractive Summarization (CAS), a general setup that preserves the factual consistency of abstractive summarization.
We adopt lexically constrained decoding, a technique generally applicable to autoregressive generative models, to fulfill CAS.
We observe gains of up to 13.8 ROUGE-2 points when only one manual constraint is used in interactive summarization.
arXiv Detail & Related papers (2020-10-24T00:27:44Z) - Efficient Intervention Design for Causal Discovery with Latents [30.721629140295178]
We consider recovering a causal graph in presence of latent variables, where we seek to minimize the cost of interventions used in the recovery process.
We consider two intervention cost models: (1) a linear cost model where the cost of an intervention on a subset of variables has a linear form, and (2) an identity cost model where the cost of an intervention is the same, regardless of what variables it is on.
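The two cost models described above can be sketched in a few lines. This is an illustrative reading of the abstract, not the paper's implementation; the variable names and per-variable weights are assumptions of ours.

```python
# Sketch of the two intervention cost models: linear vs. identity
# (names and weights are illustrative, not taken from the paper).

def linear_cost(subset, weights):
    """Linear model: the cost of intervening on a subset of variables
    is a sum of per-variable weights."""
    return sum(weights[v] for v in subset)

def identity_cost(subset):
    """Identity model: every non-empty intervention has the same unit
    cost, regardless of which variables it targets."""
    return 1 if subset else 0

weights = {"A": 3, "B": 1, "C": 2}
print(linear_cost({"A", "C"}, weights))  # 5
print(identity_cost({"A", "C"}))         # 1
```

Under the identity model, minimizing total cost amounts to minimizing the number of interventions; under the linear model, which variables each intervention targets also matters.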
arXiv Detail & Related papers (2020-05-24T12:53:48Z) - Invariant Causal Prediction for Block MDPs [106.63346115341862]
Generalization across environments is critical to the successful application of reinforcement learning algorithms to real-world challenges.
We propose a method of invariant prediction to learn model-irrelevance state abstractions (MISA) that generalize to novel observations in the multi-environment setting.
arXiv Detail & Related papers (2020-03-12T21:03:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.