Jointly Learning Consistent Causal Abstractions Over Multiple
Interventional Distributions
- URL: http://arxiv.org/abs/2301.05893v2
- Date: Sun, 7 May 2023 19:10:47 GMT
- Title: Jointly Learning Consistent Causal Abstractions Over Multiple
Interventional Distributions
- Authors: Fabio Massimo Zennaro, Máté Drávucz, Geanina Apachitei, W.
Dhammika Widanage, Theodoros Damoulas
- Abstract summary: An abstraction can be used to relate two structural causal models representing the same system at different levels of resolution.
We introduce a first framework for causal abstraction learning between SCMs based on the formalization of abstraction recently proposed by Rischel.
- Score: 8.767175335575386
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An abstraction can be used to relate two structural causal models
representing the same system at different levels of resolution. Learning
abstractions which guarantee consistency with respect to interventional
distributions would allow one to jointly reason about evidence across multiple
levels of granularity while respecting the underlying cause-effect
relationships. In this paper, we introduce a first framework for causal
abstraction learning between SCMs based on the formalization of abstraction
recently proposed by Rischel (2020). Based on that, we propose a differentiable
programming solution that jointly solves a number of combinatorial
sub-problems, and we study its performance and benefits against independent and
sequential approaches on synthetic settings and on a challenging real-world
problem related to electric vehicle battery manufacturing.
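The joint learning problem sketched in the abstract — finding an abstraction map that pushes low-level interventional distributions onto high-level ones, consistently across all interventions at once — can be illustrated with a minimal differentiable-programming sketch. This is not the authors' implementation: the softmax relaxation of the outcome map, the KL-based matching loss, and all function names here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (assumed, not the paper's code): learn a row-stochastic
# abstraction map T between a low-level and a high-level outcome space by
# jointly matching several interventional distributions, via a softmax
# relaxation of the (combinatorial) surjective map and plain gradient descent.

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def learn_abstraction(low_dists, high_dists, n_low, n_high, steps=3000, lr=0.5):
    """Jointly fit T so that p_low @ T matches p_high for every intervention."""
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(n_low, n_high))
    for _ in range(steps):
        T = softmax(logits, axis=1)                  # relaxed outcome map
        grad = np.zeros_like(logits)
        for p_low, p_high in zip(low_dists, high_dists):
            q = p_low @ T                            # pushforward of low-level dist
            dq = -p_high / np.clip(q, 1e-12, None)   # d KL(p_high || q) / dq
            dT = np.outer(p_low, dq)                 # chain rule through q = p_low @ T
            # backprop through the row-wise softmax
            grad += T * (dT - (T * dT).sum(axis=1, keepdims=True))
        logits -= lr * grad / len(low_dists)
    return softmax(logits, axis=1)

# Example: four low-level outcomes abstracted to two high-level ones,
# observed under two hypothetical interventions.
p1, h1 = np.array([0.4, 0.3, 0.2, 0.1]), np.array([0.7, 0.3])
p2, h2 = np.array([0.1, 0.1, 0.5, 0.3]), np.array([0.2, 0.8])
T = learn_abstraction([p1, p2], [h1, h2], n_low=4, n_high=2)
```

Learning jointly over both interventions, rather than fitting each in isolation, is what forces a single map to stay consistent across all interventional distributions — the key point of the paper's joint formulation over independent or sequential approaches.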
Related papers
- Independence Constrained Disentangled Representation Learning from Epistemological Perspective [13.51102815877287]
Disentangled Representation Learning aims to improve the explainability of deep learning methods by training a data encoder that identifies semantically meaningful latent variables in the data generation process.
There is no consensus regarding the objective of disentangled representation learning.
We propose a novel method for disentangled representation learning by employing an integration of mutual information constraint and independence constraint.
arXiv Detail & Related papers (2024-09-04T13:00:59Z)
- InterHandGen: Two-Hand Interaction Generation via Cascaded Reverse Diffusion [53.90516061351706]
We present InterHandGen, a novel framework that learns the generative prior of two-hand interaction.
For sampling, we combine anti-penetration and synthesis-free guidance to enable plausible generation.
Our method significantly outperforms baseline generative models in terms of plausibility and diversity.
arXiv Detail & Related papers (2024-03-26T06:35:55Z)
- Causal Optimal Transport of Abstractions [8.642152250082368]
Causal abstraction (CA) theory establishes formal criteria for relating multiple structural causal models (SCMs) at different levels of granularity.
We propose COTA, the first method to learn abstraction maps from observational and interventional data without assuming complete knowledge of the underlying SCMs.
We extensively evaluate COTA on synthetic and real world problems, and showcase its advantages over non-causal, independent and aggregated COTA formulations.
arXiv Detail & Related papers (2023-12-13T12:54:34Z)
- Inducing Causal Structure for Abstractive Text Summarization [76.1000380429553]
We introduce a Structural Causal Model (SCM) to induce the underlying causal structure of the summarization data.
We propose a Causality Inspired Sequence-to-Sequence model (CI-Seq2Seq) to learn the causal representations that can mimic the causal factors.
Experimental results on two widely used text summarization datasets demonstrate the advantages of our approach.
arXiv Detail & Related papers (2023-08-24T16:06:36Z)
- Similarity-based Memory Enhanced Joint Entity and Relation Extraction [3.9659135716762894]
Document-level joint entity and relation extraction is a challenging information extraction problem.
We present a multi-task learning framework with bidirectional memory-like dependency between tasks.
Our empirical studies show that the proposed approach outperforms the existing methods.
arXiv Detail & Related papers (2023-07-14T12:26:56Z)
- Quantifying Consistency and Information Loss for Causal Abstraction Learning [16.17846886492361]
We introduce a family of interventional measures that an agent may use to evaluate such a trade-off.
We consider four measures suited for different tasks, analyze their properties, and propose algorithms to evaluate and learn causal abstractions.
arXiv Detail & Related papers (2023-05-07T19:10:28Z)
- Weakly Supervised Semantic Segmentation via Alternative Self-Dual Teaching [82.71578668091914]
This paper establishes a compact learning framework that embeds the classification and mask-refinement components into a unified deep model.
We propose a novel alternative self-dual teaching (ASDT) mechanism to encourage high-quality knowledge interaction.
arXiv Detail & Related papers (2021-12-17T11:56:56Z)
- Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis [96.46952672172021]
Bi-Bimodal Fusion Network (BBFN) is a novel end-to-end network that performs fusion on pairwise modality representations.
The model takes two bimodal pairs as input due to the known information imbalance among modalities.
arXiv Detail & Related papers (2021-07-28T23:33:42Z)
- Weakly Supervised Disentangled Generative Causal Representation Learning [21.392372783459013]
We show that previous methods with independent priors fail to disentangle causally related factors even under supervision.
We propose a new disentangled learning method that enables causal controllable generation and causal representation learning.
arXiv Detail & Related papers (2020-10-06T11:38:41Z)
- Invariant Causal Prediction for Block MDPs [106.63346115341862]
Generalization across environments is critical to the successful application of reinforcement learning algorithms to real-world challenges.
We propose a method of invariant prediction to learn model-irrelevance state abstractions (MISA) that generalize to novel observations in the multi-environment setting.
arXiv Detail & Related papers (2020-03-12T21:03:01Z)
- A Critical View of the Structural Causal Model [89.43277111586258]
We show that one can identify the cause and the effect without considering their interaction at all.
We propose a new adversarial training method that mimics the disentangled structure of the causal model.
Our multidimensional method outperforms the literature methods on both synthetic and real world datasets.
arXiv Detail & Related papers (2020-02-23T22:52:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the generated summaries (including all information) and is not responsible for any consequences of their use.