Learning Consistent Causal Abstraction Networks
- URL: http://arxiv.org/abs/2602.02623v1
- Date: Mon, 02 Feb 2026 16:16:29 GMT
- Title: Learning Consistent Causal Abstraction Networks
- Authors: Gabriele D'Acunto, Paolo Di Lorenzo, Sergio Barbarossa
- Abstract summary: Causal artificial intelligence aims to enhance explainability, robustness, and trustworthiness in AI by leveraging structural causal models (SCMs). We tackle the learning of the consistent causal abstraction network (CAN). Experiments show competitive learning on synthetic data, and successful recovery of diverse CAN structures.
- Score: 14.952578725545344
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal artificial intelligence aims to enhance explainability, trustworthiness, and robustness in AI by leveraging structural causal models (SCMs). In this pursuit, recent advances formalize network sheaves and cosheaves of causal knowledge. Pushing in the same direction, we tackle the learning of consistent causal abstraction networks (CANs), a sheaf-theoretic framework where (i) SCMs are Gaussian, (ii) restriction maps are transposes of constructive linear causal abstractions (CAs) adhering to the semantic embedding principle, and (iii) edge stalks correspond, up to permutation, to the node stalks of more detailed SCMs. Our problem formulation separates into edge-specific local Riemannian problems and avoids nonconvex objectives. We propose an efficient search procedure, solving the local problems with SPECTRAL, our iterative method with closed-form updates, suitable for positive definite and semidefinite covariance matrices. Experiments on synthetic data show competitive performance in the CA learning task, and successful recovery of diverse CAN structures.
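In the Gaussian setting described above, a linear CA acts as a map T from low-level to high-level variables, and the abstracted Gaussian SCM inherits the pushforward covariance T Sigma_L T^T. The sketch below illustrates only this basic building block, not the paper's SPECTRAL method or the sheaf-level consistency conditions; the specific SCM coefficients and the pair-averaging abstraction map are hypothetical choices for illustration.

```python
import numpy as np

# Low-level linear Gaussian SCM over 4 variables:
# x = A x + eps  with  eps ~ N(0, D),  so  Sigma_L = (I - A)^{-1} D (I - A)^{-T}
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.8, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.3, 0.0]])
D = np.diag([1.0, 0.5, 0.5, 1.0])          # exogenous noise variances
B = np.linalg.inv(np.eye(4) - A)
Sigma_L = B @ D @ B.T

# Hypothetical constructive linear abstraction: average disjoint pairs of
# low-level variables into 2 high-level variables.
T = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5]])

# Pushforward: the abstracted Gaussian has covariance T Sigma_L T^T.
Sigma_H = T @ Sigma_L @ T.T
print(Sigma_H)
```

Because T has full row rank and Sigma_L is positive definite, Sigma_H is again a valid (positive definite) covariance, which is what makes the Gaussian case analytically convenient.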
Related papers
- Structure-Aware Robust Counterfactual Explanations via Conditional Gaussian Network Classifiers [0.26999000177990923]
This work presents a structure-aware robust counterfactual search method based on conditional Gaussian network classifiers. Results show that the method achieves strong consistency, with direct optimization of the original formulation providing especially stable dependencies. The proposed framework lays the groundwork for future advances in counterfactual reasoning under noncyclic constraints.
arXiv Detail & Related papers (2026-02-08T15:51:45Z) - CoT-Seg: Rethinking Segmentation with Chain-of-Thought Reasoning and Self-Correction [50.67483317563736]
This paper explores a system that can think step-by-step, look up information if needed, generate results, self-evaluate, and refine its results. We introduce CoT-Seg, a training-free framework that rethinks reasoning segmentation by combining chain-of-thought reasoning with self-correction.
arXiv Detail & Related papers (2026-01-24T11:41:54Z) - The Causal Abstraction Network: Theory and Learning [14.952578725545344]
Causal artificial intelligence aims to enhance explainability, robustness, and trustworthiness in AI by leveraging structural causal models (SCMs). Recent advances formalize network sheaves of causal knowledge. We introduce the causal abstraction network (CAN), a specific instance of such sheaves where (i) SCMs are Gaussian and (ii) restriction maps are transposes of constructive linear causal abstractions.
arXiv Detail & Related papers (2025-09-25T07:48:25Z) - Causal Abstraction Learning based on the Semantic Embedding Principle [8.867171632530908]
Structural causal models (SCMs) allow us to investigate complex systems at multiple levels of resolution. We present a category-theoretic approach to SCMs that enables the learning of a CA by finding a morphism between the low- and high-level measures.
arXiv Detail & Related papers (2025-02-01T11:54:44Z) - Causal Order Discovery based on Monotonic SCMs [5.47587439763942]
We introduce a novel sequential procedure that directly identifies the causal order by iteratively detecting the root variable.
This method eliminates the need for sparsity assumptions and the associated optimization challenges.
We demonstrate the effectiveness of our approach in sequentially finding the root variable, comparing it to methods that maximize Jacobian sparsity.
arXiv Detail & Related papers (2024-10-24T03:15:11Z) - Semantic Loss Functions for Neuro-Symbolic Structured Prediction [74.18322585177832]
We discuss the semantic loss, which injects knowledge about such structure, defined symbolically, into training.
It is agnostic to the arrangement of the symbols, and depends only on the semantics expressed thereby.
It can be combined with both discriminative and generative neural models.
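The semantic loss summarized above is the negative log-probability that an assignment sampled from the model's independent output probabilities satisfies the symbolic constraint. A minimal sketch for the classic "exactly one variable is true" constraint, where the satisfying assignments are just the one-hot vectors (the constraint choice and function name are illustrative, not from the paper):

```python
import math

def semantic_loss_exactly_one(probs):
    """Semantic loss for the 'exactly one variable is true' constraint.

    Treats each probability as an independent Bernoulli, sums the
    probability mass of all satisfying assignments (the one-hot vectors),
    and returns the negative log of that mass.
    """
    sat = 0.0
    for i, p_i in enumerate(probs):
        term = p_i
        for j, p_j in enumerate(probs):
            if j != i:
                term *= (1.0 - p_j)
        sat += term
    return -math.log(sat)

# A near-one-hot prediction incurs low loss; a uniform one is penalized more.
print(semantic_loss_exactly_one([0.9, 0.05, 0.05]))  # small
print(semantic_loss_exactly_one([1/3, 1/3, 1/3]))    # larger
```

Note the loss depends only on which assignments satisfy the constraint, not on how the constraint is written, matching the claim that it is agnostic to the arrangement of the symbols.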
arXiv Detail & Related papers (2024-05-12T22:18:25Z) - Multi-modal Causal Structure Learning and Root Cause Analysis [67.67578590390907]
We propose Mulan, a unified multi-modal causal structure learning method for root cause localization.
We leverage a log-tailored language model to facilitate log representation learning, converting log sequences into time-series data.
We also introduce a novel key performance indicator-aware attention mechanism for assessing modality reliability and co-learning a final causal graph.
arXiv Detail & Related papers (2024-02-04T05:50:38Z) - Causal Optimal Transport of Abstractions [8.642152250082368]
Causal abstraction (CA) theory establishes formal criteria for relating multiple structural causal models (SCMs) at different levels of granularity.
We propose COTA, the first method to learn abstraction maps from observational and interventional data without assuming complete knowledge of the underlying SCMs.
We extensively evaluate COTA on synthetic and real world problems, and showcase its advantages over non-causal, independent and aggregated COTA formulations.
arXiv Detail & Related papers (2023-12-13T12:54:34Z) - Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning [53.68371566336254]
We argue that the key to better performance lies in meaningful latent modality structures instead of perfect modality alignment.
Specifically, we design 1) a deep feature separation loss for intra-modality regularization; 2) a Brownian-bridge loss for inter-modality regularization; and 3) a geometric consistency loss for both intra- and inter-modality regularization.
arXiv Detail & Related papers (2023-03-10T14:38:49Z) - Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z) - Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.