Multi-granularity Causal Structure Learning
- URL: http://arxiv.org/abs/2312.05549v2
- Date: Tue, 12 Dec 2023 13:06:33 GMT
- Title: Multi-granularity Causal Structure Learning
- Authors: Jiaxuan Liang, Jun Wang, Guoxian Yu, Shuyin Xia, Guoyin Wang
- Abstract summary: We develop MgCSL (Multi-granularity Causal Structure Learning), which first leverages a sparse auto-encoder to explore coarse-graining strategies and causal abstractions.
MgCSL then takes multi-granularity variables as inputs to train multilayer perceptrons and to delve into the causality between variables.
Experimental results show that MgCSL outperforms competitive baselines and finds explainable causal connections on fMRI datasets.
- Score: 23.125497987255237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unveiling, modeling, and comprehending the causal mechanisms underpinning natural
phenomena stand as fundamental endeavors across myriad scientific disciplines.
Meanwhile, new knowledge emerges when discovering causal relationships from
data. Existing causal learning algorithms predominantly focus on the isolated
effects of variables, overlooking the intricate interplay of multiple variables
and their collective behavioral patterns. Furthermore, the ubiquity of
high-dimensional data exacts a substantial temporal cost for causal algorithms.
In this paper, we develop a novel method called MgCSL (Multi-granularity Causal
Structure Learning), which first leverages a sparse auto-encoder to explore
coarse-graining strategies and causal abstractions from micro-variables to
macro ones. MgCSL then takes multi-granularity variables as inputs to train
multilayer perceptrons and to delve into the causality between variables. To enhance
the efficacy on high-dimensional data, MgCSL introduces a simplified acyclicity
constraint to adeptly search the directed acyclic graph among variables.
Experimental results show that MgCSL outperforms competitive baselines and
finds explainable causal connections on fMRI datasets.
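The abstract names three ingredients: a sparse auto-encoder that coarse-grains micro-variables into macro-variables, per-variable multilayer perceptrons that score causal influence, and a simplified acyclicity constraint for DAG search. The sketch below is a minimal illustration of how such a pipeline could be wired together; it is not the authors' implementation. The class and variable names are invented for illustration, and the NOTEARS-style penalty h(A) = tr(exp(A∘A)) - d stands in for the paper's unspecified "simplified" acyclicity constraint.

```python
# Minimal sketch (not the MgCSL code): sparse auto-encoder coarse-graining,
# per-variable MLPs whose first-layer weights induce an adjacency matrix,
# and a NOTEARS-style acyclicity penalty. All names/hyperparameters are
# illustrative assumptions.
import torch
import torch.nn as nn

class SparseCoarseGrainer(nn.Module):
    """Auto-encoder whose bottleneck plays the role of macro-variables."""
    def __init__(self, n_micro: int, n_macro: int):
        super().__init__()
        self.encoder = nn.Linear(n_micro, n_macro)
        self.decoder = nn.Linear(n_macro, n_micro)

    def forward(self, x):
        z = torch.relu(self.encoder(x))   # macro-variables
        return z, self.decoder(z)         # reconstruction

class CausalMLP(nn.Module):
    """One MLP per target variable; first-layer column norms act as edge scores."""
    def __init__(self, n_vars: int, hidden: int = 16):
        super().__init__()
        self.first = nn.ModuleList([nn.Linear(n_vars, hidden) for _ in range(n_vars)])
        self.out = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_vars)])

    def adjacency(self):
        # A[j, i] ~ strength of input x_j in the MLP predicting x_i.
        cols = [layer.weight.norm(dim=0) for layer in self.first]
        return torch.stack(cols, dim=1)

    def forward(self, x):
        return torch.cat([o(torch.relu(f(x))) for f, o in zip(self.first, self.out)], dim=1)

def acyclicity(A: torch.Tensor) -> torch.Tensor:
    """NOTEARS-style h(A) = tr(exp(A*A)) - d; zero iff the weighted graph is a DAG."""
    return torch.matrix_exp(A * A).diagonal().sum() - A.shape[0]

# Illustrative training step on toy data: 20 micro-variables, 5 macro-variables.
x = torch.randn(256, 20)
grainer, mlps = SparseCoarseGrainer(20, 5), CausalMLP(20 + 5)
opt = torch.optim.Adam(list(grainer.parameters()) + list(mlps.parameters()), lr=1e-3)

z, recon = grainer(x)
inputs = torch.cat([x, z], dim=1)          # multi-granularity input
A = mlps.adjacency()
loss = (
    ((recon - x) ** 2).mean()                         # reconstruction
    + ((mlps(inputs) - inputs) ** 2).mean()           # self-prediction fit
    + 1e-2 * grainer.encoder.weight.abs().mean()      # sparsity on coarse-graining
    + 10.0 * acyclicity(A)                            # acyclicity penalty
)
opt.zero_grad()
loss.backward()
opt.step()
```

In practice the acyclicity penalty is usually enforced with an augmented-Lagrangian schedule rather than a fixed weight, and edges are read off the learned adjacency matrix after thresholding small entries; both choices are generic structure-learning conventions, not details taken from the paper.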
Related papers
- Targeted Cause Discovery with Data-Driven Learning [66.86881771339145]
We propose a novel machine learning approach for inferring causal variables of a target variable from observations.
We employ a neural network trained to identify causality through supervised learning on simulated data.
Empirical results demonstrate the effectiveness of our method in identifying causal relationships within large-scale gene regulatory networks.
arXiv Detail & Related papers (2024-08-29T02:21:11Z)
- Weakly-supervised causal discovery based on fuzzy knowledge and complex data complementarity [4.637772575470497]
We propose a novel weakly-supervised fuzzy knowledge and data co-driven causal discovery method named KEEL.
KEEL adopts a fuzzy causal knowledge schema to encapsulate diverse types of fuzzy knowledge, and forms corresponding weakened constraints.
It can enhance the generalization and robustness of causal discovery, especially in high-dimensional and small-sample scenarios.
arXiv Detail & Related papers (2024-05-14T15:39:22Z)
- ALCM: Autonomous LLM-Augmented Causal Discovery Framework [2.1470800327528843]
We introduce a new framework, named Autonomous LLM-Augmented Causal Discovery Framework (ALCM), to synergize data-driven causal discovery algorithms and Large Language Models.
The ALCM consists of three integral components: causal structure learning, causal wrapper, and LLM-driven causal refiner.
We evaluate the ALCM framework by implementing two demonstrations on seven well-known datasets.
arXiv Detail & Related papers (2024-05-02T21:27:45Z)
- Multi-modal Causal Structure Learning and Root Cause Analysis [67.67578590390907]
We propose Mulan, a unified multi-modal causal structure learning method for root cause localization.
We leverage a log-tailored language model to facilitate log representation learning, converting log sequences into time-series data.
We also introduce a novel key performance indicator-aware attention mechanism for assessing modality reliability and co-learning a final causal graph.
arXiv Detail & Related papers (2024-02-04T05:50:38Z)
- Targeted Reduction of Causal Models [55.11778726095353]
Causal Representation Learning offers a promising avenue to uncover interpretable causal patterns in simulations.
We introduce Targeted Causal Reduction (TCR), a method for condensing complex intervenable models into a concise set of causal factors.
Its ability to generate interpretable high-level explanations from complex models is demonstrated on toy and mechanical systems.
arXiv Detail & Related papers (2023-11-30T15:46:22Z)
- Causal disentanglement of multimodal data [1.589226862328831]
We introduce a causal representation learning algorithm (causalPIMA) that can use multimodal data and known physics to discover important features with causal relationships.
Our results demonstrate the capability of learning an interpretable causal structure while simultaneously discovering key features in a fully unsupervised setting.
arXiv Detail & Related papers (2023-10-27T20:30:11Z)
- Heteroscedastic Causal Structure Learning [2.566492438263125]
We tackle the heteroscedastic causal structure learning problem under Gaussian noises.
By exploiting the normality of the causal mechanisms, we can recover a valid causal ordering.
The result is HOST (Heteroscedastic causal STructure learning), a simple yet effective causal structure learning algorithm.
arXiv Detail & Related papers (2023-07-16T07:53:16Z)
- A Causal Framework for Decomposing Spurious Variations [68.12191782657437]
We develop tools for decomposing spurious variations in Markovian and Semi-Markovian models.
We prove the first results that allow a non-parametric decomposition of spurious effects.
The described approach has several applications, ranging from explainable and fair AI to questions in epidemiology and medicine.
arXiv Detail & Related papers (2023-06-08T09:40:28Z)
- Deep Learning of Causal Structures in High Dimensions [0.6021787236982659]
We propose a deep neural architecture for learning causal relationships between variables from a combination of empirical data and prior causal knowledge.
We combine convolutional and graph neural networks within a causal risk framework to provide a flexible and scalable approach.
arXiv Detail & Related papers (2022-12-09T14:12:47Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z)