Causal Graph Recovery in Neuroimaging through Answer Set Programming
- URL: http://arxiv.org/abs/2506.09286v1
- Date: Tue, 10 Jun 2025 22:51:30 GMT
- Title: Causal Graph Recovery in Neuroimaging through Answer Set Programming
- Authors: Mohammadsajad Abavisani, Kseniya Solovyeva, David Danks, Vince Calhoun, Sergey Plis
- Abstract summary: We use a constraint optimization approach, specifically answer set programming (ASP), to find the optimal set of answers. ASP not only identifies the most probable underlying graph, but also provides an equivalence class of possible graphs for expert selection. We show how our method can be used as a meta-approach on top of established methods to obtain, on average, a 12% improvement in F1 score.
- Score: 2.762303951933521
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning graphical causal structures from time series data presents significant challenges, especially when the measurement frequency does not match the causal timescale of the system. This often leads to a set of equally possible underlying causal graphs due to information loss from sub-sampling (i.e., not observing all possible states of the system throughout time). Our research addresses this challenge by incorporating the effects of sub-sampling in the derivation of causal graphs, resulting in more accurate and intuitive outcomes. We use a constraint optimization approach, specifically answer set programming (ASP), to find the optimal set of answers. ASP not only identifies the most probable underlying graph, but also provides an equivalence class of possible graphs for expert selection. In addition, using ASP allows us to leverage graph theory to further prune the set of possible solutions, yielding a smaller, more accurate answer set significantly faster than traditional approaches. We validate our approach on both simulated data and empirical structural brain connectivity, and demonstrate its superiority over established methods in these experiments. We further show how our method can be used as a meta-approach on top of established methods to obtain, on average, a 12% improvement in F1 score. In addition, we achieve state-of-the-art results in terms of precision and recall for reconstructing causal graphs from sub-sampled time series data. Finally, our method shows robustness to varying degrees of sub-sampling on realistic simulations, whereas other methods perform worse at higher rates of sub-sampling.
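For intuition, here is a minimal, hypothetical sketch of the ASP idea described above, written against the clingo Python API (an assumption; the abstract does not specify a solver interface). Candidate causal graphs are guessed as answer sets, a toy two-step-path constraint stands in for the paper's actual sub-sampling rules, and weak optimization plus model enumeration returns an equivalence class of optimal graphs. The predicates (`edge/2`, `observed_dep/2`, `two_path/2`), the constraint, and the solver options are illustrative only, not the authors' encoding.

```python
# Illustrative sketch: enumerate an equivalence class of candidate causal graphs
# with answer set programming via the clingo Python API.
# The toy encoding below is NOT the paper's actual rule set.
from clingo import Control

ASP_PROGRAM = """
node(1..4).
% guess a directed graph over the nodes
{ edge(X,Y) : node(X), node(Y), X != Y }.
% hypothetical dependency seen in the sub-sampled measurements
observed_dep(1,4).
% toy stand-in for sub-sampling constraints: the dependency must be
% mediated by a directed path of length two at the causal timescale
two_path(X,Y) :- edge(X,Z), edge(Z,Y), X != Y.
:- observed_dep(X,Y), not two_path(X,Y).
% prefer sparser explanations (stand-in for the paper's optimization)
#minimize { 1,X,Y : edge(X,Y) }.
#show edge/2.
"""

def enumerate_optimal_graphs():
    # "0" enumerates all models; optN keeps only provably optimal ones
    ctl = Control(["0", "--opt-mode=optN"])
    ctl.add("base", [], ASP_PROGRAM)
    ctl.ground([("base", [])])
    graphs = []

    def on_model(model):
        if model.optimality_proven:
            graphs.append(sorted(str(s) for s in model.symbols(shown=True)))

    ctl.solve(on_model=on_model)
    return graphs

if __name__ == "__main__":
    for i, edges in enumerate(enumerate_optimal_graphs(), start=1):
        print(f"candidate graph {i}: {edges}")
```

On this toy instance the solver returns two equally sparse candidates (a two-edge path from node 1 to node 4 via node 2 or via node 3), mirroring how the method hands an equivalence class of graphs to the expert rather than a single forced answer.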
Related papers
- Sample Efficient Bayesian Learning of Causal Graphs from Interventions [6.823521786512908]
This study considers a Bayesian approach for learning causal graphs with limited interventional samples.
We show theoretically that our proposed algorithm will return the true causal graph with high probability.
We present a case study showing how this algorithm could be modified to answer more general causal questions without learning the whole graph.
arXiv Detail & Related papers (2024-10-26T05:47:56Z)
- Scalable and Flexible Causal Discovery with an Efficient Test for Adjacency [48.769884734826974]
We build a scalable and flexible method to evaluate if two variables are adjacent in a causal graph.
The Differentiable Adjacency Test replaces an exponential number of tests with a provably equivalent relaxed problem.
We also build DAT-Graph, a graph learning method based on DAT that can learn from data with interventions.
arXiv Detail & Related papers (2024-06-13T14:39:40Z)
- Adaptive Online Experimental Design for Causal Discovery [9.447864414136905]
Causal discovery aims to uncover cause-and-effect relationships encoded in causal graphs.
We focus on data interventional efficiency and formalize causal discovery from the perspective of online learning.
We propose a track-and-stop causal discovery algorithm that adaptively selects interventions from the graph separating system.
arXiv Detail & Related papers (2024-05-19T13:26:33Z)
- Harnessing the Power of Large Language Model for Uncertainty Aware Graph Processing [24.685942503019948]
We introduce a novel approach that harnesses the power of a large language model (LLM) to provide a confidence score on the generated answer.
We experiment with our approach on two graph processing tasks: few-shot knowledge graph completion and graph classification.
Our confidence measure achieves an AUC of 0.8 or higher on seven out of the ten datasets in predicting the correctness of the answer generated by the LLM.
arXiv Detail & Related papers (2024-03-31T07:38:39Z)
- Graph Out-of-Distribution Generalization with Controllable Data Augmentation [51.17476258673232]
Graph Neural Networks (GNNs) have demonstrated extraordinary performance in classifying graph properties.
Due to the selection bias of training and testing data, distribution deviation is widespread.
We propose OOD calibration to measure the distribution deviation of virtual samples.
arXiv Detail & Related papers (2023-08-16T13:10:27Z)
- Graph Signal Sampling for Inductive One-Bit Matrix Completion: a Closed-form Solution [112.3443939502313]
We propose a unified graph signal sampling framework which enjoys the benefits of graph signal analysis and processing.
The key idea is to transform each user's ratings on the items to a function (signal) on the vertices of an item-item graph.
For the online setting, we develop a Bayesian extension, i.e., BGS-IMC, which considers continuous random Gaussian noise in the graph Fourier domain.
arXiv Detail & Related papers (2023-02-08T08:17:43Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-10-20T14:23:54Z)
- Learning Graphs from Smooth Signals under Moment Uncertainty [23.868075779606425]
We consider the problem of inferring the graph structure from a given set of graph signals.
Traditional graph learning models do not take this distributional uncertainty into account.
arXiv Detail & Related papers (2021-05-12T06:47:34Z)
- Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to improved clustering performance compared to state-of-the-art baselines.
We also derive a clustering algorithm for the learned representative graph.
arXiv Detail & Related papers (2020-10-29T09:58:02Z)
- Efficient Variational Bayesian Structure Learning of Dynamic Graphical Models [19.591265962713837]
Estimating time-varying graphical models is of paramount importance in various social, financial, biological, and engineering systems.
Existing methods require extensive tuning of parameters that control the graph sparsity and temporal smoothness.
We propose a low-complexity tuning-free Bayesian approach, named BADGE.
arXiv Detail & Related papers (2020-09-16T14:19:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.