Exploring Causal Learning through Graph Neural Networks: An In-depth
Review
- URL: http://arxiv.org/abs/2311.14994v1
- Date: Sat, 25 Nov 2023 10:46:06 GMT
- Title: Exploring Causal Learning through Graph Neural Networks: An In-depth
Review
- Authors: Simi Job, Xiaohui Tao, Taotao Cai, Haoran Xie, Lin Li, Jianming Yong
and Qing Li
- Abstract summary: We introduce a novel taxonomy that encompasses various state-of-the-art GNN methods employed in studying causality.
GNNs are further categorized based on their applications in the causality domain.
This review also touches upon the application of causal learning across diverse sectors.
- Score: 12.936700685252145
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In machine learning, exploring data correlations to predict outcomes is a
fundamental task. Recognizing causal relationships embedded within data is
pivotal for a comprehensive understanding of system dynamics, the significance
of which is paramount in data-driven decision-making processes. Beyond
traditional methods, there has been a surge in the use of graph neural networks
(GNNs) for causal learning, given their capabilities as universal data
approximators. Thus, a thorough review of the advancements in causal learning
using GNNs is both relevant and timely. To structure this review, we introduce
a novel taxonomy that encompasses various state-of-the-art GNN methods employed
in studying causality. GNNs are further categorized based on their applications
in the causality domain. We further provide an exhaustive compilation of
datasets integral to causal learning with GNNs to serve as a resource for
practical study. This review also touches upon the application of causal
learning across diverse sectors. We conclude the review with insights into
potential challenges and promising avenues for future exploration in this
rapidly evolving field of machine learning.
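For context on the models this review surveys, here is a minimal sketch of a single message-passing (GCN-style) GNN layer in plain PyTorch. It is purely illustrative: the class name `SimpleGNNLayer`, the toy graph, and the normalisation choice are assumptions made for this sketch, not code from the review or from any surveyed method.

```python
import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    """One round of neighbourhood aggregation: H' = ReLU(D^{-1}(A + I) H W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Add self-loops so each node keeps its own features during aggregation.
        adj_hat = adj + torch.eye(adj.size(0), device=adj.device)
        # Row-normalise so each node averages over itself and its neighbours.
        deg = adj_hat.sum(dim=1, keepdim=True)
        h = (adj_hat / deg) @ x
        return torch.relu(self.linear(h))

# Toy usage: 4 nodes with 3 features each on a small undirected graph.
x = torch.randn(4, 3)
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 1.],
                    [0., 1., 0., 0.],
                    [0., 1., 0., 0.]])
layer = SimpleGNNLayer(in_dim=3, out_dim=8)
print(layer(x, adj).shape)  # torch.Size([4, 8])
```

Stacking such layers lets node representations aggregate information from multi-hop neighbourhoods, which is the representational capability that the causal-learning methods surveyed in the review build on.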
Related papers
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- Towards Causal Classification: A Comprehensive Study on Graph Neural Networks [9.360596957822471]
Graph Neural Networks (GNNs), designed for processing graph-structured data, have expanded the potential for causal analysis.
Our study delves into nine benchmark graph classification models, testing their strength and versatility across seven datasets.
Our findings are instrumental in furthering the understanding and practical application of GNNs in diverse data-centric fields.
arXiv Detail & Related papers (2024-01-27T15:35:05Z)
- Graph Neural Networks for Tabular Data Learning: A Survey with Taxonomy and Directions [10.753191494611892]
We dive into Tabular Data Learning (TDL) using Graph Neural Networks (GNNs).
GNNs have garnered significant interest and application across various Tabular Data Learning domains.
This survey serves as a resource for researchers and practitioners, offering a thorough understanding of GNNs' role in revolutionizing TDL.
arXiv Detail & Related papers (2024-01-04T08:49:10Z)
- When Graph Neural Network Meets Causality: Opportunities, Methodologies and An Outlook [23.45046265345568]
Graph Neural Networks (GNNs) have emerged as powerful representation learning tools for capturing complex dependencies within diverse graph-structured data.
However, GNNs have also raised serious concerns regarding their trustworthiness, including susceptibility to distribution shift, biases towards certain populations, and a lack of explainability.
Integrating causal learning techniques into GNNs has sparked numerous ground-breaking studies, since many GNN trustworthiness issues can be alleviated in this way.
arXiv Detail & Related papers (2023-12-19T13:26:14Z)
- Rethinking Causal Relationships Learning in Graph Neural Networks [24.7962807148905]
We introduce a lightweight and adaptable GNN module designed to strengthen GNNs' causal learning capabilities.
We empirically validate the effectiveness of the proposed module.
arXiv Detail & Related papers (2023-12-15T08:54:32Z)
- Information Flow in Graph Neural Networks: A Clinical Triage Use Case [49.86931948849343]
Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs.
We investigate how the flow of embedding information within GNNs affects the prediction of links in Knowledge Graphs (KGs).
Our results demonstrate that incorporating domain knowledge into the GNN connectivity leads to better performance than using the same connectivity as the KG or allowing unconstrained embedding propagation.
arXiv Detail & Related papers (2023-09-12T09:18:12Z)
- A Survey on Explainability of Graph Neural Networks [4.612101932762187]
Graph neural networks (GNNs) are powerful graph-based deep-learning models.
This survey aims to provide a comprehensive overview of the existing explainability techniques for GNNs.
arXiv Detail & Related papers (2023-06-02T23:36:49Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods (a toy contribution-tracking sketch, illustrating the general idea only, appears after this list).
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods (a toy sketch of jointly learned inducing points appears after this list).
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- Neural Networks Enhancement with Logical Knowledge [83.9217787335878]
We propose an extension of KENN for relational data.
The results show that KENN is capable of increasing the performance of the underlying neural network even in the presence of relational data.
arXiv Detail & Related papers (2020-09-13T21:12:20Z)
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z)
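The DEGREE entry above describes tracking the contributions of specific components of the input graph to a GNN's final prediction. As a rough illustration of that general idea only, the sketch below uses simple occlusion: zero out a node's features and measure the change in a toy graph-level score. This is a stand-in for explanation-by-attribution in general, not DEGREE's decomposition of the message-passing mechanism; the toy graph, weights, and `graph_score` function are assumptions for the sketch.

```python
import torch

torch.manual_seed(0)
x = torch.randn(4, 3)                      # 4 nodes, 3 features each
adj = torch.tensor([[1., 1., 0., 0.],      # adjacency with self-loops
                    [1., 1., 1., 1.],
                    [0., 1., 1., 0.],
                    [0., 1., 0., 1.]])
w = torch.randn(3, 1)                      # toy readout weights

def graph_score(feats: torch.Tensor) -> torch.Tensor:
    """Mean-aggregate neighbours, then pool all nodes into one scalar score."""
    h = (adj / adj.sum(dim=1, keepdim=True)) @ feats @ w
    return h.mean()

base = graph_score(x)
for v in range(x.size(0)):
    occluded = x.clone()
    occluded[v] = 0.0                      # remove node v's features
    delta = (base - graph_score(occluded)).item()
    print(f"node {v} contribution (occlusion estimate): {delta:+.4f}")
```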
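The "Inducing Gaussian Process Networks" entry above describes learning the feature space and the inducing points jointly, with the inducing points living directly in the learned feature space. The sketch below captures only that idea: a small feature map, inducing locations held as trainable parameters in feature space, and an RBF kernel expansion fitted by gradient descent. The predictor, the squared loss, and all names (`ToyIGN`, `phi`, `z`, `alpha`) are assumptions for illustration, not the paper's actual IGN objective or inference scheme.

```python
import torch
import torch.nn as nn

class ToyIGN(nn.Module):
    def __init__(self, in_dim: int, feat_dim: int, num_inducing: int):
        super().__init__()
        # Learned feature map from inputs into a small feature space.
        self.phi = nn.Sequential(nn.Linear(in_dim, 32), nn.Tanh(),
                                 nn.Linear(32, feat_dim))
        # Inducing points parameterised directly in the feature space.
        self.z = nn.Parameter(torch.randn(num_inducing, feat_dim))
        # Weights of the kernel expansion over the inducing points.
        self.alpha = nn.Parameter(torch.zeros(num_inducing))

    def kernel(self, a: torch.Tensor, b: torch.Tensor, lengthscale: float = 1.0):
        # RBF kernel between rows of a and rows of b.
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * lengthscale ** 2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k = self.kernel(self.phi(x), self.z)   # (n, m) cross-covariance
        return k @ self.alpha                  # kernel-expansion prediction

# Toy regression: fit y = sin(3x), training features, inducing points and
# expansion weights jointly with gradient descent.
x = torch.linspace(-1, 1, 128).unsqueeze(1)
y = torch.sin(3 * x).squeeze(1)
model = ToyIGN(in_dim=1, feat_dim=4, num_inducing=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4f}")
```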
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.