Introducing Expertise Logic into Graph Representation Learning from A
Causal Perspective
- URL: http://arxiv.org/abs/2301.08496v2
- Date: Wed, 24 May 2023 03:22:49 GMT
- Title: Introducing Expertise Logic into Graph Representation Learning from A
Causal Perspective
- Authors: Hang Gao, Jiangmeng Li, Wenwen Qiang, Lingyu Si, Xingzhe Su, Fengge
Wu, Changwen Zheng, Fuchun Sun
- Abstract summary: We propose a novel graph representation learning method to incorporate human expert knowledge into GNN models.
The proposed method ensures that the GNN model can not only acquire the expertise held by human experts but also engage in end-to-end learning from datasets.
- Score: 19.6045119188211
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Benefiting from the injection of human prior knowledge, graphs, as derived
discrete data, are semantically dense, so models can efficiently learn
semantic information from such data. Accordingly, graph neural networks (GNNs)
have achieved impressive success in various fields. Revisiting GNN learning
paradigms, we find that the relationship between human expertise and the
knowledge modeled by GNNs remains unclear to researchers. To this end, we
present motivating experiments and derive the empirical observation that GNNs
gradually learn human expertise in general domains. By further observing the
ramifications of introducing expertise logic into graph representation
learning, we conclude that guiding GNNs to learn human expertise can improve
model performance. Hence, we propose a novel graph representation learning
method to incorporate human expert knowledge into GNN models. The proposed
method ensures that the GNN model not only acquires the expertise held by
human experts but also engages in end-to-end learning from datasets. Extensive
experiments on crafted and real-world domains consistently support the
effectiveness of the proposed method.
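The abstract describes the proposed method only at a high level: the GNN should keep learning end-to-end from the data while also absorbing the expertise held by human experts. As a minimal sketch of that general idea, and not the authors' actual formulation, the hypothetical PyTorch Geometric model below attaches an auxiliary "logic head" whose loss nudges the learned graph representation to agree with expert-provided rule labels; the head, the `expert_targets` tensor, and the weight `lambda_expert` are assumptions made purely for illustration.

```python
# Hypothetical sketch (not the paper's method): train a GNN end-to-end on the
# task while an auxiliary term aligns its representation with expert-derived
# rule labels.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class ExpertGuidedGNN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, num_classes, num_expert_rules):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, hid_dim)
        self.task_head = torch.nn.Linear(hid_dim, num_classes)
        # Auxiliary head predicting which expert rules each graph satisfies.
        self.logic_head = torch.nn.Linear(hid_dim, num_expert_rules)

    def forward(self, x, edge_index, batch):
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        g = global_mean_pool(h, batch)              # graph-level embedding
        return self.task_head(g), self.logic_head(g)

def training_loss(model, data, expert_targets, lambda_expert=0.5):
    """expert_targets: float 0/1 rule indicators, [num_graphs, num_rules] (assumed)."""
    task_logits, logic_logits = model(data.x, data.edge_index, data.batch)
    task_loss = F.cross_entropy(task_logits, data.y)
    # Encourage the shared representation to also encode the expertise logic.
    expert_loss = F.binary_cross_entropy_with_logits(logic_logits, expert_targets)
    return task_loss + lambda_expert * expert_loss
```

In practice the expert signal could just as well enter through constraints on intermediate representations or on the graph structure itself; the sketch only shows one way an expertise term can be optimized jointly with the task loss.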
Related papers
- A Self-guided Multimodal Approach to Enhancing Graph Representation Learning for Alzheimer's Diseases [45.59286036227576]
Graph neural networks (GNNs) are powerful machine learning models designed to handle irregularly structured data.
This paper presents a self-guided, knowledge-infused multimodal GNN that autonomously incorporates domain knowledge into the model development process.
Our approach conceptualizes domain knowledge as natural language and introduces a specialized multimodal GNN capable of leveraging this uncurated knowledge.
arXiv Detail & Related papers (2024-12-09T05:16:32Z) - Stealing Training Graphs from Graph Neural Networks [54.52392250297907]
Graph Neural Networks (GNNs) have shown promising results in modeling graphs in various tasks.
As neural networks can memorize the training samples, the model parameters of GNNs have a high risk of leaking private training data.
We investigate a novel problem of stealing graphs from trained GNNs.
arXiv Detail & Related papers (2024-11-17T23:15:36Z) - Rethinking Causal Relationships Learning in Graph Neural Networks [24.7962807148905]
We introduce a lightweight and adaptable GNN module designed to strengthen GNNs' causal learning capabilities.
We empirically validate the effectiveness of the proposed module.
arXiv Detail & Related papers (2023-12-15T08:54:32Z) - Exploring Causal Learning through Graph Neural Networks: An In-depth
Review [12.936700685252145]
We introduce a novel taxonomy that encompasses various state-of-the-art GNN methods employed in studying causality.
GNNs are further categorized based on their applications in the causality domain.
This review also touches upon the application of causal learning across diverse sectors.
arXiv Detail & Related papers (2023-11-25T10:46:06Z) - Label Deconvolution for Node Representation Learning on Large-scale
Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias via a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z) - Information Flow in Graph Neural Networks: A Clinical Triage Use Case [49.86931948849343]
Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs.
We investigate how the flow of embedding information within GNNs affects the prediction of links in Knowledge Graphs (KGs).
Our results demonstrate that incorporating domain knowledge into the GNN connectivity leads to better performance than using the same connectivity as the KG or allowing unconstrained embedding propagation (a toy sketch of expert-constrained connectivity appears after this list).
arXiv Detail & Related papers (2023-09-12T09:18:12Z) - A Survey on Explainability of Graph Neural Networks [4.612101932762187]
Graph neural networks (GNNs) are powerful graph-based deep-learning models.
This survey aims to provide a comprehensive overview of the existing explainability techniques for GNNs.
arXiv Detail & Related papers (2023-06-02T23:36:49Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that previous methods overlook.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - Data-Free Adversarial Knowledge Distillation for Graph Neural Networks [62.71646916191515]
We propose the first end-to-end framework for data-free adversarial knowledge distillation on graph-structured data (DFAD-GNN).
Specifically, DFAD-GNN employs a generative adversarial network with three components: a pre-trained teacher model and a student model act as two discriminators, while a generator produces training graphs used to distill knowledge from the teacher into the student.
Our DFAD-GNN significantly surpasses state-of-the-art data-free baselines in the graph classification task.
arXiv Detail & Related papers (2022-05-08T08:19:40Z) - Investigating Transfer Learning in Graph Neural Networks [2.320417845168326]
Graph neural networks (GNNs) build on the success of deep learning models by extending them for use in graph spaces.
Transfer learning has proven extremely successful for traditional deep learning problems, yielding faster training and improved performance.
This research demonstrates that transfer learning is effective with GNNs, and describes how source tasks and the choice of GNN impact the ability to learn generalisable knowledge.
arXiv Detail & Related papers (2022-02-01T20:33:15Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
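A recurring theme in the entries above, made explicit in the clinical-triage study on information flow, is that expert knowledge can also be injected through the connectivity handed to the GNN rather than through the loss. The toy sketch below illustrates that idea and is not any cited paper's implementation: it keeps only edges whose relation type a (hypothetical) domain expert has marked as informative before running a standard graph convolution, with `allowed_types` as an assumed interface for that expert input.

```python
# Toy illustration: constrain message passing to an expert-approved edge set.
import torch
from torch_geometric.nn import GCNConv

def expert_filtered_edges(edge_index, edge_type, allowed_types):
    """Keep edges whose relation type is in the expert-approved set (assumed interface)."""
    allowed = torch.tensor(sorted(allowed_types), dtype=edge_type.dtype)
    mask = torch.isin(edge_type, allowed)
    return edge_index[:, mask]

class ExpertConnectivityGNN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, allowed_types):
        super().__init__()
        self.allowed_types = allowed_types
        self.conv = GCNConv(in_dim, hid_dim)

    def forward(self, x, edge_index, edge_type):
        # Message passing runs over the expert-constrained edge set only.
        kept = expert_filtered_edges(edge_index, edge_type, self.allowed_types)
        return self.conv(x, kept)
```

Whether to drop, down-weight, or re-wire edges is a design choice; the cited papers address it in their own, more sophisticated ways.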