Learning Strong Graph Neural Networks with Weak Information
- URL: http://arxiv.org/abs/2305.18457v1
- Date: Mon, 29 May 2023 04:51:09 GMT
- Title: Learning Strong Graph Neural Networks with Weak Information
- Authors: Yixin Liu, Kaize Ding, Jianling Wang, Vincent Lee, Huan Liu, Shirui
Pan
- Abstract summary: We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D$^2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure, but also on a global graph that encodes global semantic similarities.
- Score: 64.64996100343602
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have exhibited impressive performance in many
graph learning tasks. Nevertheless, the performance of GNNs can deteriorate
when the input graph data suffer from weak information, i.e., incomplete
structure, incomplete features, and insufficient labels. Most prior studies,
which attempt to learn from the graph data with a specific type of weak
information, are far from effective in dealing with the scenario where diverse
data deficiencies exist and mutually affect each other. To fill the gap, in
this paper, we aim to develop an effective and principled approach to the
problem of graph learning with weak information (GLWI). Based on the findings
from our empirical analysis, we derive two design focal points for solving the
problem of GLWI, i.e., enabling long-range propagation in GNNs and allowing
information propagation to those stray nodes isolated from the largest
connected component. Accordingly, we propose D$^2$PT, a dual-channel GNN
framework that performs long-range information propagation not only on the
input graph with incomplete structure, but also on a global graph that encodes
global semantic similarities. We further develop a prototype contrastive
alignment algorithm that aligns the class-level prototypes learned from two
channels, such that the two different information propagation processes can
mutually benefit from each other and the finally learned model can well handle
the GLWI problem. Extensive experiments on eight real-world benchmark datasets
demonstrate the effectiveness and efficiency of our proposed methods in various
GLWI scenarios.
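As a concrete illustration of the ideas above, the following minimal PyTorch sketch shows one way the two propagation channels and the prototype alignment could be wired together. It is a hypothetical sketch, not the authors' released code: the APPNP-style propagation, the kNN construction of the global graph, and all function names are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A = A + torch.eye(A.size(0))
    d = A.sum(dim=1).pow(-0.5)
    return d.unsqueeze(1) * A * d.unsqueeze(0)

def long_range_propagate(A_hat, X, hops=10, alpha=0.1):
    # APPNP-style personalized-PageRank propagation: many hops give the
    # long-range information flow the paper argues GLWI needs
    H = X
    for _ in range(hops):
        H = (1 - alpha) * (A_hat @ H) + alpha * X
    return H

def knn_graph(X, k=5):
    # Global semantic graph: link each node to its k most cosine-similar
    # nodes, so stray nodes outside the largest component still receive messages
    Z = F.normalize(X, dim=1)
    S = Z @ Z.T
    idx = S.topk(k + 1, dim=1).indices[:, 1:]   # drop the self-match
    A = torch.zeros_like(S).scatter_(1, idx, 1.0)
    return ((A + A.T) > 0).float()              # symmetrize

def prototypes(H, labels, n_classes):
    # Class-level prototypes: mean embedding of the labeled nodes per class
    P = torch.stack([H[labels == c].mean(0) for c in range(n_classes)])
    return F.normalize(P, dim=1)

def prototype_alignment_loss(P1, P2, tau=0.5):
    # Contrastive alignment: the matching class prototype from the other
    # channel is the positive; all other prototypes act as negatives
    logits = P1 @ P2.T / tau
    return F.cross_entropy(logits, torch.arange(P1.size(0)))
```

In a full pipeline, the node features would be propagated over both normalize_adj(A_input) and normalize_adj(knn_graph(X)), a shared encoder would map each channel's output to class logits, and the supervised losses of the two channels would be combined with prototype_alignment_loss.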
Related papers
- MDS-GNN: A Mutual Dual-Stream Graph Neural Network on Graphs with Incomplete Features and Structure [8.00268216176428]
Graph Neural Networks (GNNs) have emerged as powerful tools for analyzing and learning representations from graph-structured data.
A crucial prerequisite for the outstanding performance of GNNs is the availability of complete graph information.
This study proposes a mutual dual-stream graph neural network (MDS-GNN), which implements mutual benefit learning between features and structure.
arXiv Detail & Related papers (2024-08-09T03:42:56Z)
- Invariant Graph Learning Meets Information Bottleneck for Out-of-Distribution Generalization [9.116601683256317]
In this work, we propose a novel framework, called Invariant Graph Learning based on Information Bottleneck theory (InfoIGL).
Specifically, InfoIGL introduces a redundancy filter to compress task-irrelevant information related to environmental factors.
Experiments on both synthetic and real-world datasets demonstrate that our method achieves state-of-the-art performance under OOD generalization.
arXiv Detail & Related papers (2024-08-03T07:38:04Z)
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z)
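Direct feedback alignment, which the DFA-GNN entry above builds on, replaces backpropagation's transposed weights with fixed random feedback matrices. The sketch below is a generic, hypothetical illustration of DFA on a plain two-layer classifier, not the paper's graph-specific variant; all sizes and names are assumptions.

```python
import torch

torch.manual_seed(0)
n, d, h, c = 32, 16, 24, 3
X = torch.randn(n, d)
y = torch.randint(0, c, (n,))

W1 = torch.randn(d, h) * 0.1      # trained weights
W2 = torch.randn(h, c) * 0.1
B1 = torch.randn(c, h)            # fixed random feedback matrix, never trained

lr = 0.05
for _ in range(100):
    H = torch.tanh(X @ W1)        # forward pass
    p = torch.softmax(H @ W2, dim=1)
    e = p - torch.eye(c)[y]       # output error (softmax cross-entropy gradient)
    # DFA step: project the output error straight to the hidden layer
    # through B1, bypassing the W2.T that backprop would use
    delta1 = (e @ B1) * (1 - H ** 2)   # tanh'(x) = 1 - tanh(x)^2
    W2 -= lr * (H.T @ e) / n
    W1 -= lr * (X.T @ delta1) / n
```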
- Loss-aware Curriculum Learning for Heterogeneous Graph Neural Networks [30.333265803394998]
This paper investigates the application of curriculum learning techniques to improve the performance of Heterogeneous Graph Neural Networks (HGNNs).
To better assess data quality, we design a loss-aware training schedule, named LTS, that measures the quality of every node in the data.
Our findings demonstrate the efficacy of curriculum learning in enhancing the capabilities of HGNNs for analyzing complex graph-structured data.
arXiv Detail & Related papers (2024-02-29T05:44:41Z)
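One way to read the loss-aware schedule in the entry above: treat each labeled node's current loss as a difficulty score and admit nodes from easiest to hardest as training progresses. The sketch below is a hypothetical pacing function, not the paper's LTS; the names and the linear schedule are assumptions.

```python
import torch
import torch.nn.functional as F

def loss_aware_mask(logits, labels, epoch, total_epochs, start_frac=0.5):
    # Per-node loss as an (inverse) quality score; keep the easiest
    # fraction of nodes, growing linearly from start_frac to 1.0
    with torch.no_grad():
        per_node_loss = F.cross_entropy(logits, labels, reduction="none")
    frac = min(1.0, start_frac + (1.0 - start_frac) * epoch / total_epochs)
    k = max(1, int(frac * labels.size(0)))
    keep = per_node_loss.topk(k, largest=False).indices
    mask = torch.zeros(labels.size(0), dtype=torch.bool)
    mask[keep] = True
    return mask   # apply to the supervised loss at this epoch
```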
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
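The all-pair scheme in the NodeFormer entry above can be sketched in dense form. The snippet below is a simplified, hypothetical illustration: it samples a soft adjacency over all node pairs with Gumbel-Softmax, whereas the actual method uses a kernelized (random-feature) approximation to avoid the O(N^2) cost shown here; function and argument names are assumptions.

```python
import torch
import torch.nn.functional as F

def all_pair_message_passing(H, W, tau=0.5, hard=False):
    # Dense all-pair message passing with differentiable Gumbel-Softmax
    # edge sampling over every pair of nodes
    Z = H @ W                                   # project node features
    scores = (Z @ Z.T) / Z.size(1) ** 0.5       # pairwise attention logits
    A_soft = F.gumbel_softmax(scores, tau=tau, hard=hard, dim=-1)
    return A_soft @ Z                           # aggregate from all nodes
```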
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z) - Measuring and Sampling: A Metric-guided Subgraph Learning Framework for
Graph Neural Network [11.017348743924426]
We propose a Metric-Guided (MeGuide) subgraph learning framework for Graph Neural Networks (GNNs).
MeGuide employs two novel metrics: Feature Smoothness and Connection Failure Distance to guide the subgraph sampling and mini-batch based training.
We demonstrate the effectiveness and efficiency of MeGuide in training various GNNs on multiple datasets.
arXiv Detail & Related papers (2021-12-30T11:00:00Z)
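For intuition about the first metric named in the entry above, here is one plausible feature-smoothness measure: the average per-dimension feature distance across edges, where lower values mean neighboring nodes look alike. The paper's precise formula may differ; the function name and normalization are assumptions.

```python
import torch

def feature_smoothness(X, edge_index):
    # X: [N, d] node features; edge_index: [2, E] tensor of (src, dst) pairs
    src, dst = edge_index
    diff = X[src] - X[dst]
    # mean absolute feature difference over all edges and dimensions
    return diff.abs().sum() / (edge_index.size(1) * X.size(1))
```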
- Sub-graph Contrast for Scalable Self-Supervised Graph Representation Learning [21.0019144298605]
Existing graph neural networks fed with the complete graph data are not scalable due to limited computation and memory resources.
Subg-Con is proposed by utilizing the strong correlation between central nodes and their sampled subgraphs to capture regional structure information.
Compared with existing graph representation learning approaches, Subg-Con has prominent performance advantages in weaker supervision requirements, model learning scalability, and parallelization.
arXiv Detail & Related papers (2020-09-22T01:58:19Z)
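The central-node/subgraph pairing in the Subg-Con entry above suggests a simple contrastive objective. The sketch below is one plausible reading, not the paper's exact loss (the original uses a margin-based objective): each central node is paired with its own pooled subgraph embedding as the positive, with the other subgraphs in the batch serving as negatives.

```python
import torch
import torch.nn.functional as F

def subgraph_contrast_loss(center_emb, subgraph_emb, tau=0.5):
    # center_emb: [B, d] central-node embeddings
    # subgraph_emb: [B, d] pooled embeddings of their sampled subgraphs
    z1 = F.normalize(center_emb, dim=1)
    z2 = F.normalize(subgraph_emb, dim=1)
    logits = (z1 @ z2.T) / tau         # similarity of every (node, subgraph) pair
    target = torch.arange(z1.size(0))  # diagonal pairs are the positives
    return F.cross_entropy(logits, target)
```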
- Contrastive and Generative Graph Convolutional Networks for Graph-based Semi-Supervised Learning [64.98816284854067]
Graph-based Semi-Supervised Learning (SSL) aims to transfer the labels of a handful of labeled data to the remaining massive unlabeled data via a graph.
A novel GCN-based SSL algorithm is presented in this paper to enrich the supervision signals by utilizing both data similarities and graph structure.
arXiv Detail & Related papers (2020-09-15T13:59:28Z)
- Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings [53.58077686470096]
We propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL) for jointly and iteratively learning graph structure and graph embedding.
Our experiments show that our proposed IDGL models can consistently outperform or match the state-of-the-art baselines.
arXiv Detail & Related papers (2020-06-21T19:49:15Z)
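The iterative loop in the IDGL entry above alternates between estimating a graph from the current embeddings and re-encoding the nodes with that graph. The sketch below is schematic and simplified (IDGL additionally mixes the learned graph with the original one and trains the whole loop end-to-end); the kNN construction and all names are assumptions.

```python
import torch
import torch.nn.functional as F

def iterative_graph_learning(X, encoder, n_iters=3, k=10):
    # encoder is any callable GNN taking (features, adjacency) -> embeddings
    H, A = X, None
    for _ in range(n_iters):
        Z = F.normalize(H, dim=1)
        S = Z @ Z.T                                  # cosine similarities
        idx = S.topk(k + 1, dim=1).indices[:, 1:]    # k neighbors, minus self
        A = torch.zeros_like(S).scatter_(1, idx, 1.0)
        A = ((A + A.T) > 0).float()                  # symmetrize
        H = encoder(X, A)                            # re-embed with learned graph
    return H, A
```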
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.