Scalable and Adaptive Graph Neural Networks with Self-Label-Enhanced training
- URL: http://arxiv.org/abs/2104.09376v1
- Date: Mon, 19 Apr 2021 15:08:06 GMT
- Title: Scalable and Adaptive Graph Neural Networks with Self-Label-Enhanced training
- Authors: Chuxiong Sun
- Abstract summary: It is hard to directly implement Graph Neural Networks (GNNs) on large-scale graphs.
We propose Scalable and Adaptive Graph Neural Networks (SAGN).
We propose the Self-Label-Enhance (SLE) framework, which combines self-training and label propagation in depth.
- Score: 1.2183405753834562
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is hard to directly implement Graph Neural Networks (GNNs) on
large-scale graphs. Besides the existing neighbor-sampling techniques, scalable
methods that decouple graph convolutions and other learnable transformations
into a preprocessing stage and a post-classifier allow standard minibatch
training. By replacing the redundant concatenation operation in SIGN with an
attention mechanism, we propose Scalable and Adaptive Graph Neural Networks
(SAGN). SAGN can adaptively gather neighborhood information across different
hops. To further improve scalable models on semi-supervised learning tasks, we
propose the Self-Label-Enhance (SLE) framework, which combines self-training
and label propagation in depth. We augment the base model with a scalable
node-label module, then iteratively train models and enhance the training set
over several stages. To generate the input of the node-label module, we
directly apply label propagation to one-hot encoded label vectors without
inner random masking. We find empirically that label leakage is effectively
alleviated after graph convolutions. The hard pseudo-labels in the enhanced
training set participate in label propagation together with the true labels.
Experiments on both inductive and transductive datasets demonstrate that,
compared with other sampling-based and sampling-free methods, SAGN achieves
better or comparable results, and SLE can further improve performance.
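To make the two mechanisms concrete, here is a minimal PyTorch sketch of hop-level attention replacing SIGN's concatenation, and of a node-label module fed by propagated one-hot labels. It is an illustration under assumptions, not the authors' implementation: `propagate`, `HopAttention`, the attention form, and all dimensions are hypothetical names and choices for this example.

```python
# Hedged sketch of SAGN-style hop attention plus an SLE-style label module.
# NOT the paper's reference code; propagate(), HopAttention, and all
# dimensions below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def propagate(adj_norm, x, num_hops):
    """Preprocessing: compute [x, Ax, A^2 x, ...] once, before training.

    adj_norm is assumed to be a normalized adjacency matrix (dense or
    sparse). Applied to one-hot label vectors, the same routine plays the
    role of the label propagation that feeds the node-label module."""
    hops = [x]
    for _ in range(num_hops):
        x = torch.sparse.mm(adj_norm, x) if adj_norm.is_sparse else adj_norm @ x
        hops.append(x)
    return hops


class HopAttention(nn.Module):
    """Aggregates precomputed hop features with attention instead of SIGN's
    concatenation, so each node can weight different hops adaptively."""

    def __init__(self, in_dim, hidden, num_hops, num_classes):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Linear(in_dim, hidden) for _ in range(num_hops + 1)]
        )
        self.att = nn.Linear(2 * hidden, 1)
        self.label_mlp = nn.Linear(num_classes, hidden)  # node-label module
        self.out = nn.Linear(hidden, num_classes)

    def forward(self, hop_feats, label_feats=None):
        hs = [enc(h) for enc, h in zip(self.encoders, hop_feats)]
        query = hs[0]  # use the 0-hop projection as the attention query
        scores = torch.stack(
            [self.att(torch.cat([query, h], dim=-1)).squeeze(-1) for h in hs],
            dim=-1,
        )
        alpha = F.softmax(scores, dim=-1)  # (num_nodes, num_hops + 1)
        h = sum(alpha[..., k:k + 1] * hs[k] for k in range(len(hs)))
        if label_feats is not None:
            h = h + self.label_mlp(label_feats)  # inject propagated labels
        return self.out(h)


# Toy usage: 5 nodes, 4 input features, 3 classes, 2 hops.
num_nodes, in_dim, num_classes, num_hops = 5, 4, 3, 2
adj = torch.eye(num_nodes)  # stand-in for a normalized adjacency matrix
x = torch.randn(num_nodes, in_dim)
labels = F.one_hot(torch.randint(0, num_classes, (num_nodes,)), num_classes).float()

hop_feats = propagate(adj, x, num_hops)             # feature preprocessing
label_feats = propagate(adj, labels, num_hops)[-1]  # labels, no random masking
logits = HopAttention(in_dim, 16, num_hops, num_classes)(hop_feats, label_feats)
```

In a full pipeline, `propagate` would run once over the whole graph as preprocessing, so training itself only touches minibatches of precomputed rows; under SLE, high-confidence predictions from one stage would be converted to hard pseudo-labels, merged into `labels`, and re-propagated for the next stage.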
Related papers
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
Besides, we develop an adaptive node-level pre-training method to dynamically mask nodes to distribute them evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
- Label-Enhanced Graph Neural Network for Semi-supervised Node Classification [32.64730237473914]
We present a label-enhanced learning framework for Graph Neural Networks (GNNs).
It first models each label as a virtual center for intra-class nodes and then jointly learns the representations of both nodes and labels.
Our approach not only smooths the representations of nodes belonging to the same class, but also explicitly encodes the label semantics into the learning process of GNNs.
arXiv Detail & Related papers (2022-05-31T09:48:47Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Structure-Aware Label Smoothing for Graph Neural Networks [39.97741949184259]
Representing a label distribution as a one-hot vector is a common practice in training node classification models.
We propose a novel SALS (Structure-Aware Label Smoothing) method as an enhancement component to popular node classification models.
arXiv Detail & Related papers (2021-12-01T13:48:58Z)
- Adaptive Label Smoothing To Regularize Large-Scale Graph Training [46.00927775402987]
We propose the adaptive label smoothing (ALS) method to replace the one-hot hard labels with smoothed ones.
ALS propagates node labels to aggregate the neighborhood label distribution in a pre-processing step, then updates the optimal smoothed labels online to adapt to the specific graph structure.
arXiv Detail & Related papers (2021-08-30T23:51:31Z)
- One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation [78.36781565047656]
We propose "One Thing One Click," meaning that the annotator only needs to label one point per object.
We iteratively conduct the training and label propagation, facilitated by a graph propagation module.
Our results are also comparable to those of the fully supervised counterparts.
arXiv Detail & Related papers (2021-04-06T02:27:25Z)
- Unified Robust Training for Graph Neural Networks against Label Noise [12.014301020294154]
We propose a new framework, UnionNET, for learning with noisy labels on graphs under a semi-supervised setting.
Our approach provides a unified solution for robustly training GNNs and performing label correction simultaneously.
arXiv Detail & Related papers (2021-03-05T01:17:04Z)
- On the Equivalence of Decoupled Graph Convolution Network and Label Propagation [60.34028546202372]
Prior work shows that coupling is inferior to decoupling, which better supports deep graph propagation.
Despite effectiveness, the working mechanisms of the decoupled GCN are not well understood.
We propose a new label propagation method named Propagation then Training Adaptively (PTA), which overcomes the flaws of the decoupled GCN.
arXiv Detail & Related papers (2020-10-23T13:57:39Z)
- Knowledge-Guided Multi-Label Few-Shot Learning for General Image Recognition [75.44233392355711]
The KGGR framework exploits prior knowledge of statistical label correlations with deep neural networks.
It first builds a structured knowledge graph to correlate different labels based on statistical label co-occurrence.
Then, it introduces the label semantics to guide learning semantic-specific features.
It exploits a graph propagation network to explore graph node interactions.
arXiv Detail & Related papers (2020-09-20T15:05:29Z)