Learning with Few Labeled Nodes via Augmented Graph Self-Training
- URL: http://arxiv.org/abs/2208.12422v1
- Date: Fri, 26 Aug 2022 03:36:01 GMT
- Title: Learning with Few Labeled Nodes via Augmented Graph Self-Training
- Authors: Kaize Ding, Elnaz Nouri, Guoqing Zheng, Huan Liu and Ryen White
- Abstract summary: AGST (Augmented Graph Self-Training) is a framework built with two new (i.e., structural and semantic) augmentation modules on top of a decoupled graph self-training (GST) backbone.
We investigate whether this novel framework can learn an effective graph predictive model with extremely limited labeled nodes.
- Score: 36.97506256446519
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is well known that the success of graph neural networks (GNNs) highly
relies on abundant human-annotated data, which is laborious to obtain and not
always available in practice. When only a few labeled nodes are available, how to
develop highly effective GNNs remains understudied. Though self-training has
been shown to be powerful for semi-supervised learning, its application on
graph-structured data may fail because (1) larger receptive fields are not
leveraged to capture long-range node interactions, which exacerbates the
difficulty of propagating feature-label patterns from labeled nodes to
unlabeled nodes; and (2) limited labeled data makes it challenging to learn
well-separated decision boundaries for different node classes without
explicitly capturing the underlying semantic structure. To address the
challenges of capturing informative structural and semantic knowledge, we
propose a new graph data augmentation framework, AGST (Augmented Graph
Self-Training), which is built with two new (i.e., structural and semantic)
augmentation modules on top of a decoupled GST backbone. In this work, we
investigate whether this novel framework can learn an effective graph
predictive model with extremely limited labeled nodes. We conduct comprehensive
evaluations on semi-supervised node classification under different scenarios of
limited labeled-node data. The experimental results demonstrate the unique
contributions of the novel data augmentation framework for node classification
with limited labeled data.
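The AGST implementation itself is not reproduced in this summary. As a minimal illustration of the decoupled graph self-training loop the framework builds on, the sketch below propagates node features over the normalized graph, fits a simple classifier on the scarce labels, and repeatedly promotes high-confidence predictions on unlabeled nodes to pseudo-labels. The ridge classifier, thresholds, and names are illustrative assumptions, and the structural and semantic augmentation modules are omitted.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2."""
    A = A + np.eye(len(A))
    d = 1.0 / np.sqrt(A.sum(1))
    return A * d[:, None] * d[None, :]

def self_train(A, X, y_seed, mask, n_cls, rounds=3, hops=2, tau=0.9):
    """Toy decoupled self-training (not the AGST code).
    A: adjacency, X: features, y_seed: int labels (arbitrary where
    mask is False), mask: boolean 'is labeled', tau: confidence cutoff."""
    H = np.linalg.matrix_power(normalize_adj(A), hops) @ X  # propagation
    mask = mask.copy()
    Y = np.eye(n_cls)[y_seed] * mask[:, None]               # one-hot seeds
    for _ in range(rounds):
        # Ridge fit on the currently-labeled nodes.
        W = np.linalg.solve(H[mask].T @ H[mask] + 1e-2 * np.eye(H.shape[1]),
                            H[mask].T @ Y[mask])
        Z = H @ W
        P = np.exp(Z - Z.max(1, keepdims=True))
        P /= P.sum(1, keepdims=True)          # softmax scores
        conf, pred = P.max(1), P.argmax(1)
        new = ~mask & (conf > tau)            # confident unlabeled nodes only
        Y[new] = np.eye(n_cls)[pred[new]]
        mask |= new                           # grow the pseudo-labeled set
    return P.argmax(1)
```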
Related papers
- Multi-View Subgraph Neural Networks: Self-Supervised Learning with Scarce Labeled Data [24.628203785306233]
We present a novel learning framework called multi-view subgraph neural networks (Muse) for handling long-range dependencies.
By fusing two views of subgraphs, the learned representations can preserve the topological properties of the graph at large.
Experimental results show that Muse outperforms the alternative methods on node classification tasks with limited labeled data.
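As a rough illustration of combining subgraph views (not the Muse model; all names and choices here are assumptions), the sketch below fuses a local one-hop view with a longer-range multi-hop view of each node's neighborhood by concatenating their pooled features:

```python
import numpy as np

def two_view_embedding(A, X, hops=4):
    """Concatenate a 1-hop (local) and a multi-hop (long-range) view
    of every node, each built by averaging neighbor features."""
    A_hat = A + np.eye(len(A))
    P = A_hat / A_hat.sum(1, keepdims=True)        # row-stochastic transition
    local = P @ X                                  # 1-hop neighborhood view
    distant = np.linalg.matrix_power(P, hops) @ X  # multi-hop view
    return np.concatenate([local, distant], axis=1)
```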
arXiv Detail & Related papers (2024-04-19T01:36:50Z) - Breaking the Entanglement of Homophily and Heterophily in Semi-supervised Node Classification [25.831508778029097]
We introduce AMUD, which quantifies the relationship between node profiles and topology from a statistical perspective.
We also propose ADPA as a new directed graph learning paradigm for AMUD.
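AMUD's own statistic is not given in this summary; as a generic stand-in, the edge-homophily ratio below illustrates the kind of label-topology relationship such an analysis quantifies:

```python
import numpy as np

def edge_homophily(edges, labels):
    """Fraction of edges whose endpoints share a class label:
    near 1 means homophily, near 0 means heterophily."""
    src, dst = np.asarray(edges).T
    return float((labels[src] == labels[dst]).mean())

# Example: two same-class edges out of three.
print(edge_homophily([(0, 1), (1, 2), (2, 3)],
                     np.array([0, 0, 0, 1])))  # ~0.667
```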
arXiv Detail & Related papers (2023-12-07T07:54:11Z) - NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
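NodeFormer's kernelized operator is not reproduced here, but the Gumbel-Softmax trick at its core is standard: perturb logits with Gumbel noise, then apply a temperature-scaled softmax so that near-discrete samples stay differentiable. A generic sketch:

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Differentiable surrogate for sampling from a categorical."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(1e-12, 1.0, size=logits.shape)
    y = (logits - np.log(-np.log(u))) / tau   # Gumbel-perturbed logits
    y = np.exp(y - y.max(-1, keepdims=True))
    return y / y.sum(-1, keepdims=True)       # soft one-hot sample
```

Lower temperatures `tau` push the output closer to a hard one-hot sample at the cost of higher gradient variance.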
arXiv Detail & Related papers (2023-06-14T09:21:15Z) - A Robust Stacking Framework for Training Deep Graph Models with Multifaceted Node Features [61.92791503017341]
Graph Neural Networks (GNNs) with numerical node features and graph structure as inputs have demonstrated superior performance on various supervised learning tasks with graph data.
However, the models that perform best on such feature types in standard supervised learning settings with IID (non-graph) data are not easily incorporated into a GNN.
Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data.
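As a minimal sketch of the stacking recipe (assumed names, not the paper's framework): take class probabilities from any base model trained on the IID features, smooth them over the graph, and fit a simple second-stage model on both:

```python
import numpy as np

def graph_stack(A, base_proba, train_idx, y_train, hops=2):
    """Fuse an IID model's predictions with graph-aware propagation.
    base_proba: (n, c) class probabilities from any tabular model."""
    A_hat = A + np.eye(len(A))
    P = A_hat / A_hat.sum(1, keepdims=True)
    smoothed = np.linalg.matrix_power(P, hops) @ base_proba
    Z = np.concatenate([base_proba, smoothed], axis=1)   # stacked inputs
    Y = np.eye(base_proba.shape[1])[y_train]
    W = np.linalg.lstsq(Z[train_idx], Y, rcond=None)[0]  # 2nd-stage fit
    return (Z @ W).argmax(1)
```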
arXiv Detail & Related papers (2022-06-16T22:46:33Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structured data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - Meta Propagation Networks for Graph Few-shot Semi-supervised Learning [39.96930762034581]
We propose a novel network architecture equipped with a meta-learning algorithm to solve this problem.
In essence, our framework Meta-PN infers high-quality pseudo labels on unlabeled nodes via a meta-learned label propagation strategy.
Our approach offers easy and substantial performance gains compared to existing techniques on various benchmark datasets.
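Leaving the meta-learned part aside, the propagation step that turns the few seed labels into soft pseudo-labels can be sketched as APPNP-style personalized propagation (a generic illustration, not Meta-PN's strategy):

```python
import numpy as np

def propagate_labels(A, y_seed, mask, n_cls, alpha=0.1, iters=10):
    """Iterate Y <- (1 - alpha) * S @ Y + alpha * Y0, where S is the
    normalized adjacency and Y0 holds one-hot seed labels."""
    A_hat = A + np.eye(len(A))
    d = 1.0 / np.sqrt(A_hat.sum(1))
    S = A_hat * d[:, None] * d[None, :]
    Y0 = np.eye(n_cls)[y_seed] * mask[:, None]   # seeds only
    Y = Y0.copy()
    for _ in range(iters):
        Y = (1 - alpha) * S @ Y + alpha * Y0     # keep pull toward seeds
    return Y   # rows give soft pseudo-labels for unlabeled nodes
```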
arXiv Detail & Related papers (2021-12-18T00:11:56Z) - Noise-robust Graph Learning by Estimating and Leveraging Pairwise Interactions [123.07967420310796]
This paper bridges the gap by proposing a pairwise framework for noisy node classification on graphs.
PI-GNN relies on pairwise interactions (PI) as a primary learning proxy, in addition to pointwise learning from the noisy node class labels.
Our proposed framework PI-GNN contributes two novel components: (1) a confidence-aware PI estimation model that adaptively estimates the PI labels, and (2) a decoupled training approach that leverages the estimated PI labels.
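As a minimal illustration of the pairwise-interaction idea (not the paper's confidence-aware estimator): for sampled node pairs, the target is whether the two endpoints share a class label, a signal the paper argues is less susceptible to label noise than the pointwise labels themselves.

```python
import numpy as np

def pairwise_targets(pairs, noisy_labels):
    """PI target: 1.0 if both endpoints of a pair carry the same
    (possibly noisy) class label, else 0.0."""
    i, j = np.asarray(pairs).T
    return (noisy_labels[i] == noisy_labels[j]).astype(float)
```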
arXiv Detail & Related papers (2021-06-14T14:23:08Z) - Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
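A hedged NumPy sketch of the aggregation pattern described above: score each neighbor against the target node, sort neighbors by that score, then aggregate the resulting sequence with a shared 1D convolution. The bilinear scoring and fixed kernel stand in for the layer's trainable components; at least `len(kernel)` neighbors are assumed.

```python
import numpy as np

def node2seq_agg(x_target, neigh, w_att, kernel):
    """neigh: (n, d) neighbor features; w_att: (d, d) attention weights;
    kernel: 1D convolution kernel shared across feature channels."""
    scores = neigh @ w_att @ x_target            # attention logits
    seq = neigh[np.argsort(-scores)]             # highest score first
    out = np.stack([np.convolve(seq[:, c], kernel, mode='valid')
                    for c in range(seq.shape[1])], axis=1)
    return out.sum(0)                            # pooled node update
```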
arXiv Detail & Related papers (2021-01-06T03:05:37Z) - When Contrastive Learning Meets Active Learning: A Novel Graph Active Learning Paradigm with Self-Supervision [19.938379604834743]
This paper studies active learning (AL) on graphs, whose purpose is to discover the most informative nodes to maximize the performance of graph neural networks (GNNs).
Motivated by the success of contrastive learning (CL), we propose a novel paradigm that seamlessly integrates graph AL with CL.
Comprehensive, confounding-free experiments on five public datasets demonstrate the superiority of our method over state-of-the-art approaches.
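The paper's CL-integrated criterion is not shown in this summary; as a generic baseline for the acquisition step in graph active learning, the sketch below selects the unlabeled nodes whose predictions are most uncertain:

```python
import numpy as np

def select_by_entropy(proba, labeled_mask, budget=10):
    """Return indices of the `budget` unlabeled nodes with the
    highest predictive entropy."""
    ent = -(proba * np.log(proba + 1e-12)).sum(1)
    ent[labeled_mask] = -np.inf      # never re-select labeled nodes
    return np.argsort(-ent)[:budget]
```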
arXiv Detail & Related papers (2020-10-30T06:20:07Z) - Graph Inference Learning for Semi-supervised Classification [50.55765399527556]
We propose a Graph Inference Learning framework to boost the performance of semi-supervised node classification.
For learning the inference process, we introduce meta-optimization on structure relations from training nodes to validation nodes.
Comprehensive evaluations on four benchmark datasets demonstrate the superiority of our proposed GIL when compared against state-of-the-art methods.
arXiv Detail & Related papers (2020-01-17T02:52:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.