Semi-Supervised Graph Learning Meets Dimensionality Reduction
- URL: http://arxiv.org/abs/2203.12522v1
- Date: Wed, 23 Mar 2022 16:31:53 GMT
- Title: Semi-Supervised Graph Learning Meets Dimensionality Reduction
- Authors: Alex Morehead, Watchanan Chantapakul, Jianlin Cheng
- Abstract summary: Semi-supervised learning (SSL) has recently received increased attention from machine learning researchers.
In this work, we investigate the use of dimensionality reduction techniques such as PCA, t-SNE, and UMAP to see their effect on the performance of graph neural networks (GNNs).
Our benchmarks and clustering visualizations demonstrate that, under certain conditions, employing a priori and a posteriori dimensionality reduction to GNN inputs and outputs, respectively, can simultaneously improve the effectiveness of semi-supervised node label propagation and node clustering.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semi-supervised learning (SSL) has recently received increased attention from
machine learning researchers. By enabling effective propagation of known labels
in graph-based deep learning (GDL) algorithms, SSL is poised to become an
increasingly used technique in GDL in the coming years. However, there are
currently few explorations in the graph-based SSL literature on exploiting
classical dimensionality reduction techniques for improved label propagation.
In this work, we investigate the use of dimensionality reduction techniques
such as PCA, t-SNE, and UMAP to see their effect on the performance of graph
neural networks (GNNs) designed for semi-supervised propagation of node labels.
Our study makes use of benchmark semi-supervised GDL datasets such as the Cora
and Citeseer datasets to allow meaningful comparisons of the representations
learned by each algorithm when paired with a dimensionality reduction
technique. Our comprehensive benchmarks and clustering visualizations
quantitatively and qualitatively demonstrate that, under certain conditions,
employing a priori and a posteriori dimensionality reduction to GNN inputs and
outputs, respectively, can simultaneously improve the effectiveness of
semi-supervised node label propagation and node clustering. Our source code is
freely available on GitHub.
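As a rough illustration of the "a priori" pipeline the abstract describes (reduce node features before feeding them to a GNN), here is a minimal NumPy sketch. This is not the authors' implementation: the SVD-based PCA, the single GCN-style propagation step, and the toy graph are all invented for illustration.

```python
import numpy as np

def pca_reduce(X, k):
    """A priori dimensionality reduction: project centred features
    onto the top-k principal components via plain SVD-based PCA."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def gcn_propagate(A, X):
    """One symmetric-normalised propagation step,
    D^-1/2 (A + I) D^-1/2 X, as in GCN-style message passing."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X

# Toy graph: 4 nodes in a chain, 10-dimensional random features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 10))

X_red = pca_reduce(X, k=2)   # 10-dim features -> 2-dim, before the GNN
H = gcn_propagate(A, X_red)  # smoothed low-dimensional representations
print(X_red.shape, H.shape)
```

The "a posteriori" variant from the abstract would instead apply the reduction (e.g. t-SNE or UMAP) to `H`, the GNN outputs, before clustering or visualisation.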
Related papers
- GNUMAP: A Parameter-Free Approach to Unsupervised Dimensionality Reduction via Graph Neural Networks [0.8192907805418583]
We introduce GNUMAP, a robust and parameter-free method for unsupervised node representation learning that merges the traditional UMAP approach with the expressivity of the GNN framework.
We show that GNUMAP consistently outperforms existing state-of-the-art GNN embedding methods in a variety of contexts.
arXiv Detail & Related papers (2024-07-30T22:58:23Z)
- Graph Convolutional Network For Semi-supervised Node Classification With Subgraph Sketching [0.27624021966289597]
We propose the Graph-Learning-Dual Graph Convolutional Neural Network called GLDGCN.
We apply GLDGCN to the semi-supervised node classification task.
Compared with the baseline methods, we achieve higher classification accuracy on three citation networks.
arXiv Detail & Related papers (2024-04-19T09:08:12Z)
- On the Generalization Capability of Temporal Graph Learning Algorithms: Theoretical Insights and a Simpler Method [59.52204415829695]
Temporal Graph Learning (TGL) has become a prevalent technique across diverse real-world applications.
This paper investigates the generalization ability of different TGL algorithms.
We propose a simplified TGL network, which enjoys a small generalization error, improved overall performance, and lower model complexity.
arXiv Detail & Related papers (2024-02-26T08:22:22Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark (OGB) datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- Steering Graph Neural Networks with Pinning Control [23.99873285634287]
We propose a control principle to supervise representation learning by leveraging the prototypes (i.e., class centers) of labeled data.
Treating graph learning as a discrete dynamic process and the prototypes of labeled data as "desired" class representations, we borrow the pinning control idea from automatic control theory.
Our experiments demonstrate that the proposed PCGCN model achieves better performances than deep GNNs and other competitive heterophily-oriented methods.
arXiv Detail & Related papers (2023-03-02T13:50:23Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Cyclic Label Propagation for Graph Semi-supervised Learning [52.102251202186025]
We introduce a novel framework for graph semi-supervised learning called CycProp.
CycProp integrates GNNs into the process of label propagation in a cyclic and mutually reinforcing manner.
In particular, CycProp updates the node embeddings learned by the GNN module with information augmented through label propagation.
arXiv Detail & Related papers (2020-11-24T02:55:40Z)
- Contrastive and Generative Graph Convolutional Networks for Graph-based Semi-Supervised Learning [64.98816284854067]
Graph-based Semi-Supervised Learning (SSL) aims to transfer the labels of a handful of labeled data to the remaining massive unlabeled data via a graph.
A novel GCN-based SSL algorithm is presented in this paper to enrich the supervision signals by utilizing both data similarities and graph structure.
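The label-transfer idea behind graph-based SSL can be made concrete with the classic label-propagation iteration F ← αSF + (1 − α)Y, where S is the symmetrically normalised adjacency and Y holds the few known one-hot labels. This is a generic textbook sketch, not the method of the paper above; the toy two-cluster graph is invented for illustration.

```python
import numpy as np

def label_propagation(A, Y, alpha=0.9, iters=50):
    """Classic graph label propagation: iterate
    F <- alpha * S @ F + (1 - alpha) * Y, with S the symmetrically
    normalised adjacency. Rows of Y are one-hot for labelled nodes
    and all-zero for unlabelled ones."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ A @ D_inv_sqrt
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(axis=1)  # predicted class per node

# Two triangles joined by a single bridge edge (2-3);
# exactly one labelled node per triangle.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
Y = np.zeros((6, 2))
Y[0, 0] = 1.0  # node 0 labelled class 0
Y[5, 1] = 1.0  # node 5 labelled class 1
print(label_propagation(A, Y))  # each triangle adopts its labelled node's class
```

With α close to 1 the graph structure dominates the fit to the seed labels; since αS has spectral radius below 1, the iteration converges.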
arXiv Detail & Related papers (2020-09-15T13:59:28Z)
- Infinitely Wide Graph Convolutional Networks: Semi-supervised Learning via Gaussian Processes [144.6048446370369]
Graph convolutional neural networks (GCNs) have recently demonstrated promising results on graph-based semi-supervised classification.
We propose a GP regression model via GCNs (GPGC) for graph-based semi-supervised learning.
We conduct extensive experiments to evaluate GPGC and demonstrate that it outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2020-02-26T10:02:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.