Regularizing Semi-supervised Graph Convolutional Networks with a
Manifold Smoothness Loss
- URL: http://arxiv.org/abs/2002.07031v1
- Date: Tue, 11 Feb 2020 08:51:53 GMT
- Title: Regularizing Semi-supervised Graph Convolutional Networks with a
Manifold Smoothness Loss
- Authors: Qilin Li, Wanquan Liu, Ling Li
- Abstract summary: We propose an unsupervised manifold smoothness loss defined with respect to the graph structure, which can be added to the loss function as a regularizer.
We conduct experiments on multi-layer perceptrons and existing graph networks, and demonstrate that adding the proposed loss consistently improves performance.
- Score: 12.948899990826426
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing graph convolutional networks focus on the neighborhood aggregation
scheme. When applied to semi-supervised learning, they often suffer from the
overfitting problem as the networks are trained with the cross-entropy loss on
a small portion of labeled data. In this paper, we propose an unsupervised
manifold smoothness loss defined with respect to the graph structure, which can
be added to the loss function as a regularizer. We draw a connection between
the proposed loss and an iterative diffusion process, and show that minimizing
the loss is equivalent to aggregating neighbor predictions over infinitely many layers.
We conduct experiments on multi-layer perceptrons and existing graph networks,
and demonstrate that adding the proposed loss consistently improves
performance.
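The abstract does not spell out the exact formula, but a common instantiation of such a manifold smoothness term is the Laplacian quadratic form on the network's predictions. A minimal PyTorch sketch, assuming a dense adjacency matrix `adj` and node `logits` (all names and the weight `lam` are illustrative, not the paper's exact notation):

```python
import torch
import torch.nn.functional as F

def manifold_smoothness_loss(logits, adj):
    """Laplacian quadratic form tr(P^T L P) on the softmax predictions,
    with L = I - D^{-1/2} A D^{-1/2} (one plausible form of the loss)."""
    probs = torch.softmax(logits, dim=1)                  # (n, c), all nodes
    deg = adj.sum(dim=1).clamp(min=1e-12)                 # node degrees
    d_inv_sqrt = deg.rsqrt()
    adj_norm = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    # tr(P^T P) - tr(P^T A_norm P): large when connected nodes disagree.
    return ((probs * probs).sum() - (probs * (adj_norm @ probs)).sum()) / len(probs)

def total_loss(logits, labels, labeled_mask, adj, lam=1.0):
    # Supervised cross entropy on the few labeled nodes, plus the
    # unsupervised smoothness term over the whole graph.
    ce = F.cross_entropy(logits[labeled_mask], labels[labeled_mask])
    return ce + lam * manifold_smoothness_loss(logits, adj)
```

Minimizing this term pushes connected nodes toward similar predictions, which matches the diffusion interpretation above: repeated neighbor averaging is one way to drive the same quadratic form to zero.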
Related papers
- NetDiff: Deep Graph Denoising Diffusion for Ad Hoc Network Topology Generation [1.6768151308423371]
We introduce NetDiff, a graph denoising diffusion probabilistic architecture that generates wireless ad hoc network link topologies.
Our results show that the generated links are realistic, exhibit structural properties similar to those of the dataset graphs, and require only minor corrections and verification steps to be operational.
arXiv Detail & Related papers (2024-10-09T15:39:49Z)
- Gegenbauer Graph Neural Networks for Time-varying Signal Reconstruction [4.6210788730570584]
Reconstructing time-varying graph signals is a critical problem in machine learning and signal processing with broad applications.
We propose a novel approach that incorporates a learning module to enhance the accuracy of the downstream task.
We conduct extensive experiments on real datasets to evaluate the effectiveness of our proposed approach.
arXiv Detail & Related papers (2024-03-28T19:29:17Z)
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- Unsupervised Graph-based Learning Method for Sub-band Allocation in 6G Subnetworks [2.0583251142940377]
We present an unsupervised approach for frequency sub-band allocation in wireless networks using graph-based learning.
We model the subnetwork deployment as a conflict graph and propose an unsupervised learning approach inspired by graph colouring and the Potts model to optimize the sub-band allocation.
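As a point of reference for the conflict-graph formulation, the classic greedy colouring below assigns sub-bands so that no two conflicting subnetworks share one; this is a textbook baseline, not the paper's learned approach, and all names are illustrative:

```python
def greedy_subband_allocation(conflicts, n_subbands):
    """Assign each subnetwork the lowest-index sub-band unused by its
    conflicting neighbours; conflicts[i] is the set of nodes whose
    transmissions would interfere with node i."""
    assignment = {}
    # Colour high-degree (most-constrained) nodes first.
    for node in sorted(conflicts, key=lambda n: -len(conflicts[n])):
        used = {assignment[nbr] for nbr in conflicts[node] if nbr in assignment}
        free = [b for b in range(n_subbands) if b not in used]
        # Fall back to the least-loaded band when the conflict graph
        # needs more colours than there are sub-bands.
        assignment[node] = free[0] if free else min(
            range(n_subbands),
            key=lambda b: sum(v == b for v in assignment.values()))
    return assignment

print(greedy_subband_allocation({0: {1, 2}, 1: {0}, 2: {0}, 3: set()}, 2))
```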
arXiv Detail & Related papers (2023-12-13T12:57:55Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
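Gumbel-Softmax itself is standard; the sketch below shows the plain (non-kernelized) operator used to sample a soft adjacency for one all-pair message-passing step. NodeFormer's kernelized variant, which brings the quadratic cost down, is more involved; tensor names and sizes here are toy assumptions:

```python
import torch

def gumbel_softmax_edges(scores, tau=0.5):
    # Gumbel noise g = -log(-log(U)) makes the softmax a differentiable
    # sample over each node's candidate neighbours.
    u = torch.rand_like(scores).clamp_min(1e-9)
    gumbel = -torch.log(-torch.log(u))
    return torch.softmax((scores + gumbel) / tau, dim=-1)

# One all-pair message-passing step on toy features: every node aggregates
# from every other node, weighted by the sampled soft adjacency.
x = torch.randn(5, 8)              # 5 nodes, 8 features
soft_adj = gumbel_softmax_edges(x @ x.t())
x_new = soft_adj @ x
```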
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We posit a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
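Unrolling means each truncated proximal-gradient iteration becomes one network layer with its own learnable parameters. A hedged sketch of the general pattern, with a simple quadratic fidelity term standing in for the paper's convolutional-mixture model:

```python
import torch
import torch.nn as nn

class UnrolledProxGrad(nn.Module):
    """K proximal-gradient iterations unrolled into K layers, each with
    its own learnable step size and soft-threshold (sparsity prox)."""
    def __init__(self, n_layers=5):
        super().__init__()
        self.steps = nn.Parameter(torch.full((n_layers,), 0.1))
        self.thresholds = nn.Parameter(torch.full((n_layers,), 0.01))

    def forward(self, observed):
        latent = observed.clone()          # initialize at the observation
        for step, thr in zip(self.steps, self.thresholds):
            # Gradient step on a stand-in fidelity term ||latent - observed||^2.
            latent = latent - step * (latent - observed)
            # Proximal step: soft-thresholding promotes sparse edges.
            latent = torch.sign(latent) * torch.relu(latent.abs() - thr)
        return latent
```

Training end-to-end on (observed, latent) graph pairs is what makes the scheme supervised and, since the layers contain no graph-size-specific parameters, inductive.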
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Mixing between the Cross Entropy and the Expectation Loss Terms [89.30385901335323]
Cross-entropy loss tends to focus on hard-to-classify samples during training.
We show that adding the expectation loss to the optimization goal helps the network achieve better accuracy.
Our experiments show that the new training protocol improves performance across a diverse set of classification domains.
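The expectation loss here is the expected misclassification, i.e. one minus the probability assigned to the true class, which stays bounded on hard samples where cross entropy grows without limit. A minimal sketch of mixing the two terms; the weight `alpha` is an assumption:

```python
import torch
import torch.nn.functional as F

def mixed_loss(logits, targets, alpha=0.5):
    """Convex mix of cross entropy and the expectation loss
    (1 - probability of the true class)."""
    ce = F.cross_entropy(logits, targets)
    probs = torch.softmax(logits, dim=1)
    p_true = probs[torch.arange(len(targets)), targets]
    expectation = (1.0 - p_true).mean()
    return alpha * ce + (1.0 - alpha) * expectation
```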
arXiv Detail & Related papers (2021-09-12T23:14:06Z)
- Multilayer Graph Clustering with Optimized Node Embedding [70.1053472751897]
Multilayer graph clustering aims at dividing the graph nodes into categories or communities.
We propose a clustering-friendly embedding of the layers of a given multilayer graph.
Experiments show that our method leads to a significant improvement.
arXiv Detail & Related papers (2021-03-30T17:36:40Z)
- Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to a significant improvement with respect to state-of-the-art multilayer graph learning algorithms for solving clustering problems.
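As an illustration of a contrastive data-fidelity term, the InfoNCE-style sketch below treats a node's embedding in each observed layer and its embedding under the representative graph as a positive pair; this is a generic stand-in, not the paper's exact loss, and all names are hypothetical:

```python
import torch
import torch.nn.functional as F

def contrastive_fidelity(layer_embeddings, rep_embedding, temperature=0.5):
    """Pull each node's per-layer embedding toward its embedding under
    the representative graph, pushing apart different nodes."""
    z_rep = F.normalize(rep_embedding, dim=1)            # (n, d)
    loss = 0.0
    for z_layer in layer_embeddings:                     # list of (n, d)
        z = F.normalize(z_layer, dim=1)
        logits = z @ z_rep.t() / temperature             # (n, n) similarities
        targets = torch.arange(z.shape[0])               # positives: same node
        loss = loss + F.cross_entropy(logits, targets)
    return loss / len(layer_embeddings)
```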
arXiv Detail & Related papers (2020-10-29T09:58:02Z)
- The Implicit Bias of Gradient Descent on Separable Data [44.98410310356165]
We show that the predictor converges in direction to the max-margin (hard-margin SVM) solution.
This can help explain the benefit of continuing to optimize the logistic or cross-entropy loss even after the training error is zero.
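A tiny numerical illustration of the claim (data, step size, and iteration count are made up): on separable data the weight norm keeps growing after the training error hits zero, while the direction settles toward the max-margin solution:

```python
import numpy as np

# Linearly separable toy data with labels in {-1, +1}.
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w = np.zeros(2)
for _ in range(50_000):
    margins = y * (X @ w)
    # Gradient of the mean logistic loss log(1 + exp(-margin)).
    grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= 0.1 * grad

# The norm diverges slowly while the direction stabilizes.
print("direction:", w / np.linalg.norm(w), "norm:", np.linalg.norm(w))
```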
arXiv Detail & Related papers (2017-10-27T21:47:58Z)