Distributional Signals for Node Classification in Graph Neural Networks
- URL: http://arxiv.org/abs/2304.03507v1
- Date: Fri, 7 Apr 2023 06:54:42 GMT
- Title: Distributional Signals for Node Classification in Graph Neural Networks
- Authors: Feng Ji, See Hian Lee, Kai Zhao, Wee Peng Tay, Jielong Yang
- Abstract summary: In graph neural networks (GNNs), both node features and labels are examples of graph signals, a key notion in graph signal processing (GSP).
In our framework, we work with the distributions of node labels instead of their values and propose notions of smoothness and non-uniformity of such distributional graph signals.
We then propose a general regularization method for GNNs that allows us to encode distributional smoothness and non-uniformity of the model output in semi-supervised node classification tasks.
- Score: 36.30743671968087
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In graph neural networks (GNNs), both node features and labels are examples
of graph signals, a key notion in graph signal processing (GSP). While it is
common in GSP to impose signal smoothness constraints in learning and
estimation tasks, it is unclear how this can be done for discrete node labels.
We bridge this gap by introducing the concept of distributional graph signals.
In our framework, we work with the distributions of node labels instead of
their values and propose notions of smoothness and non-uniformity of such
distributional graph signals. We then propose a general regularization method
for GNNs that allows us to encode distributional smoothness and non-uniformity
of the model output in semi-supervised node classification tasks. Numerical
experiments demonstrate that our method can significantly improve the
performance of most base GNN models in different problem settings.
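The abstract does not spell out the regularizer itself; as a rough illustration under assumed definitions, distributional smoothness can be read as the Dirichlet energy of the matrix of predicted class distributions, and non-uniformity as the (negative) mean entropy of those distributions. A minimal NumPy sketch, where the function names, the entropy-based reading of non-uniformity, and the weighting are our assumptions rather than the paper's definitions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def distributional_smoothness(adj, probs):
    # Dirichlet energy of the distributional signal P (n x c):
    # sum over edges (i, j) of ||p_i - p_j||^2 = trace(P^T L P),
    # with L = D - A the combinatorial graph Laplacian.
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.trace(probs.T @ lap @ probs)

def non_uniformity(probs, eps=1e-12):
    # One plausible reading: mean negative entropy, which grows as the
    # per-node distributions become more peaked (less uniform).
    return np.mean(np.sum(probs * np.log(probs + eps), axis=1))

# Toy 3-node path graph with 2 classes.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
probs = softmax(np.array([[2.0, 0.0],
                          [1.9, 0.1],
                          [0.0, 2.0]]))
# A regularizer of this flavor would penalize non-smooth outputs while
# rewarding confident (non-uniform) ones; 0.1 is an arbitrary weight.
reg = distributional_smoothness(adj, probs) - 0.1 * non_uniformity(probs)
```

Note that uniform distributions on every node make the smoothness term vanish, which is why a non-uniformity term is needed to rule out that trivial minimizer.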
Related papers
- Scalable and Consistent Graph Neural Networks for Distributed Mesh-based Data-driven Modeling [0.0]
This work develops a distributed graph neural network (GNN) methodology for mesh-based modeling applications.
Consistency refers to the fact that a GNN trained and evaluated on one rank (one large graph) is arithmetically equivalent to evaluations on multiple ranks (a partitioned graph).
It is shown how the NekRS mesh partitioning can be linked to the distributed GNN training and inference routines, resulting in a scalable mesh-based data-driven modeling workflow.
arXiv Detail & Related papers (2024-10-02T15:22:27Z) - Revisiting Neighborhood Aggregation in Graph Neural Networks for Node Classification using Statistical Signal Processing [4.184419714263417]
We reevaluate the concept of neighborhood aggregation, which is a fundamental component in graph neural networks (GNNs).
Our analysis reveals conceptual flaws within certain benchmark GNN models when operating under the assumption of edge-independent node labels.
arXiv Detail & Related papers (2024-07-21T22:37:24Z) - Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned, separately, for the nodes in each group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
arXiv Detail & Related papers (2023-12-16T14:09:23Z) - Stable and Transferable Hyper-Graph Neural Networks [95.07035704188984]
We introduce an architecture for processing signals supported on hypergraphs via graph neural networks (GNNs).
We provide a framework for bounding the stability and transferability error of GNNs across arbitrary graphs via spectral similarity.
arXiv Detail & Related papers (2022-11-11T23:44:20Z) - Graph Neural Networks with Feature and Structure Aware Random Walk [7.143879014059894]
We show that in typical heterophilous graphs, the edges may be directed, and whether to treat the edges as directed or simply make them undirected greatly affects the performance of GNN models.
We develop a model that adaptively learns the directionality of the graph, and exploits the underlying long-distance correlations between nodes.
arXiv Detail & Related papers (2021-11-19T08:54:21Z) - Label Propagation across Graphs: Node Classification using Graph Neural
Tangent Kernels [12.445026956430826]
Graph neural networks (GNNs) have achieved superior performance on node classification tasks.
Our work considers a challenging inductive setting where a set of labeled graphs is available for training while the unlabeled target graph is completely separate.
Under the implicit assumption that the testing and training graphs come from similar distributions, our goal is to develop a labeling function that generalizes to unobserved connectivity structures.
arXiv Detail & Related papers (2021-10-07T19:42:35Z) - Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised
Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, which show that our model can effectively improve the performance of semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z) - A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
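The denoising view above has a closed form worth noting: minimizing ||s - x||^2 + lam * s^T L s over s yields s* = (I + lam * L)^{-1} x, and aggregation steps in many GNNs can be seen as approximations to this solution. A small NumPy sketch of the generic graph-denoising problem (illustrative only, not the ADA-UGNN model):

```python
import numpy as np

def graph_denoise(adj, x, lam=1.0):
    # Closed-form minimizer of ||s - x||^2 + lam * s^T L s,
    # where L = D - A: the solution is s* = (I + lam * L)^{-1} x.
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.linalg.solve(np.eye(adj.shape[0]) + lam * lap, x)

def dirichlet_energy(adj, s):
    # s^T L s: how much the signal varies across edges.
    lap = np.diag(adj.sum(axis=1)) - adj
    return float(s @ lap @ s)

# Noisy scalar signal on a 4-node path graph.
adj = np.array([[0., 1., 0., 0.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [0., 0., 1., 0.]])
x = np.array([1.0, -0.5, 0.8, -0.2])
s = graph_denoise(adj, x, lam=2.0)  # smoother than x, close to x
```

Larger `lam` trades fidelity to the observed signal for smoothness; `lam = 0` returns the input unchanged.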
arXiv Detail & Related papers (2020-10-05T04:57:18Z) - Graphon Pooling in Graph Neural Networks [169.09536309161314]
Graph neural networks (GNNs) have been used effectively in different applications involving the processing of signals on irregular structures modeled by graphs.
We propose a new strategy for pooling and sampling on GNNs using graphons which preserves the spectral properties of the graph.
arXiv Detail & Related papers (2020-03-03T21:04:20Z) - Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between LPA and GCN in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models.
arXiv Detail & Related papers (2020-02-17T03:23:13Z)
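As background for the GCN/LPA unification above, classic label propagation iterates neighborhood averaging of label distributions while clamping the known labels. A minimal sketch of the clamped-iteration variant (hyperparameters and function name are illustrative, not from the paper):

```python
import numpy as np

def label_propagation(adj, y_init, labeled_mask, alpha=0.9, iters=50):
    # Repeatedly average neighbor label distributions via the
    # row-normalized adjacency D^-1 A, clamping labeled nodes back
    # to their known one-hot labels after each step.
    deg = adj.sum(axis=1, keepdims=True)
    p = adj / np.maximum(deg, 1e-12)
    y = y_init.copy()
    for _ in range(iters):
        y = alpha * (p @ y) + (1 - alpha) * y_init
        y[labeled_mask] = y_init[labeled_mask]
    return y

# 4-node path graph 0-1-2-3; node 0 labeled class 0, node 3 class 1.
adj = np.array([[0., 1., 0., 0.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [0., 0., 1., 0.]])
y0 = np.zeros((4, 2))
y0[0, 0] = 1.0
y0[3, 1] = 1.0
mask = np.array([True, False, False, True])
y = label_propagation(adj, y0, mask)
```

In this toy run the unlabeled interior nodes inherit the class of their nearer labeled endpoint, which is the smoothness behavior that the unified GCN/LPA model exploits on node labels.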
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.