Graph Similarity Regularized Softmax for Semi-Supervised Node Classification
- URL: http://arxiv.org/abs/2409.13544v1
- Date: Fri, 20 Sep 2024 14:38:16 GMT
- Title: Graph Similarity Regularized Softmax for Semi-Supervised Node Classification
- Authors: Yiming Yang, Jun Liu, Wei Wan
- Abstract summary: Graph Neural Networks (GNNs) are powerful deep learning models designed for graph-structured data.
We propose a graph similarity regularized softmax for GNNs in semi-supervised node classification.
- Score: 33.297649538686045
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) are powerful deep learning models designed for graph-structured data, demonstrating effectiveness across a wide range of applications. The softmax function is the most commonly used classifier for semi-supervised node classification. However, the softmax function lacks the spatial information of the graph structure. In this paper, we propose a graph similarity regularized softmax for GNNs in semi-supervised node classification. By incorporating non-local total variation (TV) regularization into the softmax activation function, we can more effectively capture the spatial information inherent in graphs. The weights in the non-local gradient and divergence operators are determined by the graph's adjacency matrix. We apply the proposed method to the architectures of GCN and GraphSAGE, testing them on citation and webpage linking datasets, respectively. Numerical experiments demonstrate its good performance in node classification and its generalization capability. These results indicate that the graph similarity regularized softmax is effective on both assortative and disassortative graphs.
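A plausible variational reading of this abstract (an editorial illustration; the exact formulation is not reproduced here from the paper): the plain softmax output is the entropy-regularized maximizer over the probability simplex, and the proposed classifier adds a non-local TV term whose weights w_{ij} come from the adjacency matrix, in the Gilboa-Osher style of non-local operators.

```latex
% Plain softmax as an entropy-regularized maximizer over the simplex \Delta:
\mathrm{softmax}(o_i) = \operatorname*{arg\,max}_{u \in \Delta} \langle o_i, u\rangle + H(u),
\qquad H(u) = -\sum\nolimits_k u_k \log u_k.
% Non-local gradient with adjacency-derived weights w_{ij}, and its TV seminorm:
(\nabla_w U)_{ij} = \sqrt{w_{ij}}\,(u_j - u_i),
\qquad \mathrm{TV}_w(U) = \sum_i \Big(\sum_j w_{ij}\,\lVert u_j - u_i\rVert_2^2\Big)^{1/2}.
% One plausible regularized classifier (each row u_i of U constrained to \Delta):
U^\star = \operatorname*{arg\,min}_{U} \sum_i \big(-\langle o_i, u_i\rangle - H(u_i)\big) + \lambda\,\mathrm{TV}_w(U).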
Related papers
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world application of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
- GRAN is superior to GraphRNN: node orderings, kernel- and graph embeddings-based metrics for graph generators [0.6816499294108261]
We study kernel-based metrics on distributions of graph invariants and manifold-based and kernel-based metrics in graph embedding space.
We compare GraphRNN and GRAN, two well-known generative models for graphs, and unveil the influence of node orderings.
arXiv Detail & Related papers (2023-07-13T12:07:39Z)
- A Robust Stacking Framework for Training Deep Graph Models with Multifaceted Node Features [61.92791503017341]
Graph Neural Networks (GNNs) with numerical node features and graph structure as inputs have demonstrated superior performance on various supervised learning tasks with graph data.
The best models for such data types in most standard supervised learning settings with IID (non-graph) data are not easily incorporated into a GNN.
Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data.
arXiv Detail & Related papers (2022-06-16T22:46:33Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Graph Neural Networks with Feature and Structure Aware Random Walk [7.143879014059894]
We show that in typical heterophilous graphs, the edges may be directed, and whether the edges are treated as directed or simply made undirected greatly affects the performance of GNN models.
We develop a model that adaptively learns the directionality of the graph, and exploits the underlying long-distance correlations between nodes.
arXiv Detail & Related papers (2021-11-19T08:54:21Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- ACE-HGNN: Adaptive Curvature Exploration Hyperbolic Graph Neural Network [72.16255675586089]
We propose an Adaptive Curvature Exploration Hyperbolic Graph Neural Network, named ACE-HGNN, to adaptively learn the optimal curvature according to the input graph and downstream tasks.
Experiments on multiple real-world graph datasets demonstrate a significant and consistent improvement in model quality, with competitive performance and good generalization ability.
arXiv Detail & Related papers (2021-10-15T07:18:57Z)
- Scalable Graph Neural Networks for Heterogeneous Graphs [12.44278942365518]
Graph neural networks (GNNs) are a popular class of parametric model for learning over graph-structured data.
Recent work has argued that GNNs primarily use the graph for feature smoothing, showing competitive results on benchmark tasks.
In this work, we ask whether these results can be extended to heterogeneous graphs, which encode multiple types of relationship between different entities.
arXiv Detail & Related papers (2020-11-19T06:03:35Z)
- Pseudoinverse Graph Convolutional Networks: Fast Filters Tailored for Large Eigengaps of Dense Graphs and Hypergraphs [0.0]
Graph Convolutional Networks (GCNs) have proven to be successful tools for semi-supervised classification on graph-based datasets.
We propose a new GCN variant whose three-part filter space is targeted at dense graphs.
arXiv Detail & Related papers (2020-08-03T08:48:41Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Unsupervised Graph Representation by Periphery and Hierarchical Information Maximization [18.7475578342125]
The advent of graph neural networks has improved the state of the art for both node-level and whole-graph representation in a vector space.
For whole-graph representation, most existing graph neural networks are trained with a graph classification loss in a supervised way.
In this paper, we propose an unsupervised graph neural network that generates a vector representation of an entire graph.
arXiv Detail & Related papers (2020-06-08T15:50:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.