Wide & Deep Learning for Node Classification
- URL: http://arxiv.org/abs/2505.02020v1
- Date: Sun, 04 May 2025 07:53:16 GMT
- Title: Wide & Deep Learning for Node Classification
- Authors: Yancheng Chen, Wenguo Yang, Zhipeng Jiang
- Abstract summary: Graph convolutional networks (GCNs) remain dominant in node classification tasks. We propose a flexible framework, GCNIII, which incorporates three techniques: Intersect memory, Initial residual, and Identity mapping. We provide empirical evidence showing that GCNIII can more effectively balance the trade-off between over-fitting and over-generalization.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Wide & Deep, a simple yet effective learning architecture for recommendation systems developed by Google, has had a significant impact in both academia and industry due to its combination of the memorization ability of generalized linear models and the generalization ability of deep models. Graph convolutional networks (GCNs) remain dominant in node classification tasks; however, recent studies have highlighted issues such as heterophily and expressiveness, which focus on graph structure while seemingly neglecting the potential role of node features. In this paper, we propose a flexible framework GCNIII, which leverages the Wide & Deep architecture and incorporates three techniques: Intersect memory, Initial residual and Identity mapping. We provide comprehensive empirical evidence showing that GCNIII can more effectively balance the trade-off between over-fitting and over-generalization on various semi- and full- supervised tasks. Additionally, we explore the use of large language models (LLMs) for node feature engineering to enhance the performance of GCNIII in cross-domain node classification tasks. Our implementation is available at https://github.com/CYCUCAS/GCNIII.
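The Wide & Deep combination referenced in the abstract is easy to sketch: a linear model over wide (cross-)features and an MLP over deep features share one logistic output. Below is a minimal NumPy illustration with made-up shapes and weights; the production architecture uses cross-product feature transformations and embeddings, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def wide_and_deep(x_wide, x_deep, w_wide, W1, W2, w_out, b):
    """Joint prediction: wide linear logit + deep MLP logit, summed before the sigmoid."""
    wide_logit = x_wide @ w_wide                 # memorization: linear part over wide features
    h = relu(relu(x_deep @ W1) @ W2)             # generalization: deep representation
    deep_logit = h @ w_out
    return sigmoid(wide_logit + deep_logit + b)  # single logistic output

# toy shapes: 4 samples, 6 wide features, 8 deep features
x_wide = rng.normal(size=(4, 6))
x_deep = rng.normal(size=(4, 8))
p = wide_and_deep(x_wide, x_deep,
                  rng.normal(size=6), rng.normal(size=(8, 16)),
                  rng.normal(size=(16, 16)), rng.normal(size=16), 0.0)
print(p.shape)  # (4,)
```

Both parts are trained jointly, so the gradient of the single output flows into the linear weights and the MLP simultaneously; this is the memorization-plus-generalization balance the abstract carries over to node classification.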
Related papers
- Mamba-Based Graph Convolutional Networks: Tackling Over-smoothing with Selective State Space [33.677431350509224]
We introduce MbaGCN, a novel graph convolutional architecture that draws inspiration from the Mamba paradigm. MbaGCN presents a new backbone for GNNs, consisting of three key components: the Message Aggregation Layer, the Selective State Space Transition Layer, and the Node State Prediction Layer.
arXiv Detail & Related papers (2025-01-26T09:09:44Z) - How to Make LLMs Strong Node Classifiers? [70.14063765424012]
Language Models (LMs) are challenging the dominance of domain-specific models, such as Graph Neural Networks (GNNs) and Graph Transformers (GTs). We propose a novel approach that empowers off-the-shelf LMs to achieve performance comparable to state-of-the-art (SOTA) GNNs on node classification tasks.
arXiv Detail & Related papers (2024-10-03T08:27:54Z) - Multi-Level Graph Convolutional Network with Automatic Graph Learning for Hyperspectral Image Classification [63.56018768401328]
We propose a Multi-level Graph Convolutional Network (GCN) with Automatic Graph Learning method (MGCN-AGL) for HSI classification.
By employing an attention mechanism to characterize the importance of spatially neighboring regions, the most relevant information can be adaptively incorporated into the decision.
Our MGCN-AGL encodes the long-range dependencies among image regions based on the expressive representations produced at the local level.
arXiv Detail & Related papers (2020-09-19T09:26:20Z) - Beyond Localized Graph Neural Networks: An Attributed Motif Regularization Framework [6.790281989130923]
InfoMotif is a new semi-supervised, motif-regularized learning framework for graphs.
We overcome two key limitations of message passing in graph neural networks (GNNs).
We show significant gains (3-10% accuracy) across six diverse, real-world datasets.
arXiv Detail & Related papers (2020-09-11T02:03:09Z) - Dynamic GCN: Context-enriched Topology Learning for Skeleton-based Action Recognition [40.467040910143616]
We propose Dynamic GCN, in which a novel convolutional neural network named Context-encoding Network (CeN) is introduced to learn skeleton topology automatically.
CeN is extremely lightweight yet effective, and can be embedded into a graph convolutional layer.
Dynamic GCN achieves better performance with $2\times$ to $4\times$ fewer FLOPs than existing methods.
arXiv Detail & Related papers (2020-07-29T09:12:06Z) - AM-GCN: Adaptive Multi-channel Graph Convolutional Networks [85.0332394224503]
We study whether Graph Convolutional Networks (GCNs) can optimally integrate node features and topological structures in a complex graph with rich information.
We propose an adaptive multi-channel graph convolutional network for semi-supervised classification (AM-GCN).
Our experiments show that AM-GCN extracts the most correlated information from both node features and topological structures, improving classification accuracy by a clear margin.
arXiv Detail & Related papers (2020-07-05T08:16:03Z) - Simple and Deep Graph Convolutional Networks [63.76221532439285]
Graph convolutional networks (GCNs) are a powerful deep learning approach for graph-structured data.
Despite their success, most current GCN models are shallow due to the over-smoothing problem.
We propose GCNII, an extension of the vanilla GCN model with two simple yet effective techniques.
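The two techniques, initial residual connections and identity mapping, combine in a single layer update. As commonly written for GCNII (with $\tilde{P}$ the renormalized adjacency with self-loops, $H^{(0)}$ the initial node representation, and hyperparameters $\alpha$, $\beta_\ell$), the rule is:

```latex
H^{(\ell+1)} = \sigma\!\left( \left( (1-\alpha)\,\tilde{P}\,H^{(\ell)} + \alpha\,H^{(0)} \right) \left( (1-\beta_\ell)\,I_n + \beta_\ell\,W^{(\ell)} \right) \right)
```

A small $\alpha$ (e.g. 0.1) preserves a fraction of the input signal at every depth, while letting $\beta_\ell$ decay with depth keeps deep layers close to identity maps; together these allow stacking many layers without over-smoothing.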
arXiv Detail & Related papers (2020-07-04T16:18:06Z) - Graph Prototypical Networks for Few-shot Learning on Attributed Networks [72.31180045017835]
We propose a graph meta-learning framework, Graph Prototypical Networks (GPN).
GPN is able to perform meta-learning on an attributed network and derive a highly generalizable model for handling the target classification task.
arXiv Detail & Related papers (2020-06-23T04:13:23Z) - DeeperGCN: All You Need to Train Deeper GCNs [66.64739331859226]
Graph Convolutional Networks (GCNs) have been drawing significant attention with the power of representation learning on graphs.
Unlike Convolutional Neural Networks (CNNs), which are able to take advantage of stacking very deep layers, GCNs suffer from vanishing gradient, over-smoothing and over-fitting issues when going deeper.
This paper proposes DeeperGCN that is capable of successfully and reliably training very deep GCNs.
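The residual idea behind training deeper GCNs can be sketched in a few lines. This is a generic pre-activation residual GCN layer in NumPy with hypothetical names and toy shapes, not DeeperGCN's exact `res+` block or its generalized aggregation:

```python
import numpy as np

def norm_adj(A):
    """Symmetrically normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def residual_gcn_layer(H, P, W):
    """H + ReLU(P H W): the skip connection keeps gradients flowing at depth."""
    return H + np.maximum(P @ H @ W, 0.0)

# toy path graph 0-1-2-3 with 8-dimensional node features
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
P = norm_adj(A)
H = np.random.default_rng(1).normal(size=(4, 8))
W = np.eye(8)
for _ in range(16):  # stack 16 layers; the skip keeps the signal from vanishing
    H = residual_gcn_layer(H, P, W)
print(H.shape)  # (4, 8)
```

Without the `H +` skip, repeated multiplication by `P` drives node representations toward a dominant eigenvector, which is the over-smoothing effect named above.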
arXiv Detail & Related papers (2020-06-13T23:00:22Z) - Cross-GCN: Enhancing Graph Convolutional Network with $k$-Order Feature Interactions [153.6357310444093]
Graph Convolutional Network (GCN) is an emerging technique that performs learning and reasoning on graph data.
We argue that existing designs of GCN forgo modeling cross features, making GCN less effective for tasks or data where cross features are important.
We design a new operator named Cross-feature Graph Convolution, which explicitly models arbitrary-order cross features with complexity linear in the feature dimension and order size.
arXiv Detail & Related papers (2020-03-05T13:05:27Z) - Structural Deep Clustering Network [45.370272344031285]
We propose a Structural Deep Clustering Network (SDCN) to integrate the structural information into deep clustering.
Specifically, we design a delivery operator to transfer the representations learned by autoencoder to the corresponding GCN layer.
In this way, the multiple structures of data, from low-order to high-order, are naturally combined with the multiple representations learned by autoencoder.
arXiv Detail & Related papers (2020-02-05T04:33:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.