Oldie but Goodie: Re-illuminating Label Propagation on Graphs with Partially Observed Features
- URL: http://arxiv.org/abs/2508.01209v1
- Date: Sat, 02 Aug 2025 05:50:41 GMT
- Title: Oldie but Goodie: Re-illuminating Label Propagation on Graphs with Partially Observed Features
- Authors: Sukwon Yun, Xin Liu, Yunhak Oh, Junseok Lee, Tianlong Chen, Tsuyoshi Murata, Chanyoung Park
- Abstract summary: In real-world graphs, we often encounter missing-feature situations where a few or the majority of node features are missing. Despite the emergence of a few GNN-based methods attempting to mitigate the missing-feature problem, they perform worse than traditional structure-based models. We propose a novel framework that takes advantage of Feature Propagation, especially when only partial features are available. Our proposed model, GOODIE, outperforms the existing state-of-the-art methods not only when only a few features are available but also when features are abundant.
- Score: 44.440062626322444
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In real-world graphs, we often encounter missing-feature situations in which a few or even the majority of node features, e.g., sensitive information, are missing. In such scenarios, directly applying Graph Neural Networks (GNNs) yields sub-optimal results in downstream tasks such as node classification. Although a few GNN-based methods have emerged to mitigate the missing-feature problem, when only a few features are available they actually perform worse than traditional structure-based models. To this end, we propose a novel framework that re-illuminates the potential of classical Label Propagation (the "Oldie"), taking advantage of Feature Propagation, especially when only partial features are available. The resulting model, GOODIE, takes a hybrid approach that combines embeddings from a Label Propagation (LP) branch and a Feature Propagation (FP) branch. We first design a GNN-based decoder that enables the LP branch to output hidden embeddings aligned with those of the FP branch. GOODIE then automatically captures the relative importance of structure and feature information through a newly designed Structure-Feature Attention. Finally, a novel pseudo-label contrastive learning scheme differentiates the contribution of each positive pair within pseudo-labels originating from the LP branch, and GOODIE outputs final predictions for the unlabeled nodes. Through extensive experiments, we demonstrate that GOODIE outperforms the existing state-of-the-art methods not only when only a few features are available but also when features are abundant. Source code of GOODIE is available at: https://github.com/SukwonYun/GOODIE.
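For intuition, below is a minimal numpy sketch of the two classical branches GOODIE builds on: label propagation with clamped labels and feature propagation over partially observed features. The function names, iteration counts, and fixed mixing weight are illustrative assumptions; the paper's GNN-based decoder, Structure-Feature Attention, and pseudo-label contrastive loss are not reproduced here.

```python
import numpy as np

def sym_normalize(A):
    """Symmetric normalization D^{-1/2} A D^{-1/2} of an adjacency matrix."""
    d = A.sum(axis=1).astype(float)
    d_inv_sqrt = np.zeros_like(d)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def label_propagation(A_hat, Y, labeled, n_iters=50, alpha=0.9):
    """Classical LP: diffuse one-hot labels, re-clamping the observed ones."""
    Z = Y.copy()
    for _ in range(n_iters):
        Z = alpha * (A_hat @ Z) + (1.0 - alpha) * Y
        Z[labeled] = Y[labeled]  # clamp known labels each iteration
    return Z

def feature_propagation(A_hat, X, observed, n_iters=50):
    """FP for missing features: diffuse, re-clamping observed entries."""
    X_hat = np.where(observed, X, 0.0)  # unknown entries start at zero
    for _ in range(n_iters):
        X_hat = A_hat @ X_hat
        X_hat = np.where(observed, X, X_hat)  # keep observed features fixed
    return X_hat
```

A hybrid model in the spirit of GOODIE would embed the outputs of both branches and fuse them with learned attention rather than a fixed mixing weight.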
Related papers
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
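As a rough illustration of the context-sampling step (the walk length and uniform transitions are assumptions, not GSPT's exact settings):

```python
import random

def sample_context(adj_list, start, walk_length=8):
    """Sample a node-context sequence with a plain uniform random walk;
    a Transformer can then consume the walk as a token sequence."""
    walk = [start]
    for _ in range(walk_length - 1):
        neighbors = adj_list[walk[-1]]
        if not neighbors:  # dangling node: stop the walk early
            break
        walk.append(random.choice(neighbors))
    return walk

# Toy usage: one context per node
adj_list = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
contexts = [sample_context(adj_list, v) for v in adj_list]
```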
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
- Multi-View Subgraph Neural Networks: Self-Supervised Learning with Scarce Labeled Data [24.628203785306233]
We present a novel learning framework called multi-view subgraph neural networks (Muse) for handling long-range dependencies.
By fusing two views of subgraphs, the learned representations can preserve the topological properties of the graph at large.
Experimental results show that Muse outperforms the alternative methods on node classification tasks with limited labeled data.
arXiv Detail & Related papers (2024-04-19T01:36:50Z)
- Unifying Label-inputted Graph Neural Networks with Deep Equilibrium Models [12.71307159013144]
This work unifies the two kinds of Graph Neural Networks (GNNs) by interpreting label-inputted GNNs (LGNNs) through the theory of Implicit GNNs (IGNNs).
IGNN exploits information from the entire graph to capture long-range dependencies, but its network is constrained to guarantee the existence of the equilibrium.
In this work, implicit differentiation of IGNN is introduced to differentiate its infinite-range label propagation with constant memory, making the propagation both distant and adaptive.
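To make the equilibrium view concrete, here is a small sketch (an illustration of the idea, not the authors' code): label propagation iterated to its fixed point Z* = alpha * A_hat @ Z* + (1 - alpha) * Y, which is unique when alpha < 1 because the update is a contraction. Implicit differentiation backpropagates through Z* alone, so memory stays constant regardless of how many iterations ran.

```python
import numpy as np

def equilibrium_label_propagation(A_hat, Y, alpha=0.5, tol=1e-6, max_iters=1000):
    """Iterate Z <- alpha * A_hat @ Z + (1 - alpha) * Y to convergence.
    With alpha < 1 and a normalized A_hat (spectral radius <= 1) the map
    is a contraction, so a unique equilibrium exists -- the kind of
    constraint IGNN imposes to be well-defined."""
    Z = Y.copy()
    for _ in range(max_iters):
        Z_new = alpha * (A_hat @ Z) + (1.0 - alpha) * Y
        if np.max(np.abs(Z_new - Z)) < tol:
            return Z_new
        Z = Z_new
    return Z
```

The same fixed point has the closed form Z* = (1 - alpha) (I - alpha A_hat)^{-1} Y, which the iteration approximates without forming a dense inverse.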
arXiv Detail & Related papers (2022-11-19T09:28:53Z)
- Learning with Few Labeled Nodes via Augmented Graph Self-Training [36.97506256446519]
An Augmented Graph Self-Training (AGST) framework is built with two new (i.e., structural and semantic) augmentation modules on top of a decoupled GST backbone.
We investigate whether this novel framework can learn an effective graph predictive model with extremely limited labeled nodes.
arXiv Detail & Related papers (2022-08-26T03:36:01Z)
- Label-Enhanced Graph Neural Network for Semi-supervised Node Classification [32.64730237473914]
We present a label-enhanced learning framework for Graph Neural Networks (GNNs).
It first models each label as a virtual center for intra-class nodes and then jointly learns the representations of both nodes and labels.
Our approach could not only smooth the representations of nodes belonging to the same class, but also explicitly encode the label semantics into the learning process of GNNs.
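One plausible reading of the virtual-center construction (an assumption about the mechanism, not the paper's code): append one extra node per class and wire it to the labeled nodes of that class, so that ordinary message passing pulls intra-class representations together.

```python
import numpy as np

def add_label_centers(A, train_idx, y_train, num_classes):
    """Return an adjacency matrix augmented with one virtual node per label,
    each connected to the labeled training nodes of its class."""
    n = A.shape[0]
    A_aug = np.zeros((n + num_classes, n + num_classes))
    A_aug[:n, :n] = A
    for node, label in zip(train_idx, y_train):
        center = n + label  # index of this class's virtual center
        A_aug[node, center] = A_aug[center, node] = 1.0
    return A_aug
```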
arXiv Detail & Related papers (2022-05-31T09:48:47Z)
- Why Propagate Alone? Parallel Use of Labels and Features on Graphs [42.01561812621306]
Graph neural networks (GNNs) and label propagation represent two interrelated modeling strategies designed to exploit graph structure in tasks such as node property prediction.
We show that a label trick can be reduced to an interpretable, deterministic training objective composed of two factors.
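The label trick referenced here is commonly implemented by randomly splitting the training labels at every step: one part is fed to the model as input, the other is held out as the supervision target. A minimal sketch of that split (the 50/50 fraction is an assumption):

```python
import numpy as np

def label_trick_split(train_idx, input_frac=0.5, rng=None):
    """Partition labeled nodes into an 'input' set (labels fed to the model)
    and a 'target' set (labels used only in the loss)."""
    rng = rng or np.random.default_rng()
    perm = rng.permutation(np.asarray(train_idx))
    cut = int(len(perm) * input_frac)
    return perm[:cut], perm[cut:]  # (input nodes, supervision nodes)
```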
arXiv Detail & Related papers (2021-10-14T07:34:11Z)
- GIPA: General Information Propagation Algorithm for Graph Learning [3.228614352581043]
We present a new graph attention neural network, namely GIPA, for attributed graph data learning.
GIPA consists of three key components: attention, feature propagation and aggregation.
We evaluate the performance of GIPA using the Open Graph Benchmark proteins dataset.
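Reading the three components literally, one layer might look like the generic sketch below; GIPA's actual attention operates on edge features and is more elaborate, so treat this as a shape-level illustration only.

```python
import numpy as np

def gipa_like_layer(A, H, W_att, W_agg):
    """Generic attention -> propagation -> aggregation layer.
    A: (n, n) adjacency, H: (n, d) features, W_att: (d, d), W_agg: (d, d_out).
    Assumes every node has at least one neighbor (add self-loops otherwise)."""
    scores = (H @ W_att) @ H.T                 # pairwise attention logits
    scores = np.where(A > 0, scores, -np.inf)  # restrict to existing edges
    att = np.exp(scores - scores.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)      # softmax over neighbors
    return np.maximum((att @ H) @ W_agg, 0.0)  # propagate, aggregate, ReLU
```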
arXiv Detail & Related papers (2021-05-13T01:50:43Z)
- Higher-Order Attribute-Enhancing Heterogeneous Graph Neural Networks [67.25782890241496]
We propose a higher-order Attribute-Enhancing Graph Neural Network (HAEGNN) for heterogeneous network representation learning.
HAEGNN simultaneously incorporates meta-paths and meta-graphs for rich, heterogeneous semantics.
It shows superior performance against the state-of-the-art methods in node classification, node clustering, and visualization.
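For the meta-path side, the standard heterogeneous-graph construction (well known in the literature, not specific to HAEGNN) composes relation adjacency matrices along the path; nonzero entries of the product count meta-path instances:

```python
import numpy as np

def meta_path_adjacency(*relations):
    """Compose relation matrices along a meta-path, e.g. Author-Paper-Author:
    meta_path_adjacency(A_AP, A_AP.T). Entry (i, j) counts path instances."""
    result = relations[0]
    for R in relations[1:]:
        result = result @ R
    return result

# Toy example: 2 authors, 3 papers
A_AP = np.array([[1.0, 1.0, 0.0],
                 [0.0, 1.0, 1.0]])         # author-writes-paper relation
A_APA = meta_path_adjacency(A_AP, A_AP.T)  # co-authorship via shared papers
```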
arXiv Detail & Related papers (2021-04-16T04:56:38Z)
- On the Equivalence of Decoupled Graph Convolution Network and Label Propagation [60.34028546202372]
Some work shows that the coupled design is inferior to the decoupled one, which better supports deep graph propagation.
Despite effectiveness, the working mechanisms of the decoupled GCN are not well understood.
We propose a new label propagation method named Propagation then Training Adaptively (PTA), which overcomes the flaws of the decoupled GCN.
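The decoupled design referenced here separates the two stages: propagate first with no learned weights, then train a simple predictor on the smoothed signal. A minimal sketch of the decoupling (PTA's adaptive pseudo-label weighting is omitted):

```python
import numpy as np

def propagate_then_train_features(A_hat, X, k=10):
    """Decoupled GCN, stage one: smooth features with k parameter-free
    propagation steps. Any classifier (e.g., logistic regression) can then
    be trained on the returned matrix restricted to labeled nodes."""
    X_prop = X.copy()
    for _ in range(k):
        X_prop = A_hat @ X_prop
    return X_prop
```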
arXiv Detail & Related papers (2020-10-23T13:57:39Z)
- Structural Temporal Graph Neural Networks for Anomaly Detection in Dynamic Graphs [54.13919050090926]
We propose an end-to-end structural temporal Graph Neural Network model for detecting anomalous edges in dynamic graphs.
In particular, we first extract the $h$-hop enclosing subgraph centered on the target edge and propose a node labeling function to identify the role of each node in the subgraph.
Based on the extracted features, we utilize gated recurrent units (GRUs) to capture the temporal information for anomaly detection.
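A sketch of the first step as described: collecting the $h$-hop enclosing subgraph around a target edge by BFS from both endpoints (the node-role labeling and the GRU stage are omitted):

```python
from collections import deque

def h_hop_enclosing_subgraph(adj_list, edge, h):
    """Return the nodes within h hops of either endpoint of `edge`."""
    u, v = edge
    dist = {u: 0, v: 0}
    queue = deque([u, v])
    while queue:
        node = queue.popleft()
        if dist[node] == h:  # do not expand past the h-hop boundary
            continue
        for nbr in adj_list[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return set(dist)
```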
arXiv Detail & Related papers (2020-05-15T09:17:08Z)
- Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between LPA and GCN in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models.
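One illustrative reading of label-based attention (an assumption, not the paper's exact formulation): weight each existing edge by the agreement of its endpoints' soft label distributions, then row-normalize.

```python
import numpy as np

def label_based_edge_weights(A, soft_labels):
    """Weight edges by the dot product of the endpoints' predicted class
    distributions; rows are renormalized to sum to one where possible."""
    agreement = soft_labels @ soft_labels.T  # (n, n) label similarity
    W = A * agreement                        # keep only existing edges
    row_sum = W.sum(axis=1, keepdims=True)
    return np.divide(W, row_sum, out=np.zeros_like(W), where=row_sum > 0)
```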
arXiv Detail & Related papers (2020-02-17T03:23:13Z)