Aggregation Buffer: Revisiting DropEdge with a New Parameter Block
- URL: http://arxiv.org/abs/2505.20840v1
- Date: Tue, 27 May 2025 07:59:17 GMT
- Title: Aggregation Buffer: Revisiting DropEdge with a New Parameter Block
- Authors: Dooho Lee, Myeong Kong, Sagad Hamid, Cheonwoo Lee, Jaemin Yoo
- Abstract summary: We revisit DropEdge, a data augmentation technique for GNNs which randomly removes edges to expose diverse graph structures during training. We provide a theoretical analysis showing that the limited performance of DropEdge comes from a fundamental limitation that exists in many GNN architectures. Our method is compatible with any GNN model and shows consistent performance improvements on multiple datasets.
- Score: 10.437971325118731
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We revisit DropEdge, a data augmentation technique for GNNs which randomly removes edges to expose diverse graph structures during training. While it is a promising approach to effectively reduce overfitting on specific connections in the graph, we observe that its potential performance gain in supervised learning tasks is significantly limited. To understand why, we provide a theoretical analysis showing that the limited performance of DropEdge comes from a fundamental limitation that exists in many GNN architectures. Based on this analysis, we propose Aggregation Buffer, a parameter block specifically designed to improve the robustness of GNNs by addressing the limitation of DropEdge. Our method is compatible with any GNN model, and shows consistent performance improvements on multiple datasets. Moreover, our method effectively addresses well-known problems such as degree bias or structural disparity as a unifying solution. Code and datasets are available at https://github.com/dooho00/agg-buffer.
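For context, the DropEdge augmentation the abstract builds on simply removes a random subset of edges at each training step while keeping the full graph at evaluation time. Below is a minimal sketch of that idea in PyTorch; the function name, drop ratio, and the `[2, num_edges]` `edge_index` layout are illustrative assumptions, not code from the paper.

```python
import torch

def drop_edge(edge_index: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """DropEdge-style augmentation: keep each edge independently with probability 1 - p.

    edge_index: [2, num_edges] COO connectivity, as used by common GNN libraries.
    """
    num_edges = edge_index.size(1)
    keep_mask = torch.rand(num_edges) >= p  # Bernoulli mask over edges
    return edge_index[:, keep_mask]

# Usage during training only; evaluation uses the original edge_index.
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])
augmented = drop_edge(edge_index, p=0.5)
```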
Related papers
- Effects of Random Edge-Dropping on Over-Squashing in Graph Neural Networks [8.524684315458243]
We present theoretical results that characterize the negative effects of DropEdge on sensitivity between distant nodes. Our findings are easily extended to its variants, allowing us to build a comprehensive understanding of how they affect over-squashing. Our conclusions highlight the need to re-evaluate various methods designed for training deep GNNs.
arXiv Detail & Related papers (2025-02-11T08:36:38Z)
- Why Does Dropping Edges Usually Outperform Adding Edges in Graph Contrastive Learning? [54.44813218411879]
We introduce a new metric, namely Error Passing Rate (EPR), to quantify how well a graph fits the network. Inspired by the theoretical conclusions and the idea of positive-incentive noise, we propose a novel GCL algorithm, Error-PAssing-based Graph Contrastive Learning (EPAGCL). We generate views by adding and dropping edges based on the weights derived from EPR (see the weighted-dropping sketch after this list).
arXiv Detail & Related papers (2024-12-11T06:31:06Z)
- ADEdgeDrop: Adversarial Edge Dropping for Robust Graph Neural Networks [53.41164429486268]
Graph Neural Networks (GNNs) have exhibited a powerful ability to gather graph-structured information from neighboring nodes.
The performance of GNNs is limited by poor generalization and fragile robustness caused by noisy and redundant graph data.
We propose a novel adversarial edge-dropping method (ADEdgeDrop) that leverages an adversarial edge predictor to guide the removal of edges.
arXiv Detail & Related papers (2024-03-14T08:31:39Z)
- Revisiting Edge Perturbation for Graph Neural Network in Graph Data Augmentation and Attack [58.440711902319855]
Edge perturbation is a method to modify graph structures.
It can be categorized into two veins based on its effect on the performance of graph neural networks (GNNs).
We propose a unified formulation and establish a clear boundary between two categories of edge perturbation methods.
arXiv Detail & Related papers (2024-03-10T15:50:04Z)
- Structure-Aware DropEdge Towards Deep Graph Convolutional Networks [83.38709956935095]
Graph Convolutional Networks (GCNs) encounter a remarkable drop in performance when multiple layers are stacked.
Over-smoothing isolates the network output from the input as network depth increases, weakening expressivity and trainability.
We investigate refined measures upon DropEdge -- an existing simple yet effective technique to relieve over-smoothing.
arXiv Detail & Related papers (2023-06-21T08:11:40Z)
- Are All Edges Necessary? A Unified Framework for Graph Purification [6.795209119198288]
Not all edges in a graph are necessary for the training of machine learning models.
In this paper, we provide a method to drop edges in order to purify the graph data from a new perspective.
arXiv Detail & Related papers (2022-11-09T20:28:25Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Reliable Representations Make A Stronger Defender: Unsupervised Structure Refinement for Robust GNN [36.045702771828736]
Graph Neural Networks (GNNs) have been successful on a wide range of tasks over graph data.
Recent studies have shown that attackers can catastrophically degrade the performance of GNNs by maliciously modifying the graph structure.
We propose an unsupervised pipeline, named STABLE, to optimize the graph structure.
arXiv Detail & Related papers (2022-06-30T10:02:32Z)
- Learning heterophilious edge to drop: A general framework for boosting graph neural networks [19.004710957882402]
This work is the first to mitigate the negative impacts of heterophily by optimizing the graph structure.
We propose a structure learning method called LHE to identify heterophilious edges to drop.
Experiments demonstrate the remarkable performance improvement of GNNs with LHE on multiple datasets across the full spectrum of homophily levels.
arXiv Detail & Related papers (2022-05-23T14:07:29Z)
- Training Robust Graph Neural Networks with Topology Adaptive Edge Dropping [116.26579152942162]
Graph neural networks (GNNs) are processing architectures that exploit graph structural information to model representations from network data.
Despite their success, GNNs suffer from sub-optimal generalization performance given limited training data.
This paper proposes Topology Adaptive Edge Dropping to improve generalization performance and learn robust GNN models.
arXiv Detail & Related papers (2021-06-05T13:20:36Z)
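Several of the related papers above (e.g., ADEdgeDrop, EPAGCL, and Topology Adaptive Edge Dropping) replace DropEdge's uniform removal with per-edge drop probabilities derived from a learned or structural score. The sketch below shows only that shared pattern; the degree-based weighting is a purely illustrative assumption and not the mechanism of any specific paper.

```python
import torch

def weighted_drop_edge(edge_index: torch.Tensor,
                       drop_prob: torch.Tensor) -> torch.Tensor:
    """Drop each edge independently with its own probability.

    drop_prob: [num_edges] per-edge drop probabilities; how they are computed
    (e.g., from degrees, EPR, or an adversarial edge predictor) is method-specific.
    """
    keep_mask = torch.rand(drop_prob.size(0)) >= drop_prob
    return edge_index[:, keep_mask]

# Illustrative assumption: bias dropping toward edges incident to high-degree nodes.
edge_index = torch.tensor([[0, 0, 0, 1, 2],
                           [1, 2, 3, 2, 3]])
num_nodes = int(edge_index.max()) + 1
deg = torch.bincount(edge_index.reshape(-1), minlength=num_nodes).float()
edge_deg = deg[edge_index[0]] + deg[edge_index[1]]
drop_prob = 0.5 * edge_deg / edge_deg.max()  # higher-degree edges dropped more often
augmented = weighted_drop_edge(edge_index, drop_prob)
```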
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.