EDN: A Novel Edge-Dependent Noise Model for Graph Data
- URL: http://arxiv.org/abs/2506.11368v1
- Date: Fri, 13 Jun 2025 00:05:00 GMT
- Title: EDN: A Novel Edge-Dependent Noise Model for Graph Data
- Authors: Pintu Kumar, Nandyala Hemachandra
- Abstract summary: Edge-Dependent Noise (EDN) addresses a limitation of existing node label noise models. EDN posits that in real-world scenarios, label noise may be influenced by the connections between nodes. We show that two variants of EDN lead to greater performance degradation in both Graph Neural Networks (GNNs) and existing noise-robust algorithms.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An important structural feature of a graph is its set of edges, as it captures the relationships among the nodes (the graph's topology). Existing node label noise models like Symmetric Label Noise (SLN) and Class Conditional Noise (CCN) disregard this important node relationship in graph data; the Edge-Dependent Noise (EDN) model addresses this limitation. EDN posits that in real-world scenarios, label noise may be influenced by the connections between nodes. We explore three variants of EDN. A crucial notion that relates nodes and edges in a graph is the degree of a node; we show that in all three variants, the probability of a node's label corruption depends on its degree. Additionally, we compare the dependence of these probabilities on node degree across the variants. We performed experiments on popular graph datasets using five different GNN architectures and eight noise-robust algorithms for graph data. The results demonstrate that two variants of EDN lead to greater performance degradation in both Graph Neural Networks (GNNs) and existing noise-robust algorithms, as compared to traditional node label noise models. We statistically verify this by posing a suitable hypothesis-testing problem. This emphasizes the importance of incorporating EDN when evaluating noise-robust algorithms for graphs, to enhance the reliability of graph-based learning in noisy environments.
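The abstract does not spell out the three EDN variants, but it does state that a node's corruption probability depends on its degree. One plausible illustrative construction, assumed here for concreteness rather than taken from the paper, lets each incident edge independently trigger corruption with some rate `beta`, which makes the corruption probability P(corrupt v) = 1 - (1 - beta)^deg(v), increasing in the node's degree:

```python
import random

def edge_dependent_noise(labels, adjacency, num_classes, beta=0.1, seed=0):
    """Illustrative edge-dependent label noise (NOT the paper's exact model).

    Each edge incident to a node independently 'votes' to corrupt it with
    probability beta, so P(corrupt v) = 1 - (1 - beta) ** deg(v): higher-degree
    nodes are more likely to receive a flipped label.
    """
    rng = random.Random(seed)
    noisy = list(labels)
    for v, neighbors in adjacency.items():
        p_corrupt = 1.0 - (1.0 - beta) ** len(neighbors)
        if rng.random() < p_corrupt:
            # Replace the label with a uniformly chosen *different* class.
            other_classes = [c for c in range(num_classes) if c != labels[v]]
            noisy[v] = rng.choice(other_classes)
    return noisy

# Small example: node 3 is isolated (degree 0), so it can never be corrupted.
adjacency = {0: [1, 2], 1: [0], 2: [0], 3: []}
labels = [0, 1, 0, 1]
noisy = edge_dependent_noise(labels, adjacency, num_classes=3, beta=0.3, seed=1)
```

Note the contrast with SLN/CCN: under those models every node shares the same flip probability regardless of topology, whereas here an isolated node keeps its label with certainty and hubs are corrupted most often.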
Related papers
- Training Robust Graph Neural Networks by Modeling Noise Dependencies [28.1151026795484]
In real-world applications, node features in graphs often contain noise from various sources, leading to significant performance degradation.
We introduce a more realistic noise scenario, dependency-aware noise on graphs (DANG), where noise in node features creates a chain of noise dependencies that propagates to the graph structure and node labels.
We propose a novel robust GNN, DA-GNN, which captures the causal relationships among variables in the data generating process (DGP) of DANG using variational inference.
arXiv Detail & Related papers (2025-02-27T01:30:13Z) - Improving Graph Neural Networks via Adversarial Robustness Evaluation [2.1937382384136637]
Graph Neural Networks (GNNs) are one of the most powerful types of neural network architectures.
However, GNNs are vulnerable to noise in the graph structure.
In this paper, we propose using adversarial robustness evaluation to select a small subset of robust nodes that are less affected by noise.
arXiv Detail & Related papers (2024-12-14T14:47:20Z) - Joint Graph Rewiring and Feature Denoising via Spectral Resonance [10.850726111343063]
We propose an algorithm to jointly denoise and rewire the noisy graph.
We show that it consistently outperforms existing methods on a range of synthetic and real-world node classification tasks.
arXiv Detail & Related papers (2024-08-13T20:16:11Z) - Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z) - DEGNN: Dual Experts Graph Neural Network Handling Both Edge and Node Feature Noise [5.048629544493508]
Graph Neural Networks (GNNs) have achieved notable success in various applications over graph data.
Recent research has revealed that real-world graphs often contain noise, and GNNs are susceptible to noise in the graph.
We present DEGNN, a novel GNN model designed to adeptly mitigate noise in both edges and node features.
arXiv Detail & Related papers (2024-04-14T10:04:44Z) - Exploiting Neighbor Effect: Conv-Agnostic GNNs Framework for Graphs with Heterophily [58.76759997223951]
We propose a new metric based on von Neumann entropy to re-examine the heterophily problem of GNNs.
We also propose a Conv-Agnostic GNN framework (CAGNNs) to enhance the performance of most GNNs on heterophily datasets.
arXiv Detail & Related papers (2022-03-19T14:26:43Z) - Neighborhood Random Walk Graph Sampling for Regularized Bayesian Graph Convolutional Neural Networks [0.6236890292833384]
In this paper, we propose a novel algorithm called Bayesian Graph Convolutional Network using Neighborhood Random Walk Sampling (BGCN-NRWS)
BGCN-NRWS uses a Markov Chain Monte Carlo (MCMC) based graph sampling algorithm utilizing graph structure, reduces overfitting by using a variational inference layer, and yields consistently competitive classification results compared to the state-of-the-art in semi-supervised node classification.
arXiv Detail & Related papers (2021-12-14T20:58:27Z) - Graph Neural Networks with Feature and Structure Aware Random Walk [7.143879014059894]
We show that in typical heterophilous graphs, the edges may be directed, and whether to treat the edges as-is or simply make them undirected greatly affects the performance of GNN models.
We develop a model that adaptively learns the directionality of the graph, and exploits the underlying long-distance correlations between nodes.
arXiv Detail & Related papers (2021-11-19T08:54:21Z) - Cold Brew: Distilling Graph Node Representations with Incomplete or Missing Neighborhoods [69.13371028670153]
We introduce feature-contribution ratio (FCR) to study the viability of using inductive GNNs to solve the Strict Cold Start (SCS) problem.
We experimentally show FCR disentangles the contributions of various components of graph datasets and demonstrate the superior performance of Cold Brew.
arXiv Detail & Related papers (2021-11-08T21:29:25Z) - Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, which shows that our model can effectively improve the performance for semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z) - Alleviating the Inconsistency Problem of Applying Graph Neural Network to Fraud Detection [78.88163190021798]
We introduce a new GNN framework, $\mathsf{GraphConsis}$, to tackle the inconsistency problem.
Empirical analysis on four datasets indicates the inconsistency problem is crucial in a fraud detection task.
We also released a GNN-based fraud detection toolbox with implementations of SOTA models.
arXiv Detail & Related papers (2020-05-01T21:43:58Z) - Bilinear Graph Neural Network with Neighbor Interactions [106.80781016591577]
Graph Neural Network (GNN) is a powerful model to learn representations and make predictions on graph data.
We propose a new graph convolution operator, which augments the weighted sum with pairwise interactions of the representations of neighbor nodes.
We term this framework as Bilinear Graph Neural Network (BGNN), which improves GNN representation ability with bilinear interactions between neighbor nodes.
arXiv Detail & Related papers (2020-02-10T06:43:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.