Towards Robust Graph Neural Networks for Noisy Graphs with Sparse Labels
- URL: http://arxiv.org/abs/2201.00232v1
- Date: Sat, 1 Jan 2022 19:00:26 GMT
- Title: Towards Robust Graph Neural Networks for Noisy Graphs with Sparse Labels
- Authors: Enyan Dai, Wei Jin, Hui Liu, Suhang Wang
- Abstract summary: We study a novel problem of developing robust GNNs on noisy graphs with limited labeled nodes.
Our analysis shows that both the noisy edges and limited labeled nodes could harm the message-passing mechanism of GNNs.
We propose a novel framework which adopts the noisy edges as supervision to learn a denoised and dense graph.
- Score: 24.25945793671978
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have shown their great ability in modeling graph
structured data. However, real-world graphs often contain structural noise
and have limited labeled nodes. The performance of GNNs drops
significantly when trained on such graphs, which hinders the adoption of GNNs
on many applications. Thus, it is important to develop noise-resistant GNNs
that work with limited labeled nodes. However, existing work on this problem is scarce.
Therefore, we study a novel problem of developing robust GNNs on noisy graphs
with limited labeled nodes. Our analysis shows that both the noisy edges and
limited labeled nodes could harm the message-passing mechanism of GNNs. To
mitigate these issues, we propose a novel framework that adopts the noisy
edges as supervision to learn a denoised and dense graph. The learned graph
down-weights or eliminates noisy edges and facilitates message passing in
GNNs, alleviating the issue of limited labeled nodes. The generated edges are
further used to
regularize the predictions of unlabeled nodes with label smoothness to better
train GNNs. Experimental results on real-world datasets demonstrate the
robustness of the proposed framework on noisy graphs with limited labeled
nodes.
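The abstract gives no implementation, but its two ingredients can be sketched: a link predictor supervised by the observed (noisy) edges, and a label-smoothness regularizer over the learned edge weights. A minimal PyTorch sketch, where `edge_scorer`, `pred`, and the tensor layouts are illustrative assumptions rather than the authors' code:

```python
import torch
import torch.nn.functional as F

def edge_reconstruction_loss(edge_scorer, x, pos_edges, neg_edges):
    """Treat observed (noisy) edges as positives and sampled non-edges as
    negatives to supervise a link predictor (edge_scorer is hypothetical:
    it maps two batches of node features to one logit per pair)."""
    pos_logits = edge_scorer(x[pos_edges[0]], x[pos_edges[1]])
    neg_logits = edge_scorer(x[neg_edges[0]], x[neg_edges[1]])
    return (F.binary_cross_entropy_with_logits(pos_logits, torch.ones_like(pos_logits))
            + F.binary_cross_entropy_with_logits(neg_logits, torch.zeros_like(neg_logits)))

def label_smoothness_loss(pred, edges, edge_weight):
    """Penalize prediction disagreement across edges, weighted by the
    learned (denoised) edge weights."""
    diff = pred[edges[0]] - pred[edges[1]]            # (E, C) prediction gaps
    return (edge_weight * diff.pow(2).sum(dim=1)).mean()
```

In the paper's framework the denoised graph is also densified with high-scoring candidate edges before message passing; the sketch omits that step.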
Related papers
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs).
S$^2$GNNs combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
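As a loose illustration of a layer that mixes a spatially parametrized filter with a spectrally parametrized one (the truncated eigendecomposition and all names below are assumptions, not the paper's parametrization):

```python
import numpy as np

def s2_layer(a, x, w_spatial, spec_gains, k=8):
    """Toy spatio-spectral layer: spatial term A_hat @ X @ W plus a spectral
    term that rescales the k lowest graph-frequency components of X.
    spec_gains has shape (k,), one learnable gain per kept frequency."""
    d = a.sum(1).clip(min=1.0)                    # guard isolated nodes
    a_hat = a / np.sqrt(np.outer(d, d))           # symmetric normalization
    lap = np.eye(len(a)) - a_hat                  # normalized Laplacian
    lam, u = np.linalg.eigh(lap)                  # lam could parameterize the gains
    uk = u[:, :k]                                 # k lowest-frequency eigenvectors
    spectral = uk @ (spec_gains[:, None] * (uk.T @ x))   # filter in the spectrum
    return a_hat @ x @ w_spatial + spectral       # spatial + spectral branches
```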
arXiv Detail & Related papers (2024-05-29T14:28:08Z)
- DEGNN: Dual Experts Graph Neural Network Handling Both Edge and Node Feature Noise [5.048629544493508]
Graph Neural Networks (GNNs) have achieved notable success in various applications over graph data.
Recent research has revealed that real-world graphs often contain noise, and GNNs are susceptible to noise in the graph.
We present DEGNN, a novel GNN model designed to adeptly mitigate noise in both edges and node features.
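A rough sketch of the dual-expert idea, one branch reweighting edges and one denoising features; the architecture below is a guess for illustration, not DEGNN's actual design:

```python
import torch
import torch.nn as nn

class DualExperts(nn.Module):
    """Toy dual-expert denoiser: one branch reweights edges, the other
    cleans node features, before a standard GNN consumes both."""
    def __init__(self, dim):
        super().__init__()
        self.edge_expert = nn.Sequential(nn.Linear(2 * dim, 1), nn.Sigmoid())
        self.feat_expert = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                         nn.Linear(dim, dim))

    def forward(self, x, edge_index):
        src, dst = edge_index                     # (E,), (E,) endpoint indices
        w = self.edge_expert(torch.cat([x[src], x[dst]], dim=1)).squeeze(-1)
        x_clean = x + self.feat_expert(x)         # residual feature denoising
        return x_clean, w                         # feed into any GNN layer
```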
arXiv Detail & Related papers (2024-04-14T10:04:44Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
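For a purely linear aggregation step, decomposition is exact: the output splits into the contribution of a chosen node subset plus the remainder. A minimal sketch of that special case (DEGREE's full algorithm also propagates the decomposition through nonlinearities):

```python
import numpy as np

def decompose_linear_layer(a_hat, x, w, target_mask):
    """Split H = A_hat @ X @ W into the part generated by a chosen node
    subset (target_mask, boolean over nodes) and the remainder; by linearity
    the two parts sum exactly to the full output."""
    x_target = np.where(target_mask[:, None], x, 0.0)   # features from the subset
    x_rest = x - x_target                                # everything else
    h_target = a_hat @ x_target @ w                      # subset's contribution
    h_rest = a_hat @ x_rest @ w                          # remainder
    return h_target, h_rest                              # h_target + h_rest == H
```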
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
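A graphon is a symmetric function W: [0,1]^2 -> [0,1] from which graphs of any size can be sampled, so a GNN can be trained on a growing sequence drawn from the same limit object. A small sketch with an arbitrary example graphon (the training step itself is elided):

```python
import numpy as np

rng = np.random.default_rng(0)

def example_graphon(s, t):
    """An arbitrary smooth graphon used only for illustration."""
    return 0.8 * np.exp(-3.0 * np.abs(s - t))

def sample_from_graphon(graphon, n):
    """Sample an n-node graph: draw latents u_i ~ U[0,1], then connect
    i and j with probability graphon(u_i, u_j)."""
    u = rng.uniform(size=n)
    probs = graphon(u[:, None], u[None, :])
    a = (rng.uniform(size=(n, n)) < probs).astype(float)
    a = np.triu(a, 1)
    return a + a.T                                # symmetric, no self-loops

# Train on a growing sequence; the GNN's filter weights are size-independent.
for n in (50, 100, 200, 400):
    a = sample_from_graphon(example_graphon, n)
    # ... one GNN training step on graph `a` here ...
```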
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- Geodesic Graph Neural Network for Efficient Graph Representation Learning [34.047527874184134]
We propose an efficient GNN framework called Geodesic GNN (GDGNN)
It injects conditional relationships between nodes into the model without labeling node pairs.
Conditioned on the geodesic representations, GDGNN is able to generate node, link, and graph representations that carry much richer structural information than plain GNNs.
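One concrete reading of "geodesic representations" is pooling the embeddings of nodes on a shortest path between a node pair; the BFS-plus-mean-pooling sketch below is an illustrative assumption, not the paper's exact pooling:

```python
import numpy as np
from collections import deque

def shortest_path(adj, s, t):
    """BFS shortest path from s to t on an adjacency-list graph (dict of
    node -> iterable of neighbors). Returns [] if t is unreachable."""
    prev = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    if t not in prev:
        return []
    path, u = [], t
    while u is not None:                          # walk predecessors back to s
        path.append(u)
        u = prev[u]
    return path[::-1]

def link_representation(h, adj, s, t):
    """Concatenate endpoint embeddings with a mean-pooled geodesic embedding."""
    path = shortest_path(adj, s, t)
    geo = h[path].mean(axis=0) if path else np.zeros(h.shape[1])
    return np.concatenate([h[s], h[t], geo])
```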
arXiv Detail & Related papers (2022-10-06T02:02:35Z)
- Transferability Properties of Graph Neural Networks [125.71771240180654]
Graph neural networks (GNNs) are provably successful at learning representations from data supported on moderate-scale graphs.
We study the problem of training GNNs on graphs of moderate size and transferring them to large-scale graphs.
Our results show that (i) the transference error decreases with the graph size, and (ii) that graph filters have a transferability-discriminability tradeoff that in GNNs is alleviated by the scattering behavior of the nonlinearity.
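The practical upshot is that a graph filter's coefficients are independent of graph size, so a filter fitted on a moderate graph can be reused unchanged on a larger one. A minimal sketch:

```python
import numpy as np

def graph_filter(a, x, coeffs):
    """Polynomial graph filter sum_k h_k S^k x, with S a degree-normalized
    shift operator. `coeffs` has no dependence on the number of nodes, which
    is what makes the filter transferable across graph sizes."""
    s = a / a.sum(1, keepdims=True).clip(min=1.0)   # row-normalized shift operator
    out, z = np.zeros_like(x), x.copy()
    for h_k in coeffs:
        out += h_k * z                               # accumulate h_k * S^k x
        z = s @ z
    return out

coeffs = np.array([0.5, 0.3, 0.2])                   # fitted on a moderate graph...
# ...then applied unchanged to a much larger graph:
# y_large = graph_filter(a_large, x_large, coeffs)
```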
arXiv Detail & Related papers (2021-12-09T00:08:09Z)
- Dual GNNs: Graph Neural Network Learning with Limited Supervision [33.770877823910176]
We propose a novel Dual GNN learning framework to address this challenging task.
By integrating the two modules in a dual GNN learning framework, we perform joint learning in an end-to-end fashion.
arXiv Detail & Related papers (2021-06-29T23:52:25Z)
- NRGNN: Learning a Label Noise-Resistant Graph Neural Network on Sparsely and Noisily Labeled Graphs [20.470934944907608]
Graph Neural Networks (GNNs) have achieved promising results for semi-supervised learning tasks on graphs such as node classification.
Real-world graphs are often sparsely and noisily labeled, which can significantly degrade the performance of GNNs.
We propose to develop a label noise-resistant GNN for semi-supervised node classification.
arXiv Detail & Related papers (2021-06-08T22:12:44Z)
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can improve the performance of GNNs significantly and the performance gain becomes larger for more noisy datasets.
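The core mechanism, a parameterized network that scores each edge plus a penalty on how many edges survive, can be sketched as below; PTDNet's actual relaxation and regularizers are more involved, and all names here are illustrative:

```python
import torch
import torch.nn as nn

class EdgeDenoiser(nn.Module):
    """Toy topological denoiser: an MLP scores each edge, and an L1-style
    penalty on the soft mask discourages keeping task-irrelevant edges."""
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, edge_index):
        src, dst = edge_index
        logits = self.mlp(torch.cat([x[src], x[dst]], dim=1)).squeeze(-1)
        mask = torch.sigmoid(logits)              # soft keep-probability per edge
        sparsity_penalty = mask.sum()             # penalizes the number of kept edges
        return mask, sparsity_penalty             # add penalty to the task loss
```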
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
- Understanding Graph Neural Networks from Graph Signal Denoising Perspectives [27.148827305359436]
Graph neural networks (GNNs) have attracted much attention because of their excellent performance on tasks such as node classification.
This paper aims to provide a theoretical framework to understand GNNs, specifically, spectral graph convolutional networks and graph attention networks.
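A standard result in this line of work: one gradient step on the denoising objective min_F ||F - X||_F^2 + c * tr(F^T L F), started at F = X with step size eta satisfying 2*eta*c = 1, reproduces GCN-style aggregation A_hat @ X. A small numerical check:

```python
import numpy as np

n, f = 5, 3
rng = np.random.default_rng(0)
a = rng.integers(0, 2, (n, n)); a = np.triu(a, 1); a = a + a.T   # random graph
d = a.sum(1).clip(min=1)
a_hat = a / np.sqrt(np.outer(d, d))               # normalized adjacency
lap = np.eye(n) - a_hat                           # normalized Laplacian
x = rng.standard_normal((n, f))                   # noisy graph signal

# One gradient step on ||F - X||^2 + c * tr(F^T L F), starting at F = X,
# with step size eta: F <- X - eta * 2c * L @ X.
c, eta = 1.0, 0.5                                 # chosen so 2*eta*c = 1
f_new = x - eta * 2 * c * (lap @ x)

# With 2*eta*c = 1 this equals GCN-style aggregation A_hat @ X.
assert np.allclose(f_new, a_hat @ x)
```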
arXiv Detail & Related papers (2020-06-08T07:10:39Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
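Model-level explanation can be framed as searching for a graph pattern that maximizes the model's confidence in a target class; XGNN learns a graph generator with reinforcement learning, while the greedy stand-in below only conveys the search idea (`score_fn` and all names are assumptions):

```python
import itertools
import numpy as np

def greedy_explanation_graph(score_fn, n_nodes=6, n_steps=8):
    """Grow a graph edge by edge, always adding the edge that most increases
    score_fn(adjacency), i.e. the model's probability of the target class.
    A greedy stand-in for XGNN's learned graph generator."""
    a = np.zeros((n_nodes, n_nodes))
    base = score_fn(a)
    for _ in range(n_steps):
        best_gain, best_edge = 0.0, None
        for i, j in itertools.combinations(range(n_nodes), 2):
            if a[i, j]:
                continue
            a[i, j] = a[j, i] = 1.0               # tentatively add the edge
            gain = score_fn(a) - base
            a[i, j] = a[j, i] = 0.0               # undo the tentative edge
            if gain > best_gain:
                best_gain, best_edge = gain, (i, j)
        if best_edge is None:                     # no edge helps; stop early
            break
        i, j = best_edge
        a[i, j] = a[j, i] = 1.0                   # commit the best edge
        base = score_fn(a)
    return a                                      # graph pattern explaining the class
```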
arXiv Detail & Related papers (2020-06-03T23:52:43Z)