PyGFI: Analyzing and Enhancing Robustness of Graph Neural Networks
Against Hardware Errors
- URL: http://arxiv.org/abs/2212.03475v2
- Date: Mon, 24 Apr 2023 15:38:27 GMT
- Title: PyGFI: Analyzing and Enhancing Robustness of Graph Neural Networks
Against Hardware Errors
- Authors: Ruixuan Wang, Fred Lin, Daniel Moore, Sriram Sankar, Xun Jiao
- Abstract summary: Graph neural networks (GNNs) have emerged as a promising paradigm for learning graph-structured data.
This paper conducts a large-scale and empirical study of GNN resilience, aiming to understand the relationship between hardware faults and GNN accuracy.
- Score: 3.2780036095732035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have recently emerged as a promising
paradigm for learning graph-structured data and have demonstrated wide success
across various domains such as recommendation systems, social networks, and
electronic design automation (EDA). Like other deep learning (DL) methods, GNNs
are being deployed in sophisticated modern hardware systems, as well as
dedicated accelerators. However, despite the popularity of GNNs and recent
efforts to bring GNNs to hardware, the fault tolerance and resilience of
GNNs have generally been overlooked. Inspired by the inherent algorithmic
resilience of DL methods, this paper conducts, for the first time, a
large-scale and empirical study of GNN resilience, aiming to understand the
relationship between hardware faults and GNN accuracy. By developing a
customized fault injection tool on top of PyTorch, we perform extensive fault
injection experiments on various GNN models and application datasets. We
observe that the error resilience of GNN models varies by orders of magnitude
with respect to different models and application datasets. Further, we explore
a low-cost error mitigation mechanism for GNNs to enhance their resilience. This
GNN resilience study aims to open up new directions and opportunities for
future GNN accelerator design and architectural optimization.
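A note on how such a study can be set up: the abstract mentions a customized fault injection tool built on top of PyTorch but gives no implementation details. The following is a minimal sketch, assuming weight-level bit-flip injection via standard PyTorch hooks; the helper names (flip_random_bit, attach_fault_hook) and the fault probability are illustrative assumptions, not the PyGFI API:

    import random
    import struct

    import torch
    import torch.nn as nn

    def flip_random_bit(value: float) -> float:
        """Flip one random bit in the IEEE-754 float32 encoding of value."""
        as_int = struct.unpack("<I", struct.pack("<f", value))[0]
        flipped = as_int ^ (1 << random.randrange(32))
        return struct.unpack("<f", struct.pack("<I", flipped))[0]

    def attach_fault_hook(module: nn.Module, fault_prob: float = 1e-3):
        """Hypothetical helper: with probability fault_prob, corrupt one
        randomly chosen weight element before each forward pass."""
        def pre_hook(mod, inputs):
            weight = getattr(mod, "weight", None)
            if weight is not None and random.random() < fault_prob:
                idx = tuple(random.randrange(s) for s in weight.shape)
                weight.data[idx] = flip_random_bit(float(weight.data[idx]))
        return module.register_forward_pre_hook(pre_hook)

The abstract also mentions a low-cost error mitigation mechanism without naming it; one common low-cost option in the DNN fault-tolerance literature is range restriction, i.e. clamping layer outputs so that a single bit flip cannot produce an extreme activation value. A sketch of that idea (again an assumption, not necessarily the paper's mechanism):

    def attach_clamp_hook(module: nn.Module, bound: float = 4.0):
        """Illustrative range-restriction mitigation: clamp layer outputs
        to [-bound, bound] to bound the effect of corrupted weights."""
        def post_hook(mod, inputs, output):
            return torch.clamp(output, min=-bound, max=bound)
        return module.register_forward_hook(post_hook)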
Related papers
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z)
- Unleash Graph Neural Networks from Heavy Tuning [33.948899558876604]
Graph Neural Networks (GNNs) are deep-learning architectures designed for graph-structured data.
We propose a graph conditional latent diffusion framework (GNN-Diff) to generate high-performing GNNs directly by learning from checkpoints saved during a light-tuning coarse search.
arXiv Detail & Related papers (2024-05-21T06:23:47Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models deployed in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- Attentional Graph Neural Networks for Robust Massive Network Localization [20.416879207269446]
Graph neural networks (GNNs) have emerged as a prominent tool for classification tasks in machine learning.
This paper integrates GNNs with an attention mechanism to tackle a challenging nonlinear regression problem: network localization.
We first introduce a novel network localization method based on graph convolutional network (GCN), which exhibits exceptional precision even under severe non-line-of-sight (NLOS) conditions.
arXiv Detail & Related papers (2023-11-28T15:05:13Z)
- GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed widespread application to graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data to perform inferences on a vast amount of test data.
In this paper, we push one step forward on the ensemble learning of GNNs with improved accuracy, robustness, and resistance to adversarial attacks.
arXiv Detail & Related papers (2023-03-20T18:24:01Z)
- Distributed Graph Neural Network Training: A Survey [51.77035975191926]
Graph neural networks (GNNs) are deep learning models that are trained on graphs and have been successfully applied in various domains.
Despite the effectiveness of GNNs, it is still challenging for GNNs to efficiently scale to large graphs.
As a remedy, distributed computing has become a promising solution for training large-scale GNNs.
arXiv Detail & Related papers (2022-11-01T01:57:00Z)
- EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks [51.42338058718487]
Graph Neural Networks (GNNs) have received extensive research attention for their promising performance in graph machine learning.
Existing approaches, such as GCN and GPRGNN, are not robust in the face of homophily changes on test graphs.
We propose EvenNet, a spectral GNN corresponding to an even-polynomial graph filter (a rough sketch of the even-polynomial idea appears after this list).
arXiv Detail & Related papers (2022-05-27T10:48:14Z)
- Graph Neural Networks in Particle Physics: Implementations, Innovations, and Challenges [7.071890461446324]
We present a range of capabilities, from those currently well-adopted in HEP communities to those that are still immature.
With the widespread adoption of GNNs in industry, the HEP community is well-placed to benefit from rapid improvements in GNN latency and memory usage.
We hope to capture the landscape of graph techniques in machine learning as well as point out the most significant gaps that are inhibiting potentially large leaps in research.
arXiv Detail & Related papers (2022-03-23T04:36:04Z)
- CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks [59.692017490560275]
Adversarial training has been widely demonstrated to improve a model's robustness against adversarial attacks.
It remains unclear how adversarial training could improve the generalization ability of GNNs in graph analytics problems.
We construct the co-adversarial perturbation (CAP) optimization problem in terms of weights and features, and design the alternating adversarial perturbation algorithm to flatten the weight and feature loss landscapes alternately.
arXiv Detail & Related papers (2021-10-28T02:28:13Z)
- IGNNITION: Bridging the Gap Between Graph Neural Networks and Networking Systems [4.1591055164123665]
We present IGNNITION, a novel open-source framework that enables fast prototyping of Graph Neural Networks (GNNs) for networking systems.
IGNNITION is based on an intuitive high-level abstraction that hides the complexity behind GNNs.
Our results show that the GNN models produced by IGNNITION are equivalent in terms of accuracy and performance to their native implementations.
arXiv Detail & Related papers (2021-09-14T14:28:21Z)
- Attentive Graph Neural Networks for Few-Shot Learning [74.01069516079379]
Graph Neural Networks (GNNs) have demonstrated superior performance in many challenging applications, including few-shot learning tasks.
Despite their powerful capacity to learn and generalize from few samples, GNNs usually suffer from severe over-fitting and over-smoothing as the model becomes deep.
We propose a novel Attentive GNN to tackle these challenges, by incorporating a triple-attention mechanism.
arXiv Detail & Related papers (2020-07-14T07:43:09Z)
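For the EvenNet entry above, which builds a spectral GNN from an even-polynomial graph filter (i.e. it propagates information only over even-hop neighborhoods), the following is a rough sketch of the even-polynomial idea in PyTorch. The dense normalized adjacency, the number of terms, and the fixed coefficients are placeholder assumptions, not the paper's parameterization:

    import torch

    def even_polynomial_filter(adj_norm: torch.Tensor,
                               features: torch.Tensor,
                               coeffs=(1.0, 0.5, 0.25)) -> torch.Tensor:
        """Illustrative even-polynomial propagation: out = sum_k c_k * A^(2k) @ X.
        adj_norm is assumed to be the symmetrically normalized (dense) adjacency;
        odd powers of A are skipped entirely, so odd-hop neighbors are ignored."""
        out = coeffs[0] * features
        x = features
        for c in coeffs[1:]:
            x = adj_norm @ (adj_norm @ x)  # advance two hops at a time
            out = out + c * x
        return out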
This list is automatically generated from the titles and abstracts of the papers on this site.