A Systematic Literature Review on Hardware Reliability Assessment
Methods for Deep Neural Networks
- URL: http://arxiv.org/abs/2305.05750v1
- Date: Tue, 9 May 2023 20:08:30 GMT
- Title: A Systematic Literature Review on Hardware Reliability Assessment
Methods for Deep Neural Networks
- Authors: Mohammad Hasan Ahmadilivani, Mahdi Taheri, Jaan Raik, Masoud
Daneshtalab, Maksim Jenihhin
- Abstract summary: The reliability of Deep Neural Networks (DNNs) is an essential subject of research.
In recent years, several studies have been published that assess the reliability of DNNs.
In this work, we conduct a Systematic Literature Review (SLR) on the reliability assessment methods of DNNs.
- Score: 1.189955933770711
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Artificial Intelligence (AI) and, in particular, Machine Learning (ML)
are now utilized in a wide range of applications because of their ability to
learn how to solve complex problems. Over the last decade, rapid advances in
ML have produced Deep Neural Networks (DNNs) consisting of a large number of
neurons and layers. DNN Hardware Accelerators (DHAs) are leveraged to deploy
DNNs in the target applications. Safety-critical applications, where hardware
faults/errors would result in catastrophic consequences, also benefit from
DHAs. Therefore, the reliability of DNNs is an essential subject of research.
In recent years, several studies have been published that assess the
reliability of DNNs, and various reliability assessment methods have been
proposed for a variety of platforms and applications. Hence, there is a need
to summarize the state of the art and identify the gaps in the study of DNN
reliability. In this work, we conduct a Systematic Literature Review (SLR) on
the reliability assessment methods of DNNs to collect as many relevant
research works as possible, present a categorization of them, and address the
open challenges. Through this SLR, three kinds of reliability assessment
methods for DNNs are identified: Fault Injection (FI), Analytical, and Hybrid
methods. Since the majority of works assess DNN reliability by FI, we
comprehensively characterize the different approaches and platforms of the FI
method. Moreover, the Analytical and Hybrid methods are presented. The
identified reliability assessment methods are thus elaborated with respect to
the DNN platforms on which they are applied and the reliability evaluation
metrics they employ. Finally, we highlight the advantages and disadvantages of
the identified methods and address the open challenges in the research area.
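Since Fault Injection dominates the surveyed assessment methods, a minimal sketch of a
statistical weight-level FI campaign may help illustrate the idea. The snippet below is
an assumption-laden illustration in PyTorch, not the methodology of any particular
surveyed paper: it flips a random bit of a random FP32 weight (a single-bit-upset fault
model), reruns inference, and reports how often the top-1 prediction deviates from the
fault-free ("golden") run, an SDC-style metric. Names such as `inject_weight_fault` and
`fi_campaign` are hypothetical.

```python
# Minimal, illustrative sketch of statistical fault injection (FI) into DNN weights.
# The fault model (single bit flip in an FP32 weight) and the metric (top-1 mismatch
# against the golden run) are assumptions for illustration only.
import random
import struct

import torch
import torch.nn as nn


def flip_bit_in_float(value: float, bit: int) -> float:
    """Flip one bit of a 32-bit IEEE-754 float (single-bit-upset fault model)."""
    as_int = struct.unpack("<I", struct.pack("<f", value))[0]
    return struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))[0]


def inject_weight_fault(model: nn.Module) -> None:
    """Flip a random bit in one randomly chosen weight of the model (in place)."""
    params = [p for p in model.parameters() if p.requires_grad]
    flat = random.choice(params).data.view(-1)
    idx = random.randrange(flat.numel())
    flat[idx] = flip_bit_in_float(float(flat[idx]), random.randrange(32))


@torch.no_grad()
def fi_campaign(model: nn.Module, inputs: torch.Tensor, n_injections: int = 100) -> float:
    """Fraction of injections whose top-1 predictions deviate from the golden run."""
    golden = model(inputs).argmax(dim=1)
    state = {k: v.clone() for k, v in model.state_dict().items()}
    mismatches = 0
    for _ in range(n_injections):
        inject_weight_fault(model)
        mismatches += int((model(inputs).argmax(dim=1) != golden).any())
        model.load_state_dict(state)  # restore golden weights before the next injection
    return mismatches / n_injections


if __name__ == "__main__":
    torch.manual_seed(0)
    random.seed(0)
    net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
    x = torch.randn(8, 16)
    print("SDC-style mismatch rate:", fi_campaign(net, x, n_injections=50))
```

Real FI campaigns in the surveyed literature differ mainly in where faults are injected
(weights, activations, registers of the accelerator), at which abstraction level
(software, RTL, hardware), and which reliability metrics are reported.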
Related papers
- Unveiling and Mitigating Generalized Biases of DNNs through the Intrinsic Dimensions of Perceptual Manifolds [46.47992213722412]
Building fair deep neural networks (DNNs) is a crucial step towards achieving trustworthy artificial intelligence.
We propose Intrinsic Dimension Regularization (IDR), which enhances the fairness and performance of models.
In various image recognition benchmark tests, IDR significantly mitigates model bias while improving its performance.
arXiv Detail & Related papers (2024-04-22T04:16:40Z)
- DeepKnowledge: Generalisation-Driven Deep Learning Testing [2.526146573337397]
DeepKnowledge is a systematic testing methodology for DNN-based systems.
It aims to enhance robustness and reduce the residual risk of 'black box' models.
We report improvements of up to 10 percentage points over state-of-the-art coverage criteria for detecting adversarial attacks.
arXiv Detail & Related papers (2024-03-25T13:46:09Z)
- A Survey of Graph Neural Networks in Real world: Imbalance, Noise, Privacy and OOD Challenges [75.37448213291668]
This paper systematically reviews existing Graph Neural Networks (GNNs).
We first highlight the four key challenges faced by existing GNNs, paving the way for our exploration of real-world GNN models.
arXiv Detail & Related papers (2024-03-07T13:10:37Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- DeepVigor: Vulnerability Value Ranges and Factors for DNNs' Reliability Assessment [1.189955933770711]
Deep Neural Networks (DNNs) and their accelerators are being deployed more frequently in safety-critical applications.
We propose a novel accurate, fine-grain, metric-oriented, and accelerator-agnostic method called DeepVigor.
arXiv Detail & Related papers (2023-03-13T08:55:10Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks. (A toy brute-force sketch of this counting problem appears after this list.)
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- gRoMA: a Tool for Measuring the Global Robustness of Deep Neural Networks [3.2228025627337864]
Deep neural networks (DNNs) are at the forefront of cutting-edge technology, and have been achieving remarkable performance in a variety of complex tasks.
Their integration into safety-critical systems, such as in the aerospace or automotive domains, poses a significant challenge due to the threat of adversarial inputs.
Here, we present gRoMA, an innovative and scalable tool that implements a probabilistic approach to measure the global categorial robustness of a DNN.
arXiv Detail & Related papers (2023-01-05T20:45:23Z) - Trustworthy Graph Neural Networks: Aspects, Methods and Trends [115.84291569988748]
Graph neural networks (GNNs) have emerged as competent graph learning methods for diverse real-world scenarios.
Performance-oriented GNNs have exhibited potential adverse effects like vulnerability to adversarial attacks.
To avoid these unintentional harms, it is necessary to build competent GNNs characterised by trustworthiness.
arXiv Detail & Related papers (2022-05-16T02:21:09Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
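For the #DNN-Verification entry above, the following is a minimal sketch of what
"counting unsafe inputs" means in the simplest possible setting: the input domain is
discretized into a finite grid and every grid point is checked against a safety
property. This brute-force view is purely illustrative and does not reflect the exact
continuous-domain counting approach of the cited paper; the toy network, the property
("class 0 must win on the unit square"), and the grid resolution are all assumptions.

```python
# Illustrative brute-force view of the #DNN-Verification counting problem:
# discretize the input domain into a finite grid and count the grid points that
# violate an assumed safety property.
import torch
import torch.nn as nn


@torch.no_grad()
def count_unsafe_inputs(model: nn.Module, grid_per_dim: int = 50) -> int:
    """Count grid points x in [0, 1]^2 whose predicted class is not the safe class 0."""
    xs = torch.linspace(0.0, 1.0, grid_per_dim)
    g0, g1 = torch.meshgrid(xs, xs, indexing="ij")
    points = torch.stack([g0.reshape(-1), g1.reshape(-1)], dim=1)  # all grid inputs
    preds = model(points).argmax(dim=1)
    return int((preds != 0).sum())  # number of safety-property violations on the grid


if __name__ == "__main__":
    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 3))
    violations = count_unsafe_inputs(net, grid_per_dim=50)
    print(f"unsafe grid points: {violations} / {50 * 50}")
```

On a continuous input domain the set of violating inputs cannot be enumerated this way,
which is why exact counting, as targeted by the cited paper, requires formal verification
techniques rather than grid sampling.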