FTT-NAS: Discovering Fault-Tolerant Convolutional Neural Architecture
- URL: http://arxiv.org/abs/2003.10375v2
- Date: Mon, 12 Apr 2021 16:15:18 GMT
- Title: FTT-NAS: Discovering Fault-Tolerant Convolutional Neural Architecture
- Authors: Xuefei Ning, Guangjun Ge, Wenshuo Li, Zhenhua Zhu, Yin Zheng, Xiaoming
Chen, Zhen Gao, Yu Wang and Huazhong Yang
- Abstract summary: We propose Fault-Tolerant Neural Architecture Search (FT-NAS) to automatically discover convolutional neural network (CNN) architectures that are resilient to the various faults present in today's devices.
Experiments on CIFAR-10 show that the discovered architectures significantly outperform other manually designed baseline architectures.
- Score: 19.91033746155525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid evolution of embedded deep-learning computing systems,
applications powered by deep learning are moving from the cloud to the edge.
When deploying neural networks (NNs) onto the devices under complex
environments, there are various types of possible faults: soft errors caused by
cosmic radiation and radioactive impurities, voltage instability, aging,
temperature variations, and malicious attackers. Thus, the safety risk of
deploying NNs is drawing increasing attention. In this paper, after analyzing
the possible faults in various types of NN accelerators, we formalize and
implement various fault models from the algorithmic perspective. We propose
Fault-Tolerant Neural Architecture Search (FT-NAS) to automatically discover
convolutional neural network (CNN) architectures that are resilient to the
various faults present in today's devices. Then we incorporate fault-tolerant training (FTT)
in the search process to achieve better results, which is referred to as
FTT-NAS. Experiments on CIFAR-10 show that the discovered architectures
outperform other manually designed baseline architectures significantly, with
comparable or fewer floating-point operations (FLOPs) and parameters.
Specifically, with the same fault settings, F-FTT-Net discovered under the
feature fault model achieves an accuracy of 86.2% (vs. 68.1% achieved by
MobileNet-V2), and W-FTT-Net discovered under the weight fault model achieves
an accuracy of 69.6% (vs. 60.8% achieved by ResNet-20). By inspecting the
discovered architectures, we find that the operation primitives, the weight
quantization range, the capacity of the model, and the connection pattern all
influence the fault resilience of NN models.
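The paper formalizes fault models at the algorithmic level, covering faults in weights and in intermediate feature maps. The snippet below is a minimal PyTorch sketch of what such bit-flip fault injection could look like under an assumed symmetric 8-bit quantization; the helper names and the per-bit flip probability `p_flip` are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch (not the paper's exact formulation) of algorithmic-level
# fault models: random bit flips applied to 8-bit quantized weights and
# feature maps. The symmetric 8-bit quantization and the per-bit flip
# probability p_flip are illustrative assumptions.
import torch

def quantize_int8(x: torch.Tensor):
    """Symmetric 8-bit quantization; returns integer codes and the scale."""
    scale = x.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scale), -128, 127).to(torch.int8)
    return q, scale

def inject_bit_flips(q: torch.Tensor, p_flip: float = 1e-4) -> torch.Tensor:
    """Flip each bit of each int8 value independently with probability p_flip."""
    bits = q.to(torch.int32) & 0xFF                        # raw 8-bit pattern
    for bit in range(8):
        flip = (torch.rand_like(bits, dtype=torch.float32) < p_flip).to(torch.int32)
        bits = bits ^ (flip << bit)                         # XOR flips selected bits
    bits = torch.where(bits > 127, bits - 256, bits)        # back to signed range
    return bits.to(torch.int8)

def weight_fault(w: torch.Tensor, p_flip: float = 1e-4) -> torch.Tensor:
    """Weight fault model: quantize, flip bits, dequantize."""
    q, scale = quantize_int8(w)
    return inject_bit_flips(q, p_flip).to(torch.float32) * scale

def feature_fault(x: torch.Tensor, p_flip: float = 1e-4) -> torch.Tensor:
    """Feature fault model: the same bit-flip corruption applied to activations."""
    q, scale = quantize_int8(x)
    return inject_bit_flips(q, p_flip).to(torch.float32) * scale
```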
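Fault-tolerant training (FTT) then exposes the network to such faults during optimization. The sketch below shows one possible FTT step, reusing the `feature_fault` helper above; the equal weighting of clean and faulty losses and the injection at the input (a simple stand-in for injecting into intermediate feature maps via forward hooks) are assumptions for illustration, not the paper's exact procedure.

```python
# A hedged sketch of one fault-tolerant training (FTT) step: run the model
# cleanly and under fault injection, then combine the two losses. The
# alpha weighting and input-level injection are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def ftt_step(model: nn.Module,
             x: torch.Tensor,
             y: torch.Tensor,
             optimizer: torch.optim.Optimizer,
             p_flip: float = 1e-4,
             alpha: float = 0.5) -> float:
    model.train()
    optimizer.zero_grad()

    # Clean forward pass.
    loss_clean = F.cross_entropy(model(x), y)

    # Faulty forward pass: the input is corrupted as a stand-in for faults
    # injected into intermediate feature maps.
    loss_faulty = F.cross_entropy(model(feature_fault(x, p_flip)), y)

    loss = alpha * loss_clean + (1.0 - alpha) * loss_faulty
    loss.backward()
    optimizer.step()
    return loss.item()
```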
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - NAS-ASDet: An Adaptive Design Method for Surface Defect Detection
Network using Neural Architecture Search [5.640706784987607]
We propose a new method called NAS-ASDet to adaptively design networks for surface defect detection.
First, a refined and industry-appropriate search space that can adaptively adjust the feature distribution is designed.
Then, a progressive search strategy with a deep supervision mechanism is used to explore the search space more quickly and effectively.
arXiv Detail & Related papers (2023-11-18T03:15:45Z) - Improving Reliability of Spiking Neural Networks through Fault Aware
Threshold Voltage Optimization [0.0]
Spiking neural networks (SNNs) have made breakthroughs in computer vision by lending themselves to neuromorphic hardware.
Systolic-array SNN accelerators (systolicSNNs) have been proposed recently, but their reliability is still a major concern.
We present a novel fault mitigation method, i.e., fault-aware threshold voltage optimization in retraining (FalVolt).
arXiv Detail & Related papers (2023-01-12T19:30:21Z) - CorrectNet: Robustness Enhancement of Analog In-Memory Computing for
Neural Networks by Error Suppression and Compensation [4.570841222958966]
We propose a framework to enhance the robustness of neural networks under variations and noise.
We show that the inference accuracy of neural networks, which can drop to as low as 1.69% under variations and noise, can be recovered.
arXiv Detail & Related papers (2022-11-27T19:13:33Z) - enpheeph: A Fault Injection Framework for Spiking and Compressed Deep
Neural Networks [10.757663798809144]
We present enpheeph, a Fault Injection Framework for Spiking and Compressed Deep Neural Networks (DNNs).
By injecting a random and increasing number of faults, we show that DNNs can suffer an accuracy drop of more than 40% at a fault rate as low as 7 x 10^-7 faults per parameter (a sketch of such a fault-rate sweep is given after this list).
arXiv Detail & Related papers (2022-07-31T00:30:59Z) - Fast and Accurate Error Simulation for CNNs against Soft Errors [64.54260986994163]
We present a framework for the reliability analysis of Convolutional Neural Networks (CNNs) via an error simulation engine.
These error models are defined based on the corruption patterns of the output of the CNN operators induced by faults.
We show that our methodology achieves about 99% accuracy of the fault effects w.r.t. SASSIFI, and a speedup ranging from 44x up to 63x w.r.t. SASSIFI, which only implements a limited set of error models.
arXiv Detail & Related papers (2022-06-04T19:45:02Z) - Auto-PINN: Understanding and Optimizing Physics-Informed Neural
Architecture [77.59766598165551]
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation.
Here, we propose Auto-PINN, which applies Neural Architecture Search (NAS) techniques to PINN design.
A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs.
arXiv Detail & Related papers (2022-05-27T03:24:31Z) - D-DARTS: Distributed Differentiable Architecture Search [75.12821786565318]
Differentiable ARchiTecture Search (DARTS) is one of the most popular Neural Architecture Search (NAS) methods.
We propose D-DARTS, a novel solution that addresses this problem by nesting several neural networks at the cell level.
arXiv Detail & Related papers (2021-08-20T09:07:01Z) - ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked
Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
For evaluation, we compare the estimation accuracy and fidelity of the generated mixed models, statistical models, the roofline model, and a refined roofline model.
arXiv Detail & Related papers (2021-05-07T11:39:05Z) - Neural Architecture Search For LF-MMI Trained Time Delay Neural Networks [61.76338096980383]
A range of neural architecture search (NAS) techniques are used to automatically learn two types of hyper-parameters of state-of-the-art factored time delay neural networks (TDNNs).
These include the DARTS method integrating architecture selection with lattice-free MMI (LF-MMI) TDNN training.
Experiments conducted on a 300-hour Switchboard corpus suggest the auto-configured systems consistently outperform the baseline LF-MMI TDNN systems.
arXiv Detail & Related papers (2020-07-17T08:32:11Z) - A Survey on Impact of Transient Faults on BNN Inference Accelerators [0.9667631210393929]
The boom in big data enables us to easily access and analyze very large data sets.
Deep learning models require significant computation power and extremely high memory accesses.
In this study, we demonstrate that soft errors in a customized deep learning algorithm can cause drastic image misclassification.
arXiv Detail & Related papers (2020-04-10T16:15:55Z)