Cross-Layer Optimization for Fault-Tolerant Deep Learning
- URL: http://arxiv.org/abs/2312.13754v1
- Date: Thu, 21 Dec 2023 11:35:45 GMT
- Title: Cross-Layer Optimization for Fault-Tolerant Deep Learning
- Authors: Qing Zhang, Cheng Liu, Bo Liu, Haitong Huang, Ying Wang, Huawei Li,
Xiaowei Li
- Abstract summary: We propose to characterize the vulnerability differences of deep learning across both neurons and the bits of each neuron, and to leverage these differences to enable selective protection of the deep learning processing components.
We employ a Bayesian optimization strategy to co-optimize the correlated cross-layer design parameters at the algorithm, architecture, and circuit layers.
- Score: 17.724727744611535
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A fault-tolerant deep learning accelerator is the basis for highly
reliable deep learning processing and is critical for deploying deep learning
in safety-critical applications such as avionics and robotics. Since deep
learning is known to be computing- and memory-intensive, traditional
fault-tolerant approaches based on redundant computing incur substantial
overhead in power consumption and chip area. To this end, we propose to
characterize the vulnerability differences of deep learning across both
neurons and the bits of each neuron, and to leverage these differences to
enable selective protection of the deep learning processing components from
the perspectives of the architecture layer and the circuit layer,
respectively. At the same time, we observe a correlation between model
quantization and the bit protection overhead of the underlying processing
elements of deep learning accelerators, and propose to reduce the bit
protection overhead by adding an additional quantization constraint without
compromising model accuracy. Finally, we employ a Bayesian optimization
strategy to co-optimize the correlated cross-layer design parameters at the
algorithm, architecture, and circuit layers to minimize hardware resource
consumption while simultaneously fulfilling multiple user constraints on the
reliability, accuracy, and performance of the deep learning processing.
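
Below is a minimal sketch of the cross-layer co-optimization idea described in the abstract, not the authors' implementation: a Bayesian optimizer searches over an algorithm-layer parameter (quantization bit-width), an architecture-layer parameter (fraction of protected neurons/processing elements), and a circuit-layer parameter (number of protected high-order bits) to minimize a hardware-cost objective penalized by accuracy and reliability constraints. The functions `evaluate_accuracy`, `evaluate_reliability`, and `hardware_cost` are hypothetical placeholders, and scikit-optimize is assumed to be available.

```python
# Sketch of constrained cross-layer co-optimization via Bayesian optimization.
# All models below are toy placeholders, not measurements from the paper.
from skopt import gp_minimize
from skopt.space import Integer, Real

ACC_TARGET = 0.70   # minimum acceptable model accuracy (assumed value)
REL_TARGET = 0.99   # minimum acceptable reliability under faults (assumed value)
PENALTY = 1e6       # large penalty for violating a user constraint

def evaluate_accuracy(bit_width):
    """Placeholder: accuracy of the quantization-constrained model."""
    return 0.76 - 0.02 * max(0, 8 - bit_width)

def evaluate_reliability(protected_frac, protected_bits):
    """Placeholder: reliability estimated, e.g., via fault injection."""
    return min(1.0, 0.90 + 0.05 * protected_frac + 0.01 * protected_bits)

def hardware_cost(bit_width, protected_frac, protected_bits):
    """Placeholder: relative area/power overhead of selective protection."""
    return bit_width + 10.0 * protected_frac + 2.0 * protected_bits

def objective(params):
    bit_width, protected_frac, protected_bits = params
    acc = evaluate_accuracy(bit_width)
    rel = evaluate_reliability(protected_frac, protected_bits)
    cost = hardware_cost(bit_width, protected_frac, protected_bits)
    # Penalize configurations that violate the accuracy/reliability constraints.
    if acc < ACC_TARGET:
        cost += PENALTY * (ACC_TARGET - acc)
    if rel < REL_TARGET:
        cost += PENALTY * (REL_TARGET - rel)
    return cost

search_space = [
    Integer(4, 16, name="bit_width"),       # algorithm layer: quantization bit-width
    Real(0.0, 1.0, name="protected_frac"),  # architecture layer: fraction of protected neurons/PEs
    Integer(0, 8, name="protected_bits"),   # circuit layer: protected high-order bits per word
]

result = gp_minimize(objective, search_space, n_calls=40, random_state=0)
print("best parameters:", result.x, "cost:", result.fun)
```

In practice, the placeholder evaluations would be replaced by accuracy measurement of the quantization-constrained model, fault-injection-based reliability estimation, and synthesis-derived area/power figures for the selectively protected accelerator.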