Fault-Tolerant Deep Learning: A Hierarchical Perspective
- URL: http://arxiv.org/abs/2204.01942v1
- Date: Tue, 5 Apr 2022 02:31:18 GMT
- Title: Fault-Tolerant Deep Learning: A Hierarchical Perspective
- Authors: Cheng Liu, Zhen Gao, Siting Liu, Xuefei Ning, Huawei Li, Xiaowei Li
- Abstract summary: We conduct a comprehensive survey of fault-tolerant deep learning design approaches.
We investigate these approaches from model layer, architecture layer, circuit layer, and cross layer respectively.
- Score: 12.315753706063324
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid advancements of deep learning in the past decade, it can be
foreseen that deep learning will be continuously deployed in more and more
safety-critical applications such as autonomous driving and robotics. In this
context, reliability turns out to be critical to the deployment of deep
learning in these applications and gradually becomes a first-class citizen
among the major design metrics like performance and energy efficiency.
Nevertheless, the black-box deep learning models combined with the diverse
underlying hardware faults make resilient deep learning extremely challenging.
In this special session, we conduct a comprehensive survey of fault-tolerant
deep learning design approaches with a hierarchical perspective and investigate
these approaches from model layer, architecture layer, circuit layer, and cross
layer respectively.
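A classic building block behind many circuit- and architecture-layer fault-tolerance approaches is modular redundancy. As an illustrative sketch only (not code from the paper), triple modular redundancy (TMR) executes a computation three times and takes a majority vote, masking a transient fault in any single run; `tmr_forward` and `faulty_layer` below are hypothetical names for this toy demonstration:

```python
import numpy as np

def tmr_forward(layer, x):
    """Run a layer three times and take an element-wise median (a majority
    vote for continuous outputs), masking a transient fault in one run."""
    outs = np.stack([layer(x), layer(x), layer(x)])
    return np.median(outs, axis=0)

# Toy layer: a fixed linear map. A transient fault is simulated by
# corrupting exactly one of the three redundant executions.
W = np.array([[1.0, 2.0], [3.0, 4.0]])
calls = {"n": 0}

def faulty_layer(x):
    calls["n"] += 1
    y = W @ x
    if calls["n"] == 2:        # inject a fault into the second execution only
        y[0] += 1e6
    return y

x = np.array([1.0, 1.0])
print(tmr_forward(faulty_layer, x))  # median masks the corrupted run: [3. 7.]
```

The cost is threefold execution; the hierarchical approaches the survey covers aim to achieve similar resilience more cheaply.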
Related papers
- Computation-efficient Deep Learning for Computer Vision: A Survey [121.84121397440337]
Deep learning models have reached or even exceeded human-level performance in a range of visual perception tasks.
Deep learning models usually demand significant computational resources, leading to impractical power consumption, latency, or carbon emissions in real-world scenarios.
A new research focus is computation-efficient deep learning, which strives to achieve satisfactory performance while minimizing the computational cost during inference.
arXiv Detail & Related papers (2023-08-27T03:55:28Z)
- Efficient Learning of High Level Plans from Play [57.29562823883257]
We present Efficient Learning of High-Level Plans from Play (ELF-P), a framework for robotic learning that bridges motion planning and deep RL.
We demonstrate that ELF-P has significantly better sample efficiency than relevant baselines over multiple realistic manipulation tasks.
arXiv Detail & Related papers (2023-03-16T20:09:47Z)
- Design Automation for Fast, Lightweight, and Effective Deep Learning Models: A Survey [53.258091735278875]
This survey covers studies of design automation techniques for deep learning models targeting edge computing.
It offers an overview and comparison of key metrics that are used commonly to quantify the proficiency of models in terms of effectiveness, lightness, and computational costs.
The survey proceeds to cover three categories of the state-of-the-art of deep model design automation techniques.
arXiv Detail & Related papers (2022-08-22T12:12:43Z)
- Transferability in Deep Learning: A Survey [80.67296873915176]
The ability to acquire and reuse knowledge is known as transferability in deep learning.
We present this survey to connect different isolated areas in deep learning with their relation to transferability.
We implement a benchmark and an open-source library, enabling a fair evaluation of deep learning methods in terms of transferability.
arXiv Detail & Related papers (2022-01-15T15:03:17Z)
- On the combined effect of class imbalance and concept complexity in deep learning [11.178586036657798]
This paper studies the behavior of deep learning systems in settings that have previously been deemed challenging to classical machine learning systems.
Deep architectures seem to help with structural concept complexity but not with overlap challenges in simple artificial domains.
In the real-world image domains, where overfitting is a greater concern than in the artificial domains, the advantage of deeper architectures is less obvious.
arXiv Detail & Related papers (2021-07-29T17:30:00Z)
- Learning Task-aware Robust Deep Learning Systems [6.532642348343193]
A deep learning system consists of two parts: the deep learning task and the deep model.
In this paper, we adopt the binary and interval label encoding strategy to redefine the classification task.
Our method can be viewed as improving the robustness of deep learning systems from both the learning task and deep model.
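The binary label encoding idea can be illustrated in the spirit of error-correcting output codes: assign each class a binary codeword and decode predictions by nearest Hamming distance, so a few flipped output bits still map to the correct class. The codebook below is a toy assumption for illustration, not the paper's actual encoding:

```python
import numpy as np

# Each class gets a binary codeword; decoding picks the nearest codeword by
# Hamming distance, tolerating a small number of corrupted output bits.
codebook = np.array([
    [0, 0, 0, 0, 0],   # class 0
    [1, 1, 1, 0, 0],   # class 1
    [0, 0, 1, 1, 1],   # class 2
])

def decode(bits):
    dists = np.sum(codebook != np.asarray(bits), axis=1)
    return int(np.argmin(dists))

print(decode([1, 1, 1, 0, 0]))  # exact codeword -> class 1
print(decode([1, 0, 1, 0, 0]))  # one bit flipped, still -> class 1
```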
arXiv Detail & Related papers (2020-10-11T01:06:49Z)
- Understanding Deep Architectures with Reasoning Layer [60.90906477693774]
We show that properties of the algorithm layers, such as convergence, stability, and sensitivity, are intimately related to the approximation and generalization abilities of the end-to-end model.
Our theory can provide useful guidelines for designing deep architectures with reasoning layers.
arXiv Detail & Related papers (2020-06-24T00:26:35Z)
- Learning to Stop While Learning to Predict [85.7136203122784]
Many algorithm-inspired deep models are restricted to a fixed depth for all inputs.
Similar to algorithms, the optimal depth of a deep architecture may be different for different input instances.
In this paper, we tackle this varying depth problem using a steerable architecture.
We show that the learned deep model along with the stopping policy improves the performances on a diverse set of tasks.
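A minimal sketch of this varying-depth idea, using a hand-written convergence test in place of the paper's learned stopping policy (all names below are illustrative, not from the paper): a block is applied repeatedly, and a per-input policy decides when to halt, so easy inputs exit earlier than hard ones.

```python
def adaptive_depth_forward(x, block, stop_policy, max_depth=20):
    """Apply `block` repeatedly; `stop_policy` decides per input when to
    halt, so different inputs see different effective depths."""
    h = x
    for depth in range(1, max_depth + 1):
        h_next = block(h)
        if stop_policy(h, h_next):
            return h_next, depth
        h = h_next
    return h, max_depth

# Toy block: one Newton step toward sqrt(2); the stopping "policy" is a
# simple convergence test standing in for a learned halting network.
block = lambda h: 0.5 * (h + 2.0 / h)
stop = lambda h, h_next: abs(h_next - h) < 1e-10

y, depth_far = adaptive_depth_forward(10.0, block, stop)   # far from the answer
_, depth_near = adaptive_depth_forward(1.5, block, stop)   # already close
print(y, depth_near, depth_far)  # the closer input halts at a smaller depth
```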
arXiv Detail & Related papers (2020-06-09T07:22:01Z)
- Structure preserving deep learning [1.2263454117570958]
Deep learning has risen to the foreground as a topic of massive interest.
There are multiple challenging mathematical problems involved in applying deep learning.
There is a growing effort to mathematically understand the structure in existing deep learning methods.
arXiv Detail & Related papers (2020-06-05T10:59:09Z)
- Introducing Fuzzy Layers for Deep Learning [5.209583609264815]
We introduce a new layer to deep learning: the fuzzy layer.
Traditionally, the network architecture of neural networks is composed of an input layer, some combination of hidden layers, and an output layer.
We propose the introduction of fuzzy layers into the deep learning architecture to exploit the powerful aggregation properties expressed through fuzzy methodologies.
arXiv Detail & Related papers (2020-02-21T19:33:30Z)
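One concrete fuzzy aggregation operator such a layer could build on is Ordered Weighted Averaging (OWA), which sorts its inputs before weighting them, so the weights select between max-like, min-like, and mean-like behavior. The sketch below is an illustrative NumPy implementation, not code from the paper:

```python
import numpy as np

def owa_layer(inputs, weights):
    """Ordered Weighted Averaging: sort each input vector in descending
    order, then take a weighted sum with the given weight vector."""
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0) and np.all(w >= 0)
    sorted_desc = np.sort(np.asarray(inputs, dtype=float), axis=-1)[..., ::-1]
    return sorted_desc @ w

x = np.array([0.2, 0.9, 0.5])
print(owa_layer(x, [1.0, 0.0, 0.0]))    # behaves as max -> 0.9
print(owa_layer(x, [0.0, 0.0, 1.0]))    # behaves as min -> 0.2
print(owa_layer(x, [1/3, 1/3, 1/3]))    # behaves as the arithmetic mean
```

Because the weights apply to rank positions rather than fixed inputs, a single weight vector smoothly interpolates between these classical aggregations.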
This list is automatically generated from the titles and abstracts of the papers in this site.