Learning Task-aware Robust Deep Learning Systems
- URL: http://arxiv.org/abs/2010.05125v2
- Date: Thu, 2 Dec 2021 02:39:50 GMT
- Title: Learning Task-aware Robust Deep Learning Systems
- Authors: Keji Han, Yun Li, Xianzhong Long, Yao Ge
- Abstract summary: A deep learning system consists of two parts: the deep learning task and the deep model.
In this paper, we adopt a binary and interval label encoding strategy to redefine the classification task.
Our method can be viewed as improving the robustness of deep learning systems from both the learning task and deep model.
- Score: 6.532642348343193
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many works demonstrate that deep learning systems are vulnerable to adversarial
attacks. A deep learning system consists of two parts: the deep learning task
and the deep model. Most existing works investigate the impact of the deep
model on the robustness of deep learning systems, ignoring the impact of the
learning task. In this paper, we adopt a binary and interval label encoding
strategy to redefine the classification task and design a corresponding loss to
improve the robustness of the deep learning system. Our method can thus be viewed
as improving the robustness of deep learning systems from both the learning task
and the deep model. Experimental results demonstrate that our learning task-aware
method is much more robust than traditional classification while retaining
accuracy.
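The abstract does not spell out the encoding or the loss, so the following is only a minimal sketch, in PyTorch, of what a binary label-encoding variant of a classification task could look like: class indices are mapped to bit vectors, the network predicts one logit per bit, and binary cross-entropy plays the role of the corresponding loss. The interval encoding and the paper's actual loss design may well differ; all names, dimensions, and the decoding step here are illustrative assumptions.
```python
# Hypothetical sketch of binary label encoding; NOT the paper's exact method.
import torch
import torch.nn as nn

NUM_CLASSES = 10
CODE_BITS = 4  # ceil(log2(NUM_CLASSES)); an assumption, not from the paper

def binary_encode(labels: torch.Tensor, bits: int = CODE_BITS) -> torch.Tensor:
    """Map integer class labels to {0,1} bit vectors (binary codewords)."""
    powers = 2 ** torch.arange(bits - 1, -1, -1, device=labels.device)
    return ((labels.unsqueeze(1) // powers) % 2).float()

# The model outputs one logit per code bit instead of one logit per class,
# and binary cross-entropy serves as the "corresponding loss".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(),
                      nn.Linear(256, CODE_BITS))
criterion = nn.BCEWithLogitsLoss()

x = torch.randn(32, 1, 28, 28)            # dummy input batch
y = torch.randint(0, NUM_CLASSES, (32,))  # dummy integer labels
loss = criterion(model(x), binary_encode(y))
loss.backward()

# At test time, threshold the bits and decode back to a class index
# (a crude nearest-codeword decode; clamped to the valid label range).
with torch.no_grad():
    bits = (torch.sigmoid(model(x)) > 0.5).long()
    powers = 2 ** torch.arange(CODE_BITS - 1, -1, -1)
    preds = (bits * powers).sum(dim=1).clamp(max=NUM_CLASSES - 1)
```
The intuition behind such an encoding is that an adversary must now flip several independent bit predictions, rather than one argmax decision, to change the predicted class.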
Related papers
- Accelerating Deep Learning with Fixed Time Budget [2.190627491782159]
This paper proposes an effective technique for training arbitrary deep learning models within fixed time constraints.
The proposed method is extensively evaluated in both classification and regression tasks in computer vision.
arXiv Detail & Related papers (2024-10-03T21:18:04Z)
- Meta-Learning Loss Functions for Deep Neural Networks [2.4258031099152735]
This thesis explores the concept of meta-learning to improve performance, through the often-overlooked component of the loss function.
The loss function is a vital component of a learning system: it represents the primary learning objective, and success is determined and quantified by how well the system optimizes for that objective.
arXiv Detail & Related papers (2024-06-14T04:46:14Z)
- Imitation Game: A Model-based and Imitation Learning Deep Reinforcement Learning Hybrid [39.58317527488534]
In this paper, we present work in progress towards a hybrid agent architecture that combines model-based Deep Reinforcement Learning with imitation learning to overcome both problems.
arXiv Detail & Related papers (2024-04-02T09:55:30Z)
- Continual Learning, Fast and Slow [75.53144246169346]
According to the Complementary Learning Systems theory, humans achieve effective continual learning through two complementary systems.
We propose DualNets (for Dual Networks), a general continual learning framework comprising a fast learning system for supervised learning of specific tasks and a slow learning system for learning task-agnostic general representations via Self-Supervised Learning (SSL).
We demonstrate the promising results of DualNets on a wide range of continual learning protocols, ranging from the standard offline, task-aware setting to the challenging online, task-free scenario.
arXiv Detail & Related papers (2022-09-06T10:48:45Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- Fault-Tolerant Deep Learning: A Hierarchical Perspective [12.315753706063324]
We conduct a comprehensive survey of fault-tolerant deep learning design approaches.
We investigate these approaches at the model, architecture, and circuit layers, as well as across layers.
arXiv Detail & Related papers (2022-04-05T02:31:18Z)
- Transferability in Deep Learning: A Survey [80.67296873915176]
The ability to acquire and reuse knowledge is known as transferability in deep learning.
We present this survey to connect different isolated areas in deep learning with their relation to transferability.
We implement a benchmark and an open-source library, enabling a fair evaluation of deep learning methods in terms of transferability.
arXiv Detail & Related papers (2022-01-15T15:03:17Z)
- Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve performance of embedding networks.
arXiv Detail & Related papers (2020-12-31T08:21:37Z)
- Learning to Stop While Learning to Predict [85.7136203122784]
Many algorithm-inspired deep models are restricted to a fixed depth for all inputs.
Similar to algorithms, the optimal depth of a deep architecture may be different for different input instances.
In this paper, we tackle this varying depth problem using a steerable architecture.
We show that the learned deep model along with the stopping policy improves the performances on a diverse set of tasks.
arXiv Detail & Related papers (2020-06-09T07:22:01Z)
- Structure preserving deep learning [1.2263454117570958]
Deep learning has risen to the foreground as a topic of massive interest.
There are multiple challenging mathematical problems involved in applying deep learning.
There is a growing effort to mathematically understand the structure in existing deep learning methods.
arXiv Detail & Related papers (2020-06-05T10:59:09Z)
- Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning [91.13113161754022]
We introduce timing-based adversarial strategies against a DRL-based navigation system by jamming physical noise patterns into selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
arXiv Detail & Related papers (2020-02-20T21:39:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.