Transferability in Deep Learning: A Survey
- URL: http://arxiv.org/abs/2201.05867v1
- Date: Sat, 15 Jan 2022 15:03:17 GMT
- Title: Transferability in Deep Learning: A Survey
- Authors: Junguang Jiang, Yang Shu, Jianmin Wang, Mingsheng Long
- Abstract summary: The ability to acquire and reuse knowledge is known as transferability in deep learning.
We present this survey to connect different isolated areas in deep learning with their relation to transferability.
We implement a benchmark and an open-source library, enabling a fair evaluation of deep learning methods in terms of transferability.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of deep learning algorithms generally depends on large-scale data, while humans appear to have an inherent ability to transfer knowledge, by recognizing and applying relevant knowledge from previous learning experiences when encountering and solving unseen tasks. Such an ability to acquire and reuse knowledge is known as transferability in deep learning. It underlies the long-term quest to make deep learning as data-efficient as human learning, and has motivated the fruitful design of more powerful deep learning algorithms. We present this survey to connect different isolated areas in deep learning through their relation to transferability, and to provide a unified and complete view of transferability across the whole lifecycle of deep learning. The survey elaborates on the fundamental goals and challenges in parallel with the core principles and methods, covering recent cornerstones in deep architectures, pre-training, task adaptation and domain adaptation. It highlights open questions on the appropriate objectives for learning transferable knowledge and for adapting that knowledge to new tasks and domains while avoiding catastrophic forgetting and negative transfer. Finally, we implement a benchmark and an open-source library, enabling a fair evaluation of deep learning methods in terms of transferability.
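To make the pre-training and task adaptation stages concrete, below is a minimal sketch of the standard pre-train-then-fine-tune workflow in plain PyTorch (not the survey's own benchmark library); the backbone choice, class count, and learning rates are illustrative assumptions.

```python
# A minimal sketch of the pre-train-then-adapt workflow the survey studies.
# It loads an ImageNet-pre-trained backbone and fine-tunes it on a new task;
# the class count and hyperparameters are illustrative assumptions, not the
# survey's benchmark configuration.
import torch
import torch.nn as nn
from torchvision import models

def build_finetune_model(num_classes: int) -> nn.Module:
    # Acquire transferable knowledge: start from pre-trained weights.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    # Task adaptation: replace the task-specific head for the new label space.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_finetune_model(num_classes=10)
# A smaller learning rate on the pre-trained backbone than on the new head is
# a common heuristic to limit catastrophic forgetting of the representation.
backbone_params = [p for n, p in model.named_parameters() if not n.startswith("fc.")]
optimizer = torch.optim.SGD(
    [
        {"params": backbone_params, "lr": 1e-3},
        {"params": model.fc.parameters(), "lr": 1e-2},
    ],
    momentum=0.9,
)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```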
Related papers
- Knowledge-augmented Deep Learning and Its Applications: A Survey (2022-11-30)
Knowledge-augmented deep learning (KADL) aims to identify domain knowledge and integrate it into deep models for data-efficient, generalizable, and interpretable deep learning.
This survey subsumes existing works and offers a bird's-eye view of research in the general area of knowledge-augmented deep learning.
- Learning and Retrieval from Prior Data for Skill-based Imitation Learning (2022-10-20)
We develop a skill-based imitation learning framework that extracts temporally extended sensorimotor skills from prior data.
We identify several key design choices that significantly improve performance on novel tasks.
- Anti-Retroactive Interference for Lifelong Learning (2022-08-27)
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
A theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
- Learning with Recoverable Forgetting (2022-07-17)
Learning wIth Recoverable Forgetting (LIRF) explicitly handles task- or sample-specific knowledge removal and recovery.
Specifically, LIRF introduces two innovative schemes, namely knowledge deposit and withdrawal.
Experiments on several datasets demonstrate that the proposed LIRF strategy yields encouraging results with strong generalization capability.
- Multi-Source Transfer Learning for Deep Model-Based Reinforcement Learning (2022-05-28)
A crucial challenge in reinforcement learning is to reduce the number of interactions with the environment that an agent requires to master a given task.
Transfer learning proposes to address this issue by re-using knowledge from previously learned tasks.
The goal of this paper is to address these issues with modular multi-source transfer learning techniques.
- Discussion of Ensemble Learning under the Era of Deep Learning (2021-01-21)
Ensemble deep learning has shown significant performance in improving the generalization of learning systems.
However, the time and space overheads of training multiple base deep learners and testing with the ensemble are far greater than those of traditional ensemble learning.
An urgent open problem is how to retain the significant advantages of ensemble deep learning while reducing the required time and space overheads, as the sketch below makes concrete.
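The overhead this entry highlights is easy to see in code. Below is a minimal sketch of deep ensembling by prediction averaging, assuming independently trained base learners; the toy model architecture and the ensemble size of five are illustrative assumptions.

```python
# A minimal sketch of deep ensembling by prediction averaging. It makes the
# overhead concrete: memory and inference time both grow linearly with the
# number of base learners, and each must be trained separately.
import torch
import torch.nn as nn

def make_base_learner() -> nn.Module:
    # Stand-in base learner; in practice each member is a full deep network.
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

ensemble = [make_base_learner() for _ in range(5)]  # 5x the parameters of one model

@torch.no_grad()
def ensemble_predict(x: torch.Tensor) -> torch.Tensor:
    # Five forward passes per input: inference cost scales with ensemble size.
    probs = torch.stack([m(x).softmax(dim=-1) for m in ensemble])
    return probs.mean(dim=0)

x = torch.randn(8, 32)
print(ensemble_predict(x).shape)  # torch.Size([8, 10])
```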
- Transfer Learning in Deep Reinforcement Learning: A Survey (2020-09-16)
Reinforcement learning is a learning paradigm for solving sequential decision-making problems.
Recent years have witnessed remarkable progress in reinforcement learning with the fast development of deep neural networks.
Transfer learning has arisen to tackle the various challenges faced by reinforcement learning.
- Learning Transferable Concepts in Deep Reinforcement Learning (2020-05-16)
We show that learning discrete representations of sensory inputs can provide a high-level abstraction that is common across multiple tasks.
In particular, we show that it is possible to learn such representations by self-supervision, following an information theoretic approach.
Our method is able to learn concepts in locomotion and optimal control tasks that increase sample efficiency in both known and unknown tasks.
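As one concrete, hypothetical illustration of learning discrete representations of sensory inputs, the sketch below uses a vector-quantization bottleneck with a straight-through gradient estimator; this is a standard discretization technique, and the paper's specific self-supervised, information-theoretic objective is not reproduced here.

```python
# A minimal sketch of a discrete representation: continuous encoder features
# are snapped to their nearest codebook entry, and gradients flow through a
# straight-through estimator. Sizes and data are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VQBottleneck(nn.Module):
    def __init__(self, num_codes: int = 64, dim: int = 32, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.beta = beta

    def forward(self, z: torch.Tensor):
        # Assign each continuous feature to its nearest codebook entry.
        dists = torch.cdist(z, self.codebook.weight)  # (batch, num_codes)
        codes = dists.argmin(dim=-1)                  # discrete concept ids
        z_q = self.codebook(codes)
        # Codebook and commitment losses pull codes and encoder together.
        loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())
        # Straight-through estimator: copy gradients from z_q back to z.
        z_q = z + (z_q - z).detach()
        return z_q, codes, loss

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32))
vq = VQBottleneck()
obs = torch.randn(8, 16)  # a batch of toy sensory inputs
z_q, codes, vq_loss = vq(encoder(obs))
# `codes` is a task-agnostic discrete abstraction that downstream policies
# for different tasks could, in principle, share.
```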