Disentangling Transfer and Interference in Multi-Domain Learning
- URL: http://arxiv.org/abs/2107.05445v1
- Date: Fri, 2 Jul 2021 01:30:36 GMT
- Title: Disentangling Transfer and Interference in Multi-Domain Learning
- Authors: Yipeng Zhang, Tyler L. Hayes, Christopher Kanan
- Abstract summary: We study the conditions where interference and knowledge transfer occur in multi-domain learning.
We propose new metrics disentangling interference and transfer and set up experimental protocols.
We demonstrate our findings on the CIFAR-100, MiniPlaces, and Tiny-ImageNet datasets.
- Score: 53.34444188552444
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans are incredibly good at transferring knowledge from one domain to
another, enabling rapid learning of new tasks. Likewise, transfer learning has
enabled enormous success in many computer vision problems using pretraining.
However, the benefits of transfer in multi-domain learning, where a network
learns multiple tasks defined by different datasets, have not been adequately
studied. Learning multiple domains could be beneficial, or these domains could
interfere with each other given limited network capacity. In this work, we
decipher the conditions where interference and knowledge transfer occur in
multi-domain learning. We propose new metrics disentangling interference and
transfer and set up experimental protocols. We further examine the roles of
network capacity, task grouping, and dynamic loss weighting in reducing
interference and facilitating transfer. We demonstrate our findings on the
CIFAR-100, MiniPlaces, and Tiny-ImageNet datasets.
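The metric definitions themselves are given in the paper; as a loose illustration of the comparison they build on, per-domain performance of a jointly trained multi-domain model versus single-domain baselines, a minimal sketch might look like the following. The function name and the numbers are hypothetical, not taken from the paper.

```python
# Illustrative sketch only (not the paper's exact metrics): compare per-domain
# accuracy of a jointly trained multi-domain model against single-domain
# baselines. A positive gap is consistent with transfer, a negative gap with
# interference.

def per_domain_gaps(joint_acc: dict, single_acc: dict) -> dict:
    """Both arguments map a domain name to test accuracy on that domain."""
    return {d: round(joint_acc[d] - single_acc[d], 3) for d in single_acc}


# Hypothetical numbers for the three datasets used in the paper.
single = {"CIFAR-100": 0.72, "MiniPlaces": 0.55, "Tiny-ImageNet": 0.60}
joint = {"CIFAR-100": 0.74, "MiniPlaces": 0.53, "Tiny-ImageNet": 0.62}
print(per_domain_gaps(joint, single))
# {'CIFAR-100': 0.02, 'MiniPlaces': -0.02, 'Tiny-ImageNet': 0.02}
```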
Related papers
- Direct Distillation between Different Domains [97.39470334253163]
We propose a new one-stage method dubbed "Direct Distillation between Different Domains" (4Ds).
We first design a learnable adapter based on the Fourier transform to separate the domain-invariant knowledge from the domain-specific knowledge.
We then build a fusion-activation mechanism to transfer the valuable domain-invariant knowledge to the student network.
arXiv Detail & Related papers (2024-01-12T02:48:51Z)
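The 4Ds adapter itself is not reproduced here; as a minimal PyTorch sketch of the general idea named in the summary, a learnable frequency-domain gate that splits a feature map into a kept component and a residual, one could write something like the code below. The class name and the sigmoid gating scheme are assumptions for illustration, not the paper's architecture, and the fusion-activation mechanism is omitted.

```python
import torch
import torch.nn as nn

class FourierAdapter(nn.Module):
    """Illustrative sketch (not the 4Ds implementation): a learnable mask on the
    2-D FFT of a feature map splits it into a kept part (playing the role of
    domain-invariant content) and a residual part (domain-specific content)."""

    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # Learnable per-frequency gate; sigmoid keeps it in (0, 1).
        self.mask_logits = nn.Parameter(torch.zeros(channels, height, width))

    def forward(self, x: torch.Tensor):
        # x: (batch, channels, height, width) real-valued feature map.
        freq = torch.fft.fft2(x, norm="ortho")
        gate = torch.sigmoid(self.mask_logits)            # broadcast over batch
        kept = torch.fft.ifft2(freq * gate, norm="ortho").real
        residual = x - kept
        return kept, residual

# Usage on a dummy feature map.
adapter = FourierAdapter(channels=64, height=8, width=8)
features = torch.randn(2, 64, 8, 8)
invariant_like, specific_like = adapter(features)
```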
- A Framework for Few-Shot Policy Transfer through Observation Mapping and Behavior Cloning [6.048526012097133]
This work proposes a framework for Few-Shot Policy Transfer between two domains through Observation Mapping and Behavior Cloning.
We use Generative Adversarial Networks (GANs) along with a cycle-consistency loss to map the observations between the source and target domains and later use this learned mapping to clone the successful source task behavior policy to the target domain.
arXiv Detail & Related papers (2023-10-13T03:15:42Z)
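The observation mapping described above is CycleGAN-style; as a rough illustration only (not the paper's implementation), the cycle-consistency term could be written as below, with the two generators passed in as arbitrary callables and the adversarial losses omitted.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(obs_src, obs_tgt, g_src2tgt, g_tgt2src):
    """Illustrative CycleGAN-style cycle loss: mapping an observation to the
    other domain and back should reconstruct the original observation."""
    src_cycled = g_tgt2src(g_src2tgt(obs_src))
    tgt_cycled = g_src2tgt(g_tgt2src(obs_tgt))
    return F.l1_loss(src_cycled, obs_src) + F.l1_loss(tgt_cycled, obs_tgt)

# Toy usage with linear layers standing in for real observation mappers.
g_ab, g_ba = torch.nn.Linear(16, 16), torch.nn.Linear(16, 16)
loss = cycle_consistency_loss(torch.randn(4, 16), torch.randn(4, 16), g_ab, g_ba)
```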
- Learn what matters: cross-domain imitation learning with task-relevant embeddings [77.34726150561087]
We study how an autonomous agent learns to perform a task from demonstrations in a different domain, such as a different environment or different agent.
We propose a scalable framework that enables cross-domain imitation learning without access to additional demonstrations or further domain knowledge.
arXiv Detail & Related papers (2022-09-24T21:56:58Z)
- Factors of Influence for Transfer Learning across Diverse Appearance Domains and Task Types [50.1843146606122]
A simple form of transfer learning is common in current state-of-the-art computer vision models.
Previous systematic studies of transfer learning have been limited and the circumstances in which it is expected to work are not fully understood.
In this paper we carry out an extensive experimental exploration of transfer learning across vastly different image domains.
arXiv Detail & Related papers (2021-03-24T16:24:20Z)
- Unsupervised Transfer Learning for Spatiotemporal Predictive Networks [90.67309545798224]
We study how to transfer knowledge from a zoo of models pretrained without supervision to another network.
Our motivation is that models are expected to understand complex dynamics from different sources.
Our approach yields significant improvements on three benchmarks for spatiotemporal prediction, and benefits the target task even from less relevant source models.
arXiv Detail & Related papers (2020-09-24T15:40:55Z)
- What is being transferred in transfer learning? [51.6991244438545]
We show that when training from pre-trained weights, the model stays in the same basin in the loss landscape, and different instances of such a model are similar in feature space and close in parameter space.
arXiv Detail & Related papers (2020-08-26T17:23:40Z)
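A common way to probe the "same basin" observation is to evaluate the loss along the straight line between two sets of weights. The sketch below is an illustrative reconstruction of that check, not the authors' code, and assumes both models share the same architecture.

```python
import copy
import torch

@torch.no_grad()
def interpolation_losses(model_a, model_b, loss_fn, data_loader, steps=11):
    """Evaluate loss at evenly spaced points on the line segment between the
    weights of model_a and model_b. A path without a loss barrier is consistent
    with the two solutions lying in the same basin."""
    probe = copy.deepcopy(model_a)
    params_a, params_b = model_a.state_dict(), model_b.state_dict()
    losses = []
    for i in range(steps):
        alpha = i / (steps - 1)
        mixed = {}
        for name, tensor_a in params_a.items():
            if tensor_a.is_floating_point():
                mixed[name] = (1 - alpha) * tensor_a + alpha * params_b[name]
            else:  # e.g. integer BatchNorm counters: keep as-is
                mixed[name] = tensor_a
        probe.load_state_dict(mixed)
        probe.eval()
        total, count = 0.0, 0
        for inputs, targets in data_loader:
            total += loss_fn(probe(inputs), targets).item() * targets.shape[0]
            count += targets.shape[0]
        losses.append(total / count)
    return losses
```

A roughly flat curve across the interpolation is the usual signature of a shared basin; a pronounced bump indicates a barrier between the two solutions.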
- Limits of Transfer Learning [0.0]
We show the need to carefully select which sets of information to transfer and the need for dependence between transferred information and target problems.
These results build on the algorithmic search framework for machine learning, allowing the results to apply to a wide range of learning problems using transfer.
arXiv Detail & Related papers (2020-06-23T01:48:23Z)
- Mutual Information Based Knowledge Transfer Under State-Action Dimension Mismatch [14.334987432342707]
We propose a new framework for transfer learning where the teacher and the student can have arbitrarily different state- and action-spaces.
To handle this mismatch, we produce embeddings which can systematically extract knowledge from the teacher policy and value networks.
We demonstrate successful transfer learning in situations when the teacher and student have different state- and action-spaces.
arXiv Detail & Related papers (2020-06-12T09:51:17Z)
- Multitask learning over graphs: An Approach for Distributed, Streaming Machine Learning [46.613346075513206]
Multitask learning is an approach to inductive transfer learning.
Recent years have witnessed an increasing ability to collect data in a distributed and streaming manner.
This requires the design of new strategies for learning jointly multiple tasks from streaming data over distributed (or networked) systems.
arXiv Detail & Related papers (2020-01-07T15:32:57Z)