Unsupervised Multi-Target Domain Adaptation Through Knowledge
Distillation
- URL: http://arxiv.org/abs/2007.07077v4
- Date: Thu, 19 Nov 2020 20:07:22 GMT
- Title: Unsupervised Multi-Target Domain Adaptation Through Knowledge
Distillation
- Authors: Le Thanh Nguyen-Meidine, Atif Belal, Madhu Kiran, Jose Dolz,
Louis-Antoine Blais-Morin, Eric Granger
- Abstract summary: Unsupervised domain adaptation (UDA) seeks to alleviate the problem of domain shift between the distribution of unlabeled target-domain data and labeled source-domain data.
In this paper, we propose a novel unsupervised MTDA approach to train a CNN that can generalize well across multiple target domains.
- Score: 14.088776449829345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation (UDA) seeks to alleviate the problem of domain
shift between the distribution of unlabeled data from the target domain and that of
labeled data from the source domain. While the single-target UDA scenario is
well studied in the literature, Multi-Target Domain Adaptation (MTDA) remains
largely unexplored despite its practical importance, e.g., in multi-camera
video-surveillance applications. The MTDA problem can be addressed by adapting
one specialized model per target domain, although this solution is too costly
in many real-world applications. Blending multiple targets for MTDA has been
proposed, yet this solution may lead to a reduction in model specificity and
accuracy. In this paper, we propose a novel unsupervised MTDA approach to train
a CNN that can generalize well across multiple target domains. Our
Multi-Teacher MTDA (MT-MTDA) method relies on multi-teacher knowledge
distillation (KD) to iteratively distill target domain knowledge from multiple
teachers to a common student. The KD process is performed in a progressive
manner, where the student is trained by each teacher on how to perform UDA for
a specific target, instead of directly learning domain adapted features.
Finally, instead of combining the knowledge from every teacher at once, MT-MTDA
alternates between the teachers that distill knowledge, thereby preserving the
specificity of each target (teacher) as the student learns to adapt.
MT-MTDA is compared against state-of-the-art methods on several challenging UDA
benchmarks, and empirical results show that our proposed model can provide a
considerably higher level of accuracy across multiple target domains. Our code
is available at: https://github.com/LIVIAETS/MT-MTDA
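The alternating multi-teacher distillation loop described above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: linear softmax classifiers stand in for the teacher and student CNNs, and the function names (`distill_step`, `mt_mtda`) and hyperparameters are hypothetical placeholders.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax, row-wise."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distill_step(student_W, teacher_W, X, lr=0.1, T=2.0):
    """One KD step: nudge the student's linear classifier toward the
    teacher's softened predictions on unlabeled target-domain data X."""
    p_teacher = softmax(X @ teacher_W, T)          # soft targets
    p_student = softmax(X @ student_W, T)          # current predictions
    grad = X.T @ (p_student - p_teacher) / len(X)  # grad of soft cross-entropy
    return student_W - lr * grad

def mt_mtda(student_W, teachers, targets, rounds=50):
    """Alternate distillation across teachers (one per target domain)
    rather than blending their outputs, so each target's specificity
    is preserved while a single common student is trained."""
    for _ in range(rounds):
        for teacher_W, X in zip(teachers, targets):
            student_W = distill_step(student_W, teacher_W, X)
    return student_W
```

In this toy setup, alternating (rather than averaging teacher outputs) means each update is driven by exactly one target domain's teacher, mirroring the paper's argument that blending targets reduces model specificity.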
Related papers
- More is Better: Deep Domain Adaptation with Multiple Sources [34.26271755493111]
Multi-source domain adaptation (MDA) is a powerful and practical extension in which the labeled data may be collected from multiple sources with different distributions.
In this survey, we first define various MDA strategies. Then we systematically summarize and compare modern MDA methods in the deep learning era from different perspectives.
arXiv Detail & Related papers (2024-05-01T03:37:12Z)
- OurDB: Ouroboric Domain Bridging for Multi-Target Domain Adaptive Semantic Segmentation [8.450397069717727]
Multi-target domain adaptation (MTDA) for semantic segmentation poses a significant challenge, as it involves multiple target domains with varying distributions.
Previous MTDA approaches typically employ multiple teacher architectures, where each teacher specializes in one target domain to simplify the task.
We propose an ouroboric domain bridging (OurDB) framework, offering an efficient solution to the MTDA problem using a single teacher architecture.
arXiv Detail & Related papers (2024-03-18T08:55:48Z)
- Subject-Based Domain Adaptation for Facial Expression Recognition [51.10374151948157]
Adapting a deep learning model to a specific target individual is a challenging facial expression recognition task.
This paper introduces a new MSDA method for subject-based domain adaptation in FER.
It efficiently leverages information from multiple source subjects to adapt a deep FER model to a single target individual.
arXiv Detail & Related papers (2023-12-09T18:40:37Z)
- Learning Feature Decomposition for Domain Adaptive Monocular Depth Estimation [51.15061013818216]
Supervised approaches have led to great success with the advance of deep learning, but they rely on large quantities of ground-truth depth annotations.
Unsupervised domain adaptation (UDA) transfers knowledge from labeled source data to unlabeled target data, so as to relax the constraint of supervised learning.
We propose a novel UDA method for MDE, referred to as Learning Feature Decomposition for Adaptation (LFDA), which learns to decompose the feature space into content and style components.
arXiv Detail & Related papers (2022-07-30T08:05:35Z)
- Knowledge Distillation for Multi-Target Domain Adaptation in Real-Time Person Re-Identification [10.672152844970151]
Multi-target domain adaptation (MTDA) has not been widely addressed in the ReID literature.
We introduce a new MTDA method based on knowledge distillation (KD-ReID) that is suitable for real-time person ReID applications.
Our method adapts a common lightweight student backbone CNN over the target domains by alternately distilling from multiple specialized teacher CNNs, each one adapted on data from a specific target domain.
arXiv Detail & Related papers (2022-05-12T17:28:02Z)
- Multi-Head Distillation for Continual Unsupervised Domain Adaptation in Semantic Segmentation [38.10483890861357]
This work focuses on a novel UDA framework, continual UDA, in which models operate on multiple target domains discovered sequentially.
We propose MuHDi, for Multi-Head Distillation, a method that solves the catastrophic forgetting problem, inherent in continual learning tasks.
arXiv Detail & Related papers (2022-04-25T14:03:09Z)
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z)
- Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation [78.28390172958643]
We identify two key aspects that can help to alleviate multiple domain shifts in multi-target domain adaptation (MTDA).
We propose Curriculum Graph Co-Teaching (CGCT) that uses a dual classifier head, with one of them being a graph convolutional network (GCN) which aggregates features from similar samples across the domains.
When the domain labels are available, we propose Domain-aware Curriculum Learning (DCL), a sequential adaptation strategy that first adapts on the easier target domains, followed by the harder ones.
arXiv Detail & Related papers (2021-04-01T23:41:41Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
- Multi-source Domain Adaptation in the Deep Learning Era: A Systematic Survey [53.656086832255944]
Multi-source domain adaptation (MDA) is a powerful extension in which the labeled data may be collected from multiple sources.
MDA has attracted increasing attention in both academia and industry.
arXiv Detail & Related papers (2020-02-26T08:07:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.