Multi-Head Distillation for Continual Unsupervised Domain Adaptation in
Semantic Segmentation
- URL: http://arxiv.org/abs/2204.11667v1
- Date: Mon, 25 Apr 2022 14:03:09 GMT
- Title: Multi-Head Distillation for Continual Unsupervised Domain Adaptation in
Semantic Segmentation
- Authors: Antoine Saporta, Arthur Douillard, Tuan-Hung Vu, Patrick Pérez and Matthieu Cord
- Abstract summary: This work focuses on a novel framework for learning UDA, continual UDA, in which models operate on multiple target domains discovered sequentially.
We propose MuHDi, for Multi-Head Distillation, a method that addresses the catastrophic forgetting problem inherent in continual learning tasks.
- Score: 38.10483890861357
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised Domain Adaptation (UDA) is a transfer learning task which aims
at training on an unlabeled target domain by leveraging a labeled source
domain. Beyond the traditional scope of UDA with a single source domain and a
single target domain, real-world perception systems face a variety of scenarios
to handle, from varying lighting conditions to many cities around the world. In
this context, UDA with several target domains raises additional challenges due
to the distribution shifts across the different target domains. This work
focuses on a novel framework for learning UDA, continual UDA, in which models
operate on multiple target domains discovered sequentially, without access to
previous target domains. We propose MuHDi, for Multi-Head Distillation, a
method that addresses the catastrophic forgetting problem inherent in continual
learning tasks. MuHDi performs distillation at multiple levels from the
previous model as well as from an auxiliary target-specialist segmentation head.
We report extensive ablations and experiments on challenging multi-target UDA
semantic segmentation benchmarks to validate the proposed learning scheme and
architecture.
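The distillation idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the choice of pixel-wise KL divergence, and the loss weights `lambda_prev` and `lambda_spec` are assumptions for illustration. The sketch only shows the general shape of combining a distillation term from a frozen previous model (anti-forgetting) with one from an auxiliary target-specialist head.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax over the class axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kl_distillation(student_logits, teacher_logits, temperature=1.0):
    """Mean pixel-wise KL(teacher || student) over class probabilities.

    Logits are arrays of shape (batch, H, W, num_classes).
    """
    p = softmax(teacher_logits / temperature)
    q = softmax(student_logits / temperature)
    eps = 1e-8  # avoid log(0)
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

def multi_head_distillation_loss(student_out, prev_model_out, specialist_out,
                                 lambda_prev=1.0, lambda_spec=1.0):
    """Combine two distillation terms (weights are hypothetical):
    - from the frozen previous model, to limit catastrophic forgetting,
    - from the auxiliary target-specialist head, to adapt to the new target.
    """
    return (lambda_prev * kl_distillation(student_out, prev_model_out)
            + lambda_spec * kl_distillation(student_out, specialist_out))
```

In the continual setting sketched here, the previous model is kept frozen while the current model trains on the newly discovered target domain, so the first term anchors predictions on earlier targets without needing their data.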
Related papers
- OurDB: Ouroboric Domain Bridging for Multi-Target Domain Adaptive Semantic Segmentation [8.450397069717727]
Multi-target domain adaptation (MTDA) for semantic segmentation poses a significant challenge, as it involves multiple target domains with varying distributions.
Previous MTDA approaches typically employ multiple teacher architectures, where each teacher specializes in one target domain to simplify the task.
We propose an ouroboric domain bridging (OurDB) framework, offering an efficient solution to the MTDA problem using a single teacher architecture.
arXiv Detail & Related papers (2024-03-18T08:55:48Z)
- Revisiting the Domain Shift and Sample Uncertainty in Multi-source Active Domain Transfer [69.82229895838577]
Active Domain Adaptation (ADA) aims to maximally boost model adaptation in a new target domain by actively selecting a limited number of target data to annotate.
This setting neglects the more practical scenario where training data are collected from multiple sources.
This motivates us to target a new and challenging setting of knowledge transfer that extends ADA from a single source domain to multiple source domains.
arXiv Detail & Related papers (2023-11-21T13:12:21Z)
- Student Become Decathlon Master in Retinal Vessel Segmentation via Dual-teacher Multi-target Domain Adaptation [1.121358474059223]
We propose RVms, a novel unsupervised multi-target domain adaptation approach to segment retinal vessels (RVs) from multimodal and multicenter retinal images.
RVms is found to be very close to the target-trained Oracle in terms of segmenting the RVs, largely outperforming other state-of-the-art methods.
arXiv Detail & Related papers (2022-03-07T02:20:14Z)
- Multi-Source Unsupervised Domain Adaptation via Pseudo Target Domain [0.0]
Multi-source domain adaptation (MDA) aims to transfer knowledge from multiple source domains to an unlabeled target domain.
We propose a novel MDA approach, termed Pseudo Target for MDA (PTMDA).
PTMDA maps each group of source and target domains into a group-specific subspace using adversarial learning with a metric constraint.
We show that PTMDA as a whole can reduce the target error bound and leads to a better approximation of the target risk in MDA settings.
arXiv Detail & Related papers (2022-02-22T08:37:16Z)
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z)
- Discover, Hallucinate, and Adapt: Open Compound Domain Adaptation for Semantic Segmentation [91.30558794056056]
Unsupervised domain adaptation (UDA) for semantic segmentation has been attracting attention recently.
We present a novel framework based on three main design principles: discover, hallucinate, and adapt.
We evaluate our solution on the standard GTA to C-driving benchmark and achieve new state-of-the-art results.
arXiv Detail & Related papers (2021-10-08T13:20:09Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
- Multi-source Domain Adaptation in the Deep Learning Era: A Systematic Survey [53.656086832255944]
Multi-source domain adaptation (MDA) is a powerful extension in which the labeled data may be collected from multiple sources.
MDA has attracted increasing attention in both academia and industry.
arXiv Detail & Related papers (2020-02-26T08:07:58Z)
- MADAN: Multi-source Adversarial Domain Aggregation Network for Domain Adaptation [58.38749495295393]
Domain adaptation aims to learn a transferable model to bridge the domain shift between one labeled source domain and another sparsely labeled or unlabeled target domain.
Recent multi-source domain adaptation (MDA) methods do not consider the pixel-level alignment between sources and target.
We propose a novel MDA framework to address these challenges.
arXiv Detail & Related papers (2020-02-19T21:22:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.