Knowledge Distillation for Multi-Target Domain Adaptation in Real-Time
Person Re-Identification
- URL: http://arxiv.org/abs/2205.06237v1
- Date: Thu, 12 May 2022 17:28:02 GMT
- Title: Knowledge Distillation for Multi-Target Domain Adaptation in Real-Time
Person Re-Identification
- Authors: Félix Remigereau, Djebril Mekhazni, Sajjad Abdoli, Le Thanh
Nguyen-Meidine, Rafael M. O. Cruz and Eric Granger
- Abstract summary: Multi-target domain adaptation (MTDA) has not been widely addressed in the ReID literature.
We introduce a new MTDA method based on knowledge distillation (KD-ReID) that is suitable for real-time person ReID applications.
Our method adapts a common lightweight student backbone CNN over the target domains by alternately distilling from multiple specialized teacher CNNs, each adapted to data from a specific target domain.
- Score: 10.672152844970151
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the recent success of deep learning architectures, person
re-identification (ReID) remains a challenging problem in real-world
applications. Several unsupervised single-target domain adaptation (STDA)
methods have recently been proposed to limit the decline in ReID accuracy
caused by the domain shift that typically occurs between source and target
video data. Given the multimodal nature of person ReID data (due to variations
across camera viewpoints and capture conditions), training a common CNN
backbone to address domain shifts across multiple target domains can provide
an efficient solution for real-time ReID applications. Although multi-target
domain adaptation (MTDA) has not been widely addressed in the ReID literature,
a straightforward approach consists of blending different target datasets and
performing STDA on the mixture to train a common CNN. However, this approach
may lead to poor generalization, especially when blending a growing number of
distinct target domains to train a smaller CNN.
To alleviate this problem, we introduce a new MTDA method based on knowledge
distillation (KD-ReID) that is suitable for real-time person ReID applications.
Our method adapts a common lightweight student backbone CNN over the target
domains by alternately distilling from multiple specialized teacher CNNs,
each adapted to data from a specific target domain. Extensive experiments
conducted on several challenging person ReID datasets indicate that our
approach outperforms state-of-the-art methods for MTDA, including blending methods,
particularly when training a compact CNN backbone like OSNet. Results suggest
that our flexible MTDA approach can be employed to design cost-effective ReID
systems for real-time video surveillance applications.
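Code sketch (illustrative). The alternating multi-teacher distillation described in the abstract can be pictured with the minimal PyTorch-style sketch below. It is not the authors' implementation: the student and teacher models, the unlabeled target loaders, the optimizer settings, and the embedding-matching distillation loss (MSE on L2-normalized features) are all assumptions made for illustration.

```python
# Minimal illustrative sketch of alternating multi-teacher knowledge distillation
# for multi-target domain adaptation (MTDA). Not the authors' code: the models,
# loaders, loss, and hyperparameters below are assumptions for illustration only.
import itertools

import torch
import torch.nn.functional as F


def distill_mtda(student, teachers, target_loaders, iterations=10000, lr=3.5e-4,
                 device="cuda"):
    """Adapt a lightweight `student` CNN (e.g., OSNet) to all target domains by
    alternately distilling from frozen, domain-specific `teachers`.

    teachers:       dict mapping domain name -> teacher CNN already adapted
                    (e.g., with an STDA method) to that target domain.
    target_loaders: dict mapping domain name -> DataLoader of unlabeled
                    target-domain images.
    """
    student = student.to(device).train()
    for teacher in teachers.values():
        teacher.to(device).eval()                       # teachers stay frozen
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)

    domains = list(target_loaders)
    iters = {d: itertools.cycle(target_loaders[d]) for d in domains}

    for step in range(iterations):
        domain = domains[step % len(domains)]           # alternate over domains
        images, _ = next(iters[domain])                 # unlabeled target batch
        images = images.to(device)

        with torch.no_grad():
            t_feat = teachers[domain](images)           # teacher embedding
        s_feat = student(images)                        # student embedding

        # Assumed distillation objective: match L2-normalized embeddings.
        loss = F.mse_loss(F.normalize(s_feat, dim=1), F.normalize(t_feat, dim=1))

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return student
```

In this setup only the compact student is kept for inference, so the runtime cost does not grow with the number of target domains; the larger teachers are used only during adaptation.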
Related papers
- Subject-Based Domain Adaptation for Facial Expression Recognition [51.10374151948157]
Adapting a deep learning model to a specific target individual is a challenging task in facial expression recognition (FER).
This paper introduces a new multi-source domain adaptation (MSDA) method for subject-based domain adaptation in FER.
It efficiently leverages information from multiple source subjects to adapt a deep FER model to a single target individual.
arXiv Detail & Related papers (2023-12-09T18:40:37Z)
- Informative Data Mining for One-Shot Cross-Domain Semantic Segmentation [84.82153655786183]
We propose a novel framework called Informative Data Mining (IDM) to enable efficient one-shot domain adaptation for semantic segmentation.
IDM provides an uncertainty-based selection criterion to identify the most informative samples, which facilitates quick adaptation and reduces redundant training; a generic sketch of this kind of uncertainty-based selection appears after this list.
Our approach outperforms existing methods and achieves a new state-of-the-art one-shot performance of 56.7%/55.4% on the GTA5/SYNTHIA to Cityscapes adaptation tasks.
arXiv Detail & Related papers (2023-09-25T15:56:01Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Unsupervised and self-adaptative techniques for cross-domain person re-identification [82.54691433502335]
Person Re-Identification (ReID) across non-overlapping cameras is a challenging task.
Unsupervised Domain Adaptation (UDA) is a promising alternative, as it adapts the feature learning of a model trained on a source domain to a target domain without identity-label annotation.
In this paper, we propose a novel UDA-based ReID method that takes advantage of triplets of samples created by a new offline strategy.
arXiv Detail & Related papers (2021-03-21T23:58:39Z)
- Adapt Everywhere: Unsupervised Adaptation of Point-Clouds and Entropy Minimisation for Multi-modal Cardiac Image Segmentation [10.417009344120917]
We present a novel UDA method for multi-modal cardiac image segmentation.
The proposed method is based on adversarial learning and adapts network features between source and target domain in different spaces.
We validated our method on two cardiac datasets by adapting from the annotated source domain to the unannotated target domain.
arXiv Detail & Related papers (2021-03-15T08:59:44Z)
- Knowledge Distillation Methods for Efficient Unsupervised Adaptation Across Multiple Domains [13.464493273131591]
We propose a progressive KD approach for unsupervised single-target DA (STDA) and multi-target DA (MTDA) of CNNs.
Our proposed approach is compared against state-of-the-art methods for compression and STDA of CNNs on the Office31 and ImageClef-DA image classification datasets.
arXiv Detail & Related papers (2021-01-18T19:53:16Z)
- Multi-Domain Adversarial Feature Generalization for Person Re-Identification [52.835955258959785]
We propose a multi-dataset feature generalization network (MMFA-AAE).
It is capable of learning a universal domain-invariant feature representation from multiple labeled datasets and generalizing it to 'unseen' camera systems.
It also surpasses many state-of-the-art supervised methods and unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2020-11-25T08:03:15Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
- Unsupervised Multi-Target Domain Adaptation Through Knowledge Distillation [14.088776449829345]
Unsupervised domain adaptation (UDA) seeks to alleviate the problem of domain shift between the distributions of labeled source data and unlabeled target data.
In this paper, we propose a novel unsupervised MTDA approach to train a CNN that can generalize well across multiple target domains.
arXiv Detail & Related papers (2020-07-14T14:59:45Z)
- Joint Progressive Knowledge Distillation and Unsupervised Domain Adaptation [15.115086812609182]
We propose an unexplored direction -- the joint optimization of CNNs to provide a compressed model that is adapted to perform well for a given target domain.
Our method is compared against state-of-the-art compression and UDA techniques, using two popular classification datasets for UDA -- Office31 and ImageClef-DA.
arXiv Detail & Related papers (2020-05-16T01:07:03Z)
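Code sketch (illustrative). As referenced in the Informative Data Mining entry above, uncertainty-based selection of informative samples is commonly implemented by ranking unlabeled samples by the entropy of the model's predictions. The sketch below is a generic illustration of that idea under assumed names (model, unlabeled_loader, per-sample indices); it is not the IDM criterion from the cited paper.

```python
# Generic sketch of entropy-based selection of informative unlabeled samples,
# in the spirit of the uncertainty-based criterion mentioned in the Informative
# Data Mining entry above. Assumed names and loader format; not the paper's code.
import torch
import torch.nn.functional as F


@torch.no_grad()
def select_informative(model, unlabeled_loader, k=100, device="cuda"):
    """Return dataset indices of the k samples with the highest predictive entropy.

    Assumes `unlabeled_loader` yields (images, sample_indices) batches and that
    `model(images)` returns class logits of shape (batch, num_classes).
    """
    model = model.to(device).eval()
    all_entropy, all_idx = [], []
    for images, idx in unlabeled_loader:
        logits = model(images.to(device))
        probs = F.softmax(logits, dim=1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
        all_entropy.append(entropy.cpu())
        all_idx.append(idx)

    all_entropy = torch.cat(all_entropy)
    all_idx = torch.cat(all_idx)
    top = all_entropy.topk(min(k, all_entropy.numel())).indices
    return all_idx[top]                 # most uncertain = most "informative"
```

In a one-shot adaptation setting, the selected samples would then be prioritized for adaptation or pseudo-labeling; the exact criterion and training schedule are specific to the cited paper.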
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.