Source-Guided Similarity Preservation for Online Person Re-Identification
- URL: http://arxiv.org/abs/2402.15206v1
- Date: Fri, 23 Feb 2024 09:07:20 GMT
- Title: Source-Guided Similarity Preservation for Online Person Re-Identification
- Authors: Hamza Rami, Jhony H. Giraldo, Nicolas Winckler, Stéphane Lathuilière
- Abstract summary: Online Unsupervised Domain Adaptation (OUDA) is the task of continuously adapting a model trained on a well-annotated source domain dataset to a target domain observed as a data stream.
In OUDA, person Re-ID models face two main challenges: catastrophic forgetting and domain shift.
We propose a new Source-guided Similarity Preservation (S2P) framework to alleviate these two problems.
- Score: 3.655597435084387
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online Unsupervised Domain Adaptation (OUDA) for person Re-Identification
(Re-ID) is the task of continuously adapting a model trained on a
well-annotated source domain dataset to a target domain observed as a data
stream. In OUDA, person Re-ID models face two main challenges: catastrophic
forgetting and domain shift. In this work, we propose a new Source-guided
Similarity Preservation (S2P) framework to alleviate these two problems. Our
framework is based on the extraction of a support set composed of source images
that maximizes the similarity with the target data. This support set is used to
identify feature similarities that must be preserved during the learning
process. S2P can incorporate multiple existing UDA methods to mitigate
catastrophic forgetting. Our experiments show that S2P outperforms previous
state-of-the-art methods on multiple real-to-real and synthetic-to-real
challenging OUDA benchmarks.
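The abstract's support-set idea, selecting source images that maximize similarity with the target data, can be sketched as follows. This is an illustrative assumption, not the paper's actual procedure: the function name `select_support_set`, the feature shapes, and the max-similarity scoring rule are all hypothetical choices for the sketch.

```python
import numpy as np

def select_support_set(source_feats, target_feats, k=4):
    """Hypothetical sketch: pick the k source images whose features
    are most similar (cosine) to the current target stream batch."""
    # L2-normalize features so dot products are cosine similarities
    s = source_feats / np.linalg.norm(source_feats, axis=1, keepdims=True)
    t = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    sim = s @ t.T                 # (n_source, n_target) similarity matrix
    # Score each source image by its best match among target images
    scores = sim.max(axis=1)
    # Keep the k highest-scoring source images as the support set
    return np.argsort(-scores)[:k]

rng = np.random.default_rng(0)
source = rng.normal(size=(10, 8))   # 10 source images, 8-d features
target = rng.normal(size=(5, 8))    # 5 target-stream images
idx = select_support_set(source, target, k=4)
print(len(idx))  # 4
```

In the paper, feature similarities computed on such a support set are then preserved during adaptation to counter catastrophic forgetting; the scoring rule above is only one plausible way to instantiate "maximizes the similarity with the target data".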
Related papers
- DRIVE: Dual-Robustness via Information Variability and Entropic Consistency in Source-Free Unsupervised Domain Adaptation [10.127634263641877]
Adapting machine learning models to new domains without labeled data is a critical challenge in applications like medical imaging, autonomous driving, and remote sensing.
This task, known as Source-Free Unsupervised Domain Adaptation (SFUDA), involves adapting a pre-trained model to a target domain using only unlabeled target data.
Existing SFUDA methods often rely on single-model architectures, struggling with uncertainty and variability in the target domain.
We propose DRIVE, a novel SFUDA framework leveraging a dual-model architecture. The two models, with identical weights, work in parallel to capture diverse target domain characteristics.
arXiv Detail & Related papers (2024-11-24T20:35:04Z)
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves a better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- CAusal and collaborative proxy-tasKs lEarning for Semi-Supervised Domain Adaptation [20.589323508870592]
Semi-supervised domain adaptation (SSDA) adapts a learner to a new domain by effectively utilizing source domain data and a few labeled target samples.
We show that the proposed model significantly outperforms SOTA methods in terms of effectiveness and generalisability on SSDA datasets.
arXiv Detail & Related papers (2023-03-30T16:48:28Z)
- Unsupervised Domain Adaptive Person Re-id with Local-enhance and Prototype Dictionary Learning [0.0]
We propose Prototype Dictionary Learning for person re-ID.
It exploits both source-domain and target-domain data in a single training stage.
It avoids the problems of class collision and inconsistent update intensity.
arXiv Detail & Related papers (2022-01-11T06:28:32Z)
- Lifelong Unsupervised Domain Adaptive Person Re-identification with Coordinated Anti-forgetting and Adaptation [127.6168183074427]
We propose a new task, Lifelong Unsupervised Domain Adaptive (LUDA) person ReID.
This is challenging because it requires the model to continuously adapt to unlabeled data of the target environments.
We design an effective scheme for this task, dubbed CLUDA-ReID, where the anti-forgetting is harmoniously coordinated with the adaptation.
arXiv Detail & Related papers (2021-12-13T13:19:45Z)
- Unsupervised Domain Adaptive Learning via Synthetic Data for Person Re-identification [101.1886788396803]
Person re-identification (re-ID) has gained more and more attention due to its widespread applications in video surveillance.
Unfortunately, the mainstream deep learning methods still need a large quantity of labeled data to train models.
In this paper, we develop a data collector to automatically generate synthetic re-ID samples in a computer game, and construct a data labeler to simultaneously annotate them.
arXiv Detail & Related papers (2021-09-12T15:51:41Z)
- Dual-Stream Reciprocal Disentanglement Learning for Domain Adaption Person Re-Identification [44.80508095481811]
We propose a novel method named Dual-stream Reciprocal Disentanglement Learning (DRDL), which is quite efficient in learning domain-invariant features.
In DRDL, two encoders are first constructed for id-related and id-unrelated feature extractions, which are respectively measured by their associated classifiers.
Our proposed method is free from image generation, which not only reduces the computational complexity remarkably, but also removes redundant information from id-related features.
arXiv Detail & Related papers (2021-06-26T03:05:23Z)
- Source-Free Open Compound Domain Adaptation in Semantic Segmentation [99.82890571842603]
In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model.
We propose the Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles in the feature-level.
Our method produces state-of-the-art results on the C-Driving dataset.
arXiv Detail & Related papers (2021-06-07T08:38:41Z)
- Unsupervised Multi-Source Domain Adaptation for Person Re-Identification [39.817734080890695]
Unsupervised domain adaptation (UDA) methods for person re-identification (re-ID) aim at transferring re-ID knowledge from labeled source data to unlabeled target data.
We introduce the multi-source concept into UDA person re-ID field, where multiple source datasets are used during training.
The proposed method outperforms state-of-the-art UDA person re-ID methods by a large margin, and even achieves comparable performance to the supervised approaches without any post-processing techniques.
arXiv Detail & Related papers (2021-04-27T03:33:35Z)
- Bilevel Online Adaptation for Out-of-Domain Human Mesh Reconstruction [94.25865526414717]
This paper considers a new problem of adapting a pre-trained model of human mesh reconstruction to out-of-domain streaming videos.
We propose Bilevel Online Adaptation (BOA), which divides the overall multi-objective optimization into two steps, weight probe and weight update, within each training iteration.
We demonstrate that BOA leads to state-of-the-art results on two human mesh reconstruction benchmarks.
arXiv Detail & Related papers (2021-03-30T15:47:58Z)
- Structured Domain Adaptation with Online Relation Regularization for Unsupervised Person Re-ID [62.90727103061876]
Unsupervised domain adaptation (UDA) aims at adapting the model trained on a labeled source-domain dataset to an unlabeled target-domain dataset.
We propose an end-to-end structured domain adaptation framework with an online relation-consistency regularization term.
Our proposed framework is shown to achieve state-of-the-art performance on multiple UDA tasks of person re-ID.
arXiv Detail & Related papers (2020-03-14T14:45:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.