Domain Generalized Person Re-Identification via Cross-Domain Episodic
Learning
- URL: http://arxiv.org/abs/2010.09561v1
- Date: Mon, 19 Oct 2020 14:42:29 GMT
- Title: Domain Generalized Person Re-Identification via Cross-Domain Episodic
Learning
- Authors: Ci-Siang Lin, Yuan-Chia Cheng, Yu-Chiang Frank Wang
- Abstract summary: We present an episodic learning scheme which advances meta learning strategies to exploit the observed source-domain labeled data.
Our experiments on four benchmark datasets confirm the superiority of our method over the state-of-the-arts.
- Score: 31.17248105464821
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aiming at recognizing images of the same person across distinct camera views,
person re-identification (re-ID) has been an active research topic in
computer vision. Most existing re-ID works require collecting a large amount
of labeled image data from the scenes of interest. When the data to be
recognized differ from the source-domain training data, a number of
domain adaptation approaches have been proposed. Nevertheless, one still needs
to collect labeled or unlabeled target-domain data during training. In this
paper, we tackle an even more challenging and practical setting, domain
generalized (DG) person re-ID. That is, while a number of labeled source-domain
datasets are available, we do not have access to any target-domain training
data. In order to learn domain-invariant features without knowing the target
domain of interest, we present an episodic learning scheme that advances
meta-learning strategies to exploit the observed source-domain labeled data. The
learned features exhibit sufficient domain invariance while not
overfitting to the source-domain data or ID labels. Our experiments on four
benchmark datasets confirm the superiority of our method over
state-of-the-art approaches.
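
To make the episodic idea concrete, below is a minimal PyTorch sketch of training over several labeled source domains, holding one domain out as a meta-test domain per episode so that updates are rewarded for features that also discriminate identities in a domain not used for meta-train. The ReIDNet backbone, the shared ID label space across domains, the plain cross-entropy losses, and the single joint update (rather than an inner/outer meta-update) are all simplifying assumptions for illustration; the paper's actual architecture, loss terms, and episode construction differ.

```python
# Illustrative sketch of cross-domain episodic training over several labeled
# source domains. The backbone, losses, and domain-split details are
# stand-ins, not the exact components used in the paper.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F


class ReIDNet(nn.Module):
    """Toy feature extractor + ID classifier (placeholder for a re-ID backbone)."""

    def __init__(self, in_dim=128, feat_dim=64, num_ids=100):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, num_ids)

    def forward(self, x):
        feat = self.encoder(x)
        return feat, self.classifier(feat)


def episode_step(model, optimizer, source_batches):
    """One episode: hold out one source domain as meta-test, train on the rest.

    `source_batches` maps a domain name to an (images, id_labels) batch.
    For simplicity the meta-test loss is added to the meta-train loss in a
    single joint update, rather than after an inner-loop update.
    """
    domains = list(source_batches)
    meta_test = random.choice(domains)
    meta_train = [d for d in domains if d != meta_test]

    # ID classification loss on the meta-train domains.
    train_loss = 0.0
    for d in meta_train:
        x, y = source_batches[d]
        _, logits = model(x)
        train_loss = train_loss + F.cross_entropy(logits, y)

    # Episodic loss on the held-out (meta-test) domain encourages features
    # that generalize beyond the domains used for meta-train.
    x, y = source_batches[meta_test]
    _, logits = model(x)
    test_loss = F.cross_entropy(logits, y)

    loss = train_loss + test_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = ReIDNet()
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    # Dummy batches standing in for labeled source re-ID datasets
    # (hypothetical domain names; a shared ID label space is assumed).
    batches = {name: (torch.randn(32, 128), torch.randint(0, 100, (32,)))
               for name in ["market", "duke", "cuhk"]}
    for step in range(5):
        print(f"step {step}: loss = {episode_step(model, opt, batches):.3f}")
```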
Related papers
- SiamSeg: Self-Training with Contrastive Learning for Unsupervised Domain Adaptation Semantic Segmentation in Remote Sensing [14.007392647145448]
UDA enables models to learn from unlabeled target domain data while training on labeled source domain data.
We propose integrating contrastive learning into UDA, enhancing the model's capacity to capture semantic information.
Our SiamSeg method outperforms existing approaches, achieving state-of-the-art results.
arXiv Detail & Related papers (2024-10-17T11:59:39Z)
- WIDIn: Wording Image for Domain-Invariant Representation in Single-Source Domain Generalization [63.98650220772378]
We present WIDIn, Wording Images for Domain-Invariant representation, to disentangle discriminative visual representation.
We first estimate the language embedding with fine-grained alignment, which can be used to adaptively identify and then remove the domain-specific counterpart.
We show that WIDIn can be applied to both pretrained vision-language models like CLIP, and separately trained uni-modal models like MoCo and BERT.
arXiv Detail & Related papers (2024-05-28T17:46:27Z)
- CDFSL-V: Cross-Domain Few-Shot Learning for Videos [58.37446811360741]
Few-shot video action recognition is an effective approach to recognizing new categories with only a few labeled examples.
Existing methods in video action recognition rely on large labeled datasets from the same domain.
We propose a novel cross-domain few-shot video action recognition method that leverages self-supervised learning and curriculum learning.
arXiv Detail & Related papers (2023-09-07T19:44:27Z)
- A Survey of Unsupervised Domain Adaptation for Visual Recognition [2.8935588665357077]
Domain Adaptation (DA) aims to mitigate the domain shift problem when transferring knowledge from one domain to another.
Unsupervised DA (UDA) deals with a labeled source domain and an unlabeled target domain.
arXiv Detail & Related papers (2021-12-13T15:55:23Z)
- Domain adaptation for person re-identification on new unlabeled data using AlignedReID++ [0.0]
Domain adaptation is done by using pseudo-labels generated using an unsupervised learning strategy.
Our results show that domain adaptation techniques improve the performance of the CNN when applied in the target domain.
arXiv Detail & Related papers (2021-06-29T19:58:04Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Unsupervised Multi-Source Domain Adaptation for Person Re-Identification [39.817734080890695]
Unsupervised domain adaptation (UDA) methods for person re-identification (re-ID) aim at transferring re-ID knowledge from labeled source data to unlabeled target data.
We introduce the multi-source concept into the UDA person re-ID field, where multiple source datasets are used during training.
The proposed method outperforms state-of-the-art UDA person re-ID methods by a large margin, and even achieves comparable performance to the supervised approaches without any post-processing techniques.
arXiv Detail & Related papers (2021-04-27T03:33:35Z)
- Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training [67.71228426496013]
We show that using target domain data during pre-training leads to large performance improvements across a variety of setups.
We find that pre-training on multiple domains improves performance generalization on domains not seen during training.
arXiv Detail & Related papers (2021-04-02T12:53:15Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts target accuracy in the new target domain with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)