TAL: Two-stream Adaptive Learning for Generalizable Person
Re-identification
- URL: http://arxiv.org/abs/2111.14290v1
- Date: Mon, 29 Nov 2021 01:27:42 GMT
- Title: TAL: Two-stream Adaptive Learning for Generalizable Person
Re-identification
- Authors: Yichao Yan, Junjie Li, Shengcai Liao, Jie Qin, Bingbing Ni, Xiaokang
Yang
- Abstract summary: We argue that both domain-specific and domain-invariant features are crucial for improving the generalization ability of re-id models.
We design a framework, named two-stream adaptive learning (TAL), to simultaneously model these two kinds of information.
Our framework can be applied to both single-source and multi-source domain generalization tasks.
- Score: 115.31432027711202
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalizable person re-identification aims to apply a trained model
to unseen domains. Prior works either combine the data in all the training
domains to capture domain-invariant features, or adopt a mixture of experts to
investigate domain-specific information. In this work, we argue that both
domain-specific and domain-invariant features are crucial for improving the
generalization ability of re-id models. To this end, we design a novel
framework, which we name two-stream adaptive learning (TAL), to simultaneously
model these two kinds of information. Specifically, a domain-specific stream is
proposed to capture training domain statistics with batch normalization (BN)
parameters, while an adaptive matching layer is designed to dynamically
aggregate domain-level information. In the meantime, we design an adaptive BN
layer in the domain-invariant stream, to approximate the statistics of various
unseen domains. These two streams work adaptively and collaboratively to learn
generalizable re-id features. Our framework can be applied to both
single-source and multi-source domain generalization tasks, where experimental
results show that our framework notably outperforms the state-of-the-art
methods.
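
The abstract outlines two components: a domain-specific stream that keeps separate batch normalization (BN) statistics per training domain and aggregates them through an adaptive matching layer, and a domain-invariant stream whose adaptive BN layer approximates the statistics of unseen domains. Below is a minimal PyTorch sketch of how such a two-stream head could be wired up. The module names, the softmax-based matching weights, the learnable mixing coefficient, and the simple additive fusion are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a two-stream feature head in the spirit of TAL.
# NOTE: module names, the matching/weighting scheme, the fusion rule, and all
# hyper-parameters are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DomainSpecificStream(nn.Module):
    """One BN branch per source domain plus an (assumed) matching layer
    that softly aggregates the per-domain outputs for each sample."""

    def __init__(self, dim: int, num_domains: int):
        super().__init__()
        self.domain_bns = nn.ModuleList(
            [nn.BatchNorm1d(dim) for _ in range(num_domains)]
        )
        # Assumed matching layer: predicts per-sample weights over domains.
        self.matching = nn.Linear(dim, num_domains)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim) global features from a shared backbone.
        branch_outs = torch.stack([bn(x) for bn in self.domain_bns], dim=1)  # (B, D, dim)
        weights = F.softmax(self.matching(x), dim=-1)                        # (B, D)
        return (weights.unsqueeze(-1) * branch_outs).sum(dim=1)              # (B, dim)


class AdaptiveBN(nn.Module):
    """Domain-invariant stream: blends current-batch statistics with running
    statistics via a learnable coefficient, as a stand-in for 'approximating
    the statistics of various unseen domains'."""

    def __init__(self, dim: int):
        super().__init__()
        self.bn_batch = nn.BatchNorm1d(dim, track_running_stats=False)  # batch stats only
        self.bn_running = nn.BatchNorm1d(dim)                           # accumulated stats
        self.alpha = nn.Parameter(torch.tensor(0.5))                    # learnable mix

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = torch.sigmoid(self.alpha)
        return a * self.bn_batch(x) + (1.0 - a) * self.bn_running(x)


class TwoStreamHead(nn.Module):
    """Combines the two streams into a single re-id embedding
    (simple addition here; the paper's actual fusion may differ)."""

    def __init__(self, dim: int, num_domains: int):
        super().__init__()
        self.specific = DomainSpecificStream(dim, num_domains)
        self.invariant = AdaptiveBN(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.specific(x) + self.invariant(x)


if __name__ == "__main__":
    head = TwoStreamHead(dim=2048, num_domains=3)
    feats = torch.randn(8, 2048)
    print(head(feats).shape)  # torch.Size([8, 2048])
```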
Related papers
- Learning to Generalize Unseen Domains via Multi-Source Meta Learning for Text Classification [71.08024880298613]
We study multi-source domain generalization for text classification.
We propose a framework to use multiple seen domains to train a model that can achieve high accuracy in an unseen domain.
arXiv Detail & Related papers (2024-09-20T07:46:21Z)
- Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds the state-of-the-art performance without the need for domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z)
- Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels to generate instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
arXiv Detail & Related papers (2022-03-09T20:05:54Z)
- Source-Free Open Compound Domain Adaptation in Semantic Segmentation [99.82890571842603]
In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model.
We propose Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles at the feature level.
Our method produces state-of-the-art results on the C-Driving dataset.
arXiv Detail & Related papers (2021-06-07T08:38:41Z)
- Generalizable Person Re-identification with Relevance-aware Mixture of Experts [45.13716166680772]
We propose a novel method called the relevance-aware mixture of experts (RaMoE).
RaMoE uses an effective voting-based mixture mechanism to dynamically leverage source domains' diverse characteristics to improve the model's generalization.
Considering the target domains' invisibility during training, we propose a novel learning-to-learn algorithm combined with our relation alignment loss to update the voting network.
arXiv Detail & Related papers (2021-05-19T14:19:34Z)
- Adaptive Domain-Specific Normalization for Generalizable Person Re-Identification [81.30327016286009]
We propose a novel adaptive domain-specific normalization approach (AdsNorm) for generalizable person Re-ID.
arXiv Detail & Related papers (2021-05-07T02:54:55Z)
- Robust Domain-Free Domain Generalization with Class-aware Alignment [4.442096198968069]
Domain-Free Domain Generalization (DFDG) is a model-agnostic method to achieve better generalization performance on the unseen test domain.
DFDG uses novel strategies to learn domain-invariant class-discriminative features.
It obtains competitive performance on public time-series sensor and image classification datasets.
arXiv Detail & Related papers (2021-02-17T17:46:06Z)
- Semi-Supervised Disentangled Framework for Transferable Named Entity Recognition [27.472171967604602]
We present a semi-supervised framework for transferable NER, which disentangles the domain-invariant latent variables and domain-specific latent variables.
Our model obtains state-of-the-art performance on cross-domain and cross-lingual NER benchmark datasets.
arXiv Detail & Related papers (2020-12-22T02:55:04Z)