Style Interleaved Learning for Generalizable Person Re-identification
- URL: http://arxiv.org/abs/2207.03132v3
- Date: Thu, 8 Jun 2023 01:32:38 GMT
- Title: Style Interleaved Learning for Generalizable Person Re-identification
- Authors: Wentao Tan and Changxing Ding and Pengfei Wang and Mingming Gong and
Kui Jia
- Abstract summary: We propose a novel style interleaved learning (IL) framework for DG ReID training.
Unlike conventional learning strategies, IL incorporates two forward propagations and one backward propagation for each iteration.
We show that our model consistently outperforms state-of-the-art methods on large-scale benchmarks for DG ReID.
- Score: 69.03539634477637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization (DG) for person re-identification (ReID) is a
challenging problem, as access to target domain data is not permitted during
the training process. Most existing DG ReID methods update the feature
extractor and classifier parameters based on the same features. This common
practice causes the model to overfit to existing feature styles in the source
domain, resulting in sub-optimal generalization ability on target domains. To
solve this problem, we propose a novel style interleaved learning (IL)
framework. Unlike conventional learning strategies, IL incorporates two forward
propagations and one backward propagation for each iteration. We employ the
features of interleaved styles to update the feature extractor and classifiers
using different forward propagations, which helps to prevent the model from
overfitting to certain domain styles. To generate interleaved feature styles,
we further propose a new feature stylization approach. It produces a wide range
of meaningful styles that are both different and independent from the original
styles in the source domain, which caters to the IL methodology. Extensive
experimental results show that our model not only consistently outperforms
state-of-the-art methods on large-scale benchmarks for DG ReID, but also has
clear advantages in computational efficiency. The code is available at
https://github.com/WentaoTan/Interleaved-Learning.
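The abstract's feature stylization approach generates novel styles by manipulating instance-level feature statistics. A minimal sketch of this family of techniques, AdaIN-style re-statistics where a feature map's channel mean/std are stripped and replaced with sampled target statistics, is shown below. This is an illustrative building block, not the paper's exact formulation; in the IL framework, original-style and stylized features would then drive separate forward propagations for the classifier and feature extractor updates.

```python
import numpy as np

def stylize(features, new_mean, new_std, eps=1e-6):
    """Replace the instance-level style statistics (per-channel mean/std)
    of a feature map with sampled target statistics, AdaIN-style.

    features: (C, H, W) feature map
    new_mean, new_std: (C,) target style statistics
    """
    mu = features.mean(axis=(1, 2), keepdims=True)           # (C, 1, 1)
    sigma = features.std(axis=(1, 2), keepdims=True) + eps   # (C, 1, 1)
    normalized = (features - mu) / sigma                     # strip original style
    # Re-inject the sampled "interleaved" style.
    return normalized * new_std[:, None, None] + new_mean[:, None, None]

rng = np.random.default_rng(0)
feat = rng.normal(loc=2.0, scale=3.0, size=(4, 8, 8))   # toy feature map
tgt_mean = rng.normal(size=4)                            # sampled style mean
tgt_std = np.abs(rng.normal(size=4)) + 0.5               # sampled style std
styled = stylize(feat, tgt_mean, tgt_std)
```

After stylization, the output's per-channel statistics match the sampled target style, while the normalized content of the feature map is preserved.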
Related papers
- Generalize or Detect? Towards Robust Semantic Segmentation Under Multiple Distribution Shifts [56.57141696245328]
In open-world scenarios, where both novel classes and domains may exist, an ideal segmentation model should detect anomaly classes for safety.
Existing methods often struggle to distinguish between domain-level and semantic-level distribution shifts.
arXiv Detail & Related papers (2024-11-06T11:03:02Z)
- Enhancing Domain Adaptation through Prompt Gradient Alignment [16.618313165111793]
We develop a line of work based on prompt learning to learn both domain-invariant and domain-specific features.
We cast UDA as a multiple-objective optimization problem in which each objective is represented by a domain loss.
Our method consistently surpasses other prompt-based baselines by a large margin on different UDA benchmarks.
arXiv Detail & Related papers (2024-06-13T17:40:15Z)
- StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization [85.18995948334592]
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain.
State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data.
We propose StyDeSty, which explicitly accounts for the alignment of the source and pseudo domains in the process of data augmentation.
arXiv Detail & Related papers (2024-06-01T02:41:34Z)
- Open Domain Generalization with a Single Network by Regularization Exploiting Pre-trained Features [37.518025833882334]
Open Domain Generalization (ODG) is a challenging task as it deals with distribution shifts and category shifts.
Previous work has used multiple source-specific networks, which involve a high cost.
This paper proposes a method that can handle ODG using only a single network.
arXiv Detail & Related papers (2023-12-08T16:22:10Z)
- DGInStyle: Domain-Generalizable Semantic Segmentation with Image Diffusion Models and Stylized Semantic Control [68.14798033899955]
Large, pretrained latent diffusion models (LDMs) have demonstrated an extraordinary ability to generate creative content.
However, are they usable as large-scale data generators, e.g., to improve tasks in the perception stack, like semantic segmentation?
We investigate this question in the context of autonomous driving, and answer it with a resounding "yes".
arXiv Detail & Related papers (2023-12-05T18:34:12Z)
- Domain Generalization with Correlated Style Uncertainty [4.844240089234632]
Style augmentation is a strong DG method taking advantage of instance-specific feature statistics.
We introduce Correlated Style Uncertainty (CSU), surpassing the limitations of linear generalization in style statistic space.
Our method's efficacy is established through extensive experimentation on diverse cross-domain computer vision and medical imaging classification tasks.
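CSU extends probabilistic style augmentation beyond uncorrelated perturbations of feature statistics. For illustration, the simpler uncorrelated variant that CSU builds on, which treats each channel's mean and std as random variables whose uncertainty is estimated across the batch, can be sketched as follows (names and formulation here are illustrative, not the paper's):

```python
import numpy as np

def perturb_style(feats, rng, eps=1e-6):
    """Probabilistic style augmentation (uncorrelated variant).

    Each channel's instance mean/std is treated as a random variable;
    its spread is estimated over the batch and used to resample new
    style statistics. CSU additionally models correlations between
    channels, which this sketch omits.

    feats: (N, C, H, W) batch of feature maps
    """
    mu = feats.mean(axis=(2, 3))                    # (N, C) instance means
    sigma = feats.std(axis=(2, 3)) + eps            # (N, C) instance stds
    # Uncertainty of the statistics themselves, estimated over the batch.
    mu_scale = mu.std(axis=0, keepdims=True)        # (1, C)
    sigma_scale = sigma.std(axis=0, keepdims=True)  # (1, C)
    new_mu = mu + rng.normal(size=mu.shape) * mu_scale
    new_sigma = sigma + rng.normal(size=sigma.shape) * sigma_scale
    normalized = (feats - mu[..., None, None]) / sigma[..., None, None]
    return normalized * new_sigma[..., None, None] + new_mu[..., None, None]

rng = np.random.default_rng(1)
batch = rng.normal(size=(8, 4, 6, 6))
augmented = perturb_style(batch, rng)
```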
arXiv Detail & Related papers (2022-12-20T01:59:27Z)
- Style Variable and Irrelevant Learning for Generalizable Person Re-identification [2.9350185599710814]
We propose a Style Variable and Irrelevant Learning (SVIL) method to eliminate the effect of style factors on the model.
The SJM module can enrich the style diversity of the specific source domain and reduce the style differences of various source domains.
Our method outperforms the state-of-the-art methods on DG-ReID benchmarks by a large margin.
arXiv Detail & Related papers (2022-09-12T13:31:43Z)
- Test-time Fourier Style Calibration for Domain Generalization [47.314071215317995]
We argue that reducing the gap between source and target styles can boost models' generalizability.
To solve the dilemma of having no access to the target domain during training, we introduce Test-time Fourier Style Calibration (TF-Cal).
We present an effective technique to Augment Amplitude Features (AAF) to complement TF-Cal.
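Fourier-based style methods like TF-Cal rest on the observation that an image's amplitude spectrum carries style while its phase carries content. A generic amplitude-swap sketch of this family of techniques is shown below; the paper's actual procedure calibrates amplitude features at test time rather than swapping them between image pairs, so this is only the underlying building block:

```python
import numpy as np

def swap_amplitude(content_img, style_img):
    """Fourier-domain style transfer: keep content_img's phase
    (content) but take style_img's amplitude spectrum (style)."""
    fft_c = np.fft.fft2(content_img)
    fft_s = np.fft.fft2(style_img)
    phase_c = np.angle(fft_c)      # content information
    amp_s = np.abs(fft_s)          # style information
    mixed = amp_s * np.exp(1j * phase_c)
    return np.real(np.fft.ifft2(mixed))

rng = np.random.default_rng(2)
x = rng.normal(size=(16, 16))
```

A useful sanity check: swapping an image's amplitude with its own reconstructs the original image.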
arXiv Detail & Related papers (2022-05-13T02:43:03Z)
- Learning to Generalize Unseen Domains via Memory-based Multi-Source Meta-Learning for Person Re-Identification [59.326456778057384]
We propose the Memory-based Multi-Source Meta-Learning framework to train a generalizable model for unseen domains.
We also present a meta batch normalization layer (MetaBN) to diversify meta-test features.
Experiments demonstrate that our M^3L can effectively enhance the generalization ability of the model for unseen domains.
arXiv Detail & Related papers (2020-12-01T11:38:16Z)
- Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID [55.21702895051287]
Domain adaptive object re-ID aims to transfer the learned knowledge from the labeled source domain to the unlabeled target domain.
We propose a novel self-paced contrastive learning framework with hybrid memory.
Our method outperforms state-of-the-arts on multiple domain adaptation tasks of object re-ID.
arXiv Detail & Related papers (2020-06-04T09:12:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.