Style Variable and Irrelevant Learning for Generalizable Person
Re-identification
- URL: http://arxiv.org/abs/2209.05235v1
- Date: Mon, 12 Sep 2022 13:31:43 GMT
- Title: Style Variable and Irrelevant Learning for Generalizable Person
Re-identification
- Authors: Haobo Chen, Chuyang Zhao, Kai Tu, Junru Chen, Yadong Li, Boxun Li
- Abstract summary: We propose a Style Variable and Irrelevant Learning (SVIL) method to eliminate the effect of style factors on the model.
The SJM module enriches the style diversity within each source domain and reduces the style differences across source domains.
Our method outperforms the state-of-the-art methods on DG-ReID benchmarks by a large margin.
- Score: 2.9350185599710814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Because supervised person re-identification (ReID) models perform
poorly on unseen domains, Domain Generalization (DG) person ReID has recently
attracted a lot of attention; it aims to learn a domain-insensitive model that
can resist the influence of domain bias. In this paper, we first verify through
an experiment that style factors are a vital part of domain bias. Based on this
conclusion, we propose a Style Variable and Irrelevant Learning (SVIL) method
to eliminate the effect of style factors on the model. Specifically, we design
a Style Jitter Module (SJM) in SVIL. The SJM module enriches the style
diversity within each source domain and reduces the style differences across
source domains. This leads the model to focus on identity-relevant information
and to be insensitive to style changes. In addition, we combine the SJM module
with a meta-learning algorithm, maximizing the benefits of both and further
improving the generalization ability of the model. Note that our
SJM module is plug-and-play and inference cost-free. Extensive experiments
confirm the effectiveness of our SVIL, and our method outperforms
state-of-the-art methods on DG-ReID benchmarks by a large margin.
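The abstract includes no implementation, but the described behavior of the SJM (perturbing style while leaving identity content intact, at zero inference cost) can be illustrated with a short sketch. Everything below is an assumption for illustration, in the spirit of MixStyle-like feature-statistics augmentation, not the authors' code:

```python
import torch
import torch.nn as nn

class StyleJitter(nn.Module):
    """Hypothetical style-jitter layer (illustrative; not the official SJM).

    Treats per-channel feature statistics (mean/std) as a proxy for style
    and randomly perturbs them while keeping the normalized content fixed.
    At eval time it is the identity, matching the abstract's plug-and-play,
    inference cost-free claim.
    """

    def __init__(self, noise_std: float = 0.1, p: float = 0.5):
        super().__init__()
        self.noise_std = noise_std  # jitter strength (assumed value)
        self.p = p                  # probability of applying the jitter

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training or torch.rand(1).item() > self.p:
            return x  # identity at inference: no extra cost
        B, C = x.shape[:2]
        mu = x.mean(dim=(2, 3), keepdim=True)         # per-channel mean
        sig = x.std(dim=(2, 3), keepdim=True) + 1e-6  # per-channel std
        content = (x - mu) / sig                      # style-normalized content
        # Multiplicative noise around 1 yields jittered style statistics.
        alpha = 1 + self.noise_std * torch.randn(B, C, 1, 1, device=x.device)
        beta = 1 + self.noise_std * torch.randn(B, C, 1, 1, device=x.device)
        return content * (sig * alpha) + mu * beta    # re-style with new stats
```

Such a layer could be dropped between the stages of any ReID backbone, consistent with the plug-and-play claim; how SVIL actually schedules it inside the meta-learning algorithm is specified in the paper itself.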
Related papers
- StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization [85.18995948334592]
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain.
State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data.
We propose StyDeSty, which explicitly accounts for the alignment of the source and pseudo domains in the process of data augmentation.
arXiv Detail & Related papers (2024-06-01T02:41:34Z)
- Unbiased Faster R-CNN for Single-source Domain Generalized Object Detection [35.71100602593928]
We propose an Unbiased Faster R-CNN (UFR) for generalizable feature learning.
Specifically, we formulate object detection from a causal perspective and construct a Structural Causal Model (SCM) to analyze the data bias and feature bias in the task.
Experimental results on five scenes demonstrate the prominent generalization ability of our method, with an improvement of 3.9% mAP on the Night-Clear scene.
arXiv Detail & Related papers (2024-05-24T05:34:23Z)
- DPStyler: Dynamic PromptStyler for Source-Free Domain Generalization [43.67213274161226]
Source-Free Domain Generalization (SFDG) aims to develop a model that works for unseen target domains without relying on any source domain.
Research in SFDG primarily builds upon the existing knowledge of large-scale vision-language models.
We introduce Dynamic PromptStyler (DPStyler), comprising Style Generation and Style Removal modules.
arXiv Detail & Related papers (2024-03-25T12:31:01Z)
- HiCAST: Highly Customized Arbitrary Style Transfer with Adapter Enhanced Diffusion Models [84.12784265734238]
The goal of Arbitrary Style Transfer (AST) is to inject the artistic features of a style reference into a given image/video.
We propose HiCAST, which is capable of explicitly customizing the stylization results according to various sources of semantic clues.
A novel learning objective is leveraged for video diffusion model training, which significantly improves cross-frame temporal consistency.
arXiv Detail & Related papers (2024-01-11T12:26:23Z)
- Style-Hallucinated Dual Consistency Learning: A Unified Framework for Visual Domain Generalization [113.03189252044773]
We propose a unified framework, Style-HAllucinated Dual consistEncy learning (SHADE), to handle domain shift in various visual tasks.
Our versatile SHADE can significantly enhance the generalization in various visual recognition tasks, including image classification, semantic segmentation and object detection.
arXiv Detail & Related papers (2022-12-18T11:42:51Z)
- Adversarial Style Augmentation for Domain Generalized Urban-Scene Segmentation [120.96012935286913]
We propose a novel adversarial style augmentation (AdvStyle) approach, which can generate hard stylized images during training.
Experiments on two synthetic-to-real semantic segmentation benchmarks demonstrate that AdvStyle can significantly improve the model performance on unseen real domains.
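The AdvStyle summary leaves the mechanism implicit; one common reading of adversarial style augmentation, sketched below purely as an assumption (the function name, the single-step sign update, and the statistics-based style model are all illustrative), is to perturb image statistics in the direction that increases the task loss:

```python
import torch

def adversarial_style_step(x, model, loss_fn, targets, step_size=0.1):
    """One hypothetical adversarial style update (illustrative sketch only).

    Treats per-channel mean/std shifts of the input as learnable style
    parameters and takes one gradient-ascent step so the restyled image
    becomes harder for the current model.
    """
    mu = x.mean(dim=(2, 3), keepdim=True)
    sig = x.std(dim=(2, 3), keepdim=True) + 1e-6
    B, C = x.shape[:2]
    # Style parameters: scale/shift perturbations of the image statistics.
    alpha = torch.ones(B, C, 1, 1, device=x.device, requires_grad=True)
    beta = torch.zeros(B, C, 1, 1, device=x.device, requires_grad=True)
    x_styled = (x - mu) / sig * (sig * alpha) + (mu + beta)
    loss = loss_fn(model(x_styled), targets)
    g_alpha, g_beta = torch.autograd.grad(loss, [alpha, beta])
    with torch.no_grad():  # gradient *ascent*: make the style harder
        alpha_adv = alpha + step_size * g_alpha.sign()
        beta_adv = beta + step_size * g_beta.sign()
    return ((x - mu) / sig * (sig * alpha_adv) + (mu + beta_adv)).detach()
```

Training would then mix original and adversarially restyled images, exposing the model to hard styles rather than purely random ones.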
arXiv Detail & Related papers (2022-07-11T14:01:25Z)
- Style Interleaved Learning for Generalizable Person Re-identification [69.03539634477637]
We propose a novel style interleaved learning (IL) framework for DG ReID training.
Unlike conventional learning strategies, IL incorporates two forward propagations and one backward propagation for each iteration.
We show that our model consistently outperforms state-of-the-art methods on large-scale benchmarks for DG ReID.
arXiv Detail & Related papers (2022-07-07T07:41:32Z)
- Few-shot learning with improved local representations via bias rectify module [13.230636224045137]
We propose a Deep Bias Rectify Network (DBRN) to fully exploit the spatial information that exists in the structure of the feature representations.
The bias rectify module is able to focus on the features that are more discriminative for classification by assigning them different weights.
To make full use of the training data, we design a prototype augment mechanism that can make the prototypes generated from the support set more representative.
arXiv Detail & Related papers (2021-11-01T08:08:00Z)
- Style Normalization and Restitution for Generalizable Person Re-identification [89.482638433932]
We design a generalizable person ReID framework which trains a model on source domains yet is able to generalize well to unseen target domains.
We propose a simple yet effective Style Normalization and Restitution (SNR) module.
Our models empowered by the SNR modules significantly outperform the state-of-the-art domain generalization approaches on multiple widely-used person ReID benchmarks.
arXiv Detail & Related papers (2020-05-22T07:15:10Z)
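For contrast with the random jitter sketched earlier, the normalization-and-restitution idea of the SNR entry above can be caricatured as follows; the single attention gate and the module layout are simplifying assumptions (the actual SNR uses a dual restitution scheme with dedicated losses):

```python
import torch
import torch.nn as nn

class SimpleSNR(nn.Module):
    """Heavily simplified style normalization + restitution block (illustrative)."""

    def __init__(self, channels: int):
        super().__init__()
        # Instance Normalization strips instance-specific style statistics.
        self.instance_norm = nn.InstanceNorm2d(channels, affine=False)
        # A channel gate estimates which discarded information was
        # identity-relevant and should be restored.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_norm = self.instance_norm(x)             # style-normalized features
        residual = x - x_norm                      # what IN removed (style + some identity)
        restored = self.gate(residual) * residual  # keep the identity-relevant part
        return x_norm + restored
```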
This list is automatically generated from the titles and abstracts of the papers on this site.