Unity Style Transfer for Person Re-Identification
- URL: http://arxiv.org/abs/2003.02068v1
- Date: Wed, 4 Mar 2020 13:22:57 GMT
- Title: Unity Style Transfer for Person Re-Identification
- Authors: Chong Liu and Xiaojun Chang and Yi-Dong Shen
- Abstract summary: Style variation has been a major challenge for person re-identification, which aims to match the same pedestrians across different cameras.
We propose a UnityStyle adaption method, which can smooth the style disparities within the same camera and across different cameras.
We conduct extensive experiments on widely used benchmark datasets to evaluate the performance of the proposed framework.
- Score: 54.473456125525196
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Style variation has been a major challenge for person re-identification,
which aims to match the same pedestrians across different cameras. Existing
works attempted to address this problem with camera-invariant descriptor
subspace learning. However, image artifacts increase as the difference between
the images taken by different cameras grows. To solve
this problem, we propose a UnityStyle adaption method, which can smooth the
style disparities within the same camera and across different cameras.
Specifically, we first create UnityGAN to learn the style changes between
cameras, producing shape-stable, style-unified images for each camera, which
we call UnityStyle images. Meanwhile, we use UnityStyle images to eliminate
style differences between images, which enables better matching between
query and gallery. Then, we apply the proposed method to Re-ID models,
expecting to obtain more style-robust deep features for querying. We conduct
extensive experiments on widely used benchmark datasets to evaluate the
performance of the proposed framework, the results of which confirm the
superiority of the proposed model.
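To make the retrieval step concrete, the following is a minimal sketch (not the authors' released code) of how style-unified images could be used at query time: both query and gallery images are passed through a generator standing in for UnityGAN, features are extracted by a Re-ID backbone, and gallery entries are ranked by cosine similarity. `TinyGenerator`, `TinyBackbone`, and `rank_gallery` are hypothetical placeholders introduced only for this illustration.

```python
# Hedged sketch of UnityStyle-style retrieval: translate images to a unified
# style, extract features, rank gallery by cosine similarity to the query.
# All module definitions here are toy stand-ins, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Stand-in for UnityGAN: maps a camera-specific image to a style-unified image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class TinyBackbone(nn.Module):
    """Stand-in for the Re-ID feature extractor."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Conv2d(3, dim, 3, padding=1)

    def forward(self, x):
        # Global average pooling to a (N, dim) feature vector.
        return F.adaptive_avg_pool2d(self.conv(x), 1).flatten(1)

def rank_gallery(query_img, gallery_imgs, unity_gan, backbone):
    """Translate query and gallery to style-unified images, extract features,
    and rank gallery entries by cosine similarity to the query."""
    with torch.no_grad():
        q = backbone(unity_gan(query_img))     # (1, D)
        g = backbone(unity_gan(gallery_imgs))  # (N, D)
        sims = F.cosine_similarity(q, g)       # (N,)
    return sims.argsort(descending=True)

if __name__ == "__main__":
    # Random tensors stand in for real pedestrian crops.
    gen, net = TinyGenerator().eval(), TinyBackbone().eval()
    query = torch.randn(1, 3, 128, 64)
    gallery = torch.randn(8, 3, 128, 64)
    print(rank_gallery(query, gallery, gen, net))
```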
Related papers
- Distractors-Immune Representation Learning with Cross-modal Contrastive Regularization for Change Captioning [71.14084801851381]
Change captioning aims to succinctly describe the semantic change between a pair of similar images.
Most existing methods directly capture the difference between them, which risk obtaining error-prone difference features.
We propose a distractors-immune representation learning network that correlates the corresponding channels of two image representations.
arXiv Detail & Related papers (2024-07-16T13:00:33Z)
- Camera-aware Label Refinement for Unsupervised Person Re-identification [19.099192313056296]
Unsupervised person re-identification aims to retrieve images of a specified person without identity labels.
Recent unsupervised Re-ID approaches adopt clustering-based methods to measure cross-camera feature similarity.
We introduce a Camera-Aware Label Refinement (CALR) framework that reduces camera discrepancy by clustering intra-camera similarity.
arXiv Detail & Related papers (2024-03-25T06:22:27Z)
- Learning Intra and Inter-Camera Invariance for Isolated Camera Supervised Person Re-identification [6.477096324232456]
Cross-camera images are prone to being recognized as different IDs merely because of camera style.
This paper studies person re-ID under such isolated camera supervised (ISCS) setting.
arXiv Detail & Related papers (2023-11-02T11:32:40Z)
- Pseudo Labels Refinement with Intra-camera Similarity for Unsupervised Person Re-identification [8.779246907359706]
Unsupervised person re-identification (Re-ID) aims to retrieve person images across cameras without any identity labels.
Most clustering-based methods roughly divide image features into clusters and neglect the feature distribution noise caused by domain shifts among different cameras.
We propose a novel label refinement framework based on clustering intra-camera similarity.
arXiv Detail & Related papers (2023-04-25T08:04:12Z)
- Generalizable Person Re-Identification via Viewpoint Alignment and Fusion [74.30861504619851]
This work proposes to use a 3D dense pose estimation model and a texture mapping module to map pedestrian images to canonical view images.
Due to the imperfection of the texture mapping module, the canonical view images may lose the discriminative detail clues from the original images.
We show that our method can lead to superior performance over the existing approaches in various evaluation settings.
arXiv Detail & Related papers (2022-12-05T16:24:09Z)
- Camera-aware Style Separation and Contrastive Learning for Unsupervised Person Re-identification [16.045209899229548]
Unsupervised person re-identification (ReID) is a challenging task without data annotation.
We propose a camera-aware style separation and contrastive learning method (CA-UReID)
It can explicitly divide the learnable feature into camera-specific and camera-agnostic parts, reducing the influence of different cameras.
arXiv Detail & Related papers (2021-12-19T08:53:42Z)
- Wide-Baseline Multi-Camera Calibration using Person Re-Identification [27.965850489928457]
We address the problem of estimating the 3D pose of a network of cameras for large-environment wide-baseline scenarios.
Treating people in the scene as "keypoints" and associating them across different camera views can be an alternative method for obtaining correspondences.
Our method first employs a re-ID method to associate human bounding boxes across cameras, then converts bounding box correspondences to point correspondences.
arXiv Detail & Related papers (2021-04-17T15:09:18Z)
- Joint Noise-Tolerant Learning and Meta Camera Shift Adaptation for Unsupervised Person Re-Identification [60.36551512902312]
Unsupervised person re-identification (re-ID) aims to learn discriminative models with unlabeled data.
One popular method is to obtain pseudo-labels by clustering and use them to optimize the model.
In this paper, we propose a unified framework to solve both problems.
arXiv Detail & Related papers (2021-03-08T09:13:06Z)
- Camera-aware Proxies for Unsupervised Person Re-Identification [60.26031011794513]
This paper tackles the purely unsupervised person re-identification (Re-ID) problem that requires no annotations.
We propose to split each single cluster into multiple proxies and each proxy represents the instances coming from the same camera.
Based on the camera-aware proxies, we design both intra- and inter-camera contrastive learning components for our Re-ID model (see the sketch after this list for the general proxy idea).
arXiv Detail & Related papers (2020-12-19T12:37:04Z)
- Apparel-invariant Feature Learning for Apparel-changed Person Re-identification [70.16040194572406]
Most public ReID datasets are collected in a short time window in which persons' appearance rarely changes.
In real-world applications such as a shopping mall, the same person's clothing may change, and different persons may wear similar clothes.
It is critical to learn an apparel-invariant person representation under cases like cloth changing or several persons wearing similar clothes.
arXiv Detail & Related papers (2020-08-14T03:49:14Z)
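Several of the related papers above revolve around camera-aware clustering; the camera-aware proxies entry in particular describes splitting each cluster into per-camera proxies and contrasting instances against them. Below is a minimal, hedged sketch of that general idea under simplifying assumptions: it uses a single contrastive term over all proxies rather than the separate intra- and inter-camera losses described there. `build_camera_proxies`, `proxy_contrastive_loss`, and the toy data are made up for this illustration and are not taken from any of the listed papers.

```python
# Hedged sketch of camera-aware proxies: one proxy per (cluster, camera) pair,
# then a contrastive loss where the positive is the instance's own proxy and
# all other proxies act as negatives.
import torch
import torch.nn.functional as F

def build_camera_proxies(feats, cluster_ids, cam_ids):
    """Return one L2-normalized proxy (mean feature) per (cluster, camera) pair."""
    proxies, keys = [], []
    for key in {(c.item(), k.item()) for c, k in zip(cluster_ids, cam_ids)}:
        mask = (cluster_ids == key[0]) & (cam_ids == key[1])
        proxies.append(F.normalize(feats[mask].mean(0), dim=0))
        keys.append(key)
    return torch.stack(proxies), keys

def proxy_contrastive_loss(feat, cluster_id, cam_id, proxies, keys, temp=0.07):
    """Cross-entropy over proxy similarities; the positive is the proxy that
    shares this instance's cluster and camera."""
    sims = F.normalize(feat, dim=0) @ proxies.T / temp     # (num_proxies,)
    target = keys.index((cluster_id, cam_id))
    return F.cross_entropy(sims.unsqueeze(0), torch.tensor([target]))

# Toy usage: 8 random features, 2 clusters x 2 cameras.
feats = torch.randn(8, 16)
cluster_ids = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
cam_ids = torch.tensor([0, 0, 1, 1, 0, 0, 1, 1])
proxies, keys = build_camera_proxies(feats, cluster_ids, cam_ids)
print(proxy_contrastive_loss(feats[0], 0, 0, proxies, keys))
```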
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.