Camera-Tracklet-Aware Contrastive Learning for Unsupervised Vehicle
Re-Identification
- URL: http://arxiv.org/abs/2109.06401v1
- Date: Tue, 14 Sep 2021 02:12:54 GMT
- Title: Camera-Tracklet-Aware Contrastive Learning for Unsupervised Vehicle
Re-Identification
- Authors: Jongmin Yu, Junsik Kim, Minkyung Kim, and Hyeontaek Oh
- Abstract summary: We propose camera-tracklet-aware contrastive learning (CTACL) using the multi-camera tracklet information without vehicle identity labels.
The proposed CTACL divides an unlabelled domain, i.e., entire vehicle images, into multiple camera-level subdomains and conducts contrastive learning within and beyond them.
We demonstrate the effectiveness of our approach on video-based and image-based vehicle Re-ID datasets.
- Score: 4.5471611558189124
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recently, vehicle re-identification methods based on deep learning have
achieved remarkable results. However, this achievement requires large-scale and
well-annotated datasets. In constructing such a dataset, assigning globally
consistent identities (IDs) to vehicles captured by a great number of cameras
is labour-intensive, because their subtle appearance differences and viewpoint
variations must be taken into account. In this paper, we propose
camera-tracklet-aware contrastive learning (CTACL) using the multi-camera
tracklet information without vehicle identity labels. The proposed CTACL
divides an unlabelled domain, i.e., entire vehicle images, into multiple
camera-level subdomains and conducts contrastive learning within and beyond the
subdomains. The positive and negative samples for contrastive learning are
defined using tracklet Ids of each camera. Additionally, the domain adaptation
across camera networks is introduced to improve the generalisation performance
of learnt representations and alleviate the performance degradation resulting
from the domain gap between the subdomains. We demonstrate the effectiveness of
our approach on video-based and image-based vehicle Re-ID datasets.
Experimental results show that the proposed method outperforms the recent
state-of-the-art unsupervised vehicle Re-ID methods. The source code for this
paper is publicly available at
https://github.com/andreYoo/CTAM-CTACL-VVReID.git.
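To make the camera-tracklet positive/negative definition concrete, here is a minimal, hedged PyTorch-style sketch; it is not the released implementation (see the repository above), and the function name, arguments, and the InfoNCE-style formulation are assumptions. It covers only the intra-subdomain part: within one camera, images sharing a tracklet ID are treated as positives and all other tracklets of that camera as negatives.

```python
# Illustrative sketch only: names, arguments, and the InfoNCE-style form are assumptions,
# not the authors' released code (https://github.com/andreYoo/CTAM-CTACL-VVReID.git).
import torch
import torch.nn.functional as F

def camera_tracklet_contrastive_loss(features, camera_ids, tracklet_ids, temperature=0.1):
    """Contrastive loss computed inside each camera-level subdomain.

    features:     (N, D) image embeddings from the backbone.
    camera_ids:   (N,)   camera index of each image (defines the subdomain).
    tracklet_ids: (N,)   tracklet ID within that camera (defines positives).
    """
    features = F.normalize(features, dim=1)               # cosine-similarity space
    losses = []
    for cam in camera_ids.unique():
        idx = (camera_ids == cam).nonzero(as_tuple=True)[0]
        if idx.numel() < 2:
            continue                                       # nothing to contrast against
        feats, tids = features[idx], tracklet_ids[idx]
        sim = feats @ feats.t() / temperature              # (n, n) pairwise similarities
        self_mask = torch.eye(idx.numel(), device=features.device)
        pos_mask = (tids.unsqueeze(0) == tids.unsqueeze(1)).float() - self_mask
        logits = sim - 1e9 * self_mask                     # exclude self-pairs from the softmax
        log_prob = F.log_softmax(logits, dim=1)
        n_pos = pos_mask.sum(dim=1)
        valid = n_pos > 0                                  # anchors with at least one tracklet mate
        if valid.any():
            loss = -(pos_mask * log_prob).sum(dim=1)[valid] / n_pos[valid]
            losses.append(loss.mean())
    return torch.stack(losses).mean() if losses else features.new_zeros(())
```

In CTACL as described in the abstract, such a loss would be complemented by contrastive terms reaching beyond each subdomain and by the cross-camera domain adaptation, neither of which is sketched here.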
Related papers
- Revisiting Multi-Granularity Representation via Group Contrastive Learning for Unsupervised Vehicle Re-identification [2.4822156881137367]
We propose an unsupervised vehicle ReID framework (MGR-GCL).
It integrates a multi-granularity CNN representation for learning discriminative transferable features.
It generates pseudo labels for the target dataset, facilitating the domain adaptation process.
arXiv Detail & Related papers (2024-10-29T02:24:36Z)
- Camera-Driven Representation Learning for Unsupervised Domain Adaptive Person Re-identification [33.25577310265293]
We introduce a camera-driven curriculum learning framework that leverages camera labels to transfer knowledge from source to target domains progressively.
For each curriculum sequence, we generate pseudo labels of person images in a target domain to train a reID model in a supervised way.
We have observed that the pseudo labels are highly biased toward cameras, suggesting that person images obtained from the same camera are likely to have the same pseudo labels, even for different IDs.
arXiv Detail & Related papers (2023-08-23T04:01:56Z)
- Camera Alignment and Weighted Contrastive Learning for Domain Adaptation in Video Person ReID [17.90248359024435]
Systems for person re-identification (ReID) can achieve a high accuracy when trained on large fully-labeled image datasets.
The domain shift associated with diverse operational capture conditions (e.g., camera viewpoints and lighting) may translate to a significant decline in performance.
This paper focuses on unsupervised domain adaptation (UDA) for video-based ReID.
arXiv Detail & Related papers (2022-11-07T15:32:56Z)
- A High-Accuracy Unsupervised Person Re-identification Method Using Auxiliary Information Mined from Datasets [53.047542904329866]
We make use of auxiliary information mined from datasets for multi-modal feature learning.
This paper proposes three effective training tricks, including Restricted Label Smoothing Cross Entropy Loss (RLSCE), Weight Adaptive Triplet Loss (WATL) and Dynamic Training Iterations (DTI).
arXiv Detail & Related papers (2022-05-06T10:16:18Z)
- Cross-Camera Feature Prediction for Intra-Camera Supervised Person Re-identification across Distant Scenes [70.30052164401178]
Person re-identification (Re-ID) aims to match person images across non-overlapping camera views.
ICS-DS Re-ID uses cross-camera unpaired data with intra-camera identity labels for training.
A cross-camera feature prediction method is used to mine cross-camera self-supervision information.
Joint learning of global-level and local-level features forms a global-local cross-camera feature prediction scheme.
arXiv Detail & Related papers (2021-07-29T11:27:50Z)
- Unsupervised Pretraining for Object Detection by Patch Reidentification [72.75287435882798]
Unsupervised representation learning achieves promising performances in pre-training representations for object detectors.
This work proposes a simple yet effective representation learning method for object detection, named patch re-identification (Re-ID).
Our method significantly outperforms its counterparts on COCO in all settings, such as different training iterations and data percentages.
arXiv Detail & Related papers (2021-03-08T15:13:59Z)
- Joint Noise-Tolerant Learning and Meta Camera Shift Adaptation for Unsupervised Person Re-Identification [60.36551512902312]
Unsupervised person re-identification (re-ID) aims to learn discriminative models with unlabeled data.
One popular method is to obtain pseudo-labels by clustering and use them to optimize the model.
In this paper, we propose a unified framework to solve both problems.
arXiv Detail & Related papers (2021-03-08T09:13:06Z)
- Unsupervised Vehicle Re-Identification via Self-supervised Metric Learning using Feature Dictionary [1.7894377200944507]
The key challenge of unsupervised vehicle re-identification (Re-ID) is learning discriminative features from unlabelled vehicle images.
This paper presents an unsupervised vehicle Re-ID method that does not require any type of labelled dataset.
arXiv Detail & Related papers (2021-03-03T08:29:03Z)
- Camera-aware Proxies for Unsupervised Person Re-Identification [60.26031011794513]
This paper tackles the purely unsupervised person re-identification (Re-ID) problem that requires no annotations.
We propose to split each single cluster into multiple proxies and each proxy represents the instances coming from the same camera.
Based on the camera-aware proxies, we design both intra- and inter-camera contrastive learning components for our Re-ID model (a minimal sketch of the proxy construction appears after this list).
arXiv Detail & Related papers (2020-12-19T12:37:04Z)
- VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z)
- DCDLearn: Multi-order Deep Cross-distance Learning for Vehicle Re-Identification [22.547915009758256]
This paper formulates a multi-order deep cross-distance learning model for vehicle re-identification.
A one-view CycleGAN model is developed to alleviate the exhaustive and enumerative cross-camera matching problem.
Experiments on three vehicle Re-ID datasets demonstrate that the proposed method achieves significant improvements over state-of-the-art methods.
arXiv Detail & Related papers (2020-03-25T10:46:54Z)
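As a side illustration of the camera-aware proxy idea summarised in the "Camera-aware Proxies for Unsupervised Person Re-Identification" entry above, the following is a small hedged sketch: it assumes pseudo-labels have already been produced by clustering and simply splits each cluster into per-camera proxies (mean embeddings). The function and variable names are illustrative assumptions, not that paper's code, and the intra-/inter-camera contrastive losses built on top of the proxies are not reproduced here.

```python
# Hedged sketch of per-camera proxy construction (assumed names, not the paper's code).
from collections import defaultdict
import numpy as np

def build_camera_aware_proxies(features, cluster_ids, camera_ids):
    """Split every pseudo-label cluster into per-camera proxies (mean embeddings).

    features:    (N, D) array of image embeddings.
    cluster_ids: (N,)   pseudo-labels from clustering (e.g. DBSCAN); -1 marks outliers.
    camera_ids:  (N,)   camera index of each image.

    Returns {(cluster_id, camera_id): proxy_centroid}.
    """
    groups = defaultdict(list)
    for feat, cid, cam in zip(features, cluster_ids, camera_ids):
        if cid < 0:                       # skip clustering outliers
            continue
        groups[(int(cid), int(cam))].append(feat)
    return {key: np.stack(vecs).mean(axis=0) for key, vecs in groups.items()}
```

With such proxies, intra-camera contrast would pull an image towards its own (cluster, camera) proxy and away from the other proxies of the same camera, while inter-camera contrast compares proxies of the same cluster across cameras, as described in that entry.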
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.