Viewpoint-aware Progressive Clustering for Unsupervised Vehicle
Re-identification
- URL: http://arxiv.org/abs/2011.09099v1
- Date: Wed, 18 Nov 2020 05:40:14 GMT
- Title: Viewpoint-aware Progressive Clustering for Unsupervised Vehicle
Re-identification
- Authors: Aihua Zheng, Xia Sun, Chenglong Li, Jin Tang
- Abstract summary: We propose a novel viewpoint-aware clustering algorithm for unsupervised vehicle Re-ID.
In particular, we first divide the entire feature space into different subspaces according to the predicted viewpoints and then perform progressive clustering to mine accurate relationships among samples.
- Score: 36.60241974421236
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vehicle re-identification (Re-ID) is an active task due to its importance in
large-scale intelligent monitoring in smart cities. Despite the rapid progress
in recent years, most existing methods handle the vehicle Re-ID task in a
supervised manner, which is both time- and labor-consuming and limits their
application to real-life scenarios. Recently, unsupervised person Re-ID methods
have achieved impressive performance by exploring domain adaptation or
clustering-based techniques. However, one cannot directly generalize these
methods to vehicle Re-ID, since vehicle images present huge appearance
variations across different viewpoints. To handle this problem, we propose a
novel viewpoint-aware clustering algorithm for unsupervised vehicle Re-ID. In
particular, we first divide the entire feature space into different subspaces
according to the predicted viewpoints and then perform progressive clustering
to mine accurate relationships among samples. Comprehensive experiments against
state-of-the-art methods on two multi-viewpoint benchmark datasets, VeRi and
VeRi-Wild, validate the promising performance of the proposed method for
unsupervised vehicle Re-ID, both with and without domain adaptation.
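The core recipe in the abstract, splitting the feature space by predicted viewpoint and then clustering progressively to obtain pseudo identity labels, can be illustrated with a minimal sketch. The code below is only an assumption-laden illustration of that general idea, not the authors' exact algorithm: it assumes CNN features and viewpoint predictions are already available, and the DBSCAN clusterer, the relaxing eps schedule, and the cross-viewpoint centroid merge are hypothetical choices.

```python
# A minimal sketch (not the authors' exact algorithm) of viewpoint-aware
# progressive clustering: cluster inside each predicted-viewpoint subspace
# with a progressively relaxed radius, then merge near-identical clusters
# across viewpoints. Features and viewpoint predictions are assumed given;
# the DBSCAN settings and merge threshold are illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics.pairwise import cosine_distances


def viewpoint_aware_progressive_clustering(features, viewpoints,
                                           eps_schedule=(0.4, 0.5, 0.6),
                                           min_samples=4, merge_thresh=0.3):
    """features: (N, D) L2-normalized embeddings; viewpoints: (N,) predicted
    viewpoint ids. Returns pseudo identity labels, -1 meaning unclustered."""
    labels = -np.ones(len(features), dtype=int)
    next_label = 0

    # Stage 1: cluster within each viewpoint subspace, relaxing the DBSCAN
    # radius step by step so that tight, reliable clusters are mined first.
    for vp in np.unique(viewpoints):
        unassigned = np.where(viewpoints == vp)[0]
        for eps in eps_schedule:
            if len(unassigned) < min_samples:
                break
            local = DBSCAN(eps=eps, min_samples=min_samples,
                           metric="cosine").fit_predict(features[unassigned])
            for c in np.unique(local[local >= 0]):
                labels[unassigned[local == c]] = next_label
                next_label += 1
            unassigned = unassigned[local < 0]

    # Stage 2: merge clusters from different subspaces whose centroids are
    # close, so one vehicle seen from several viewpoints shares a pseudo id.
    cluster_ids = list(np.unique(labels[labels >= 0]))
    if len(cluster_ids) >= 2:
        centroids = np.stack([features[labels == c].mean(axis=0)
                              for c in cluster_ids])
        dist = cosine_distances(centroids)
        for i in range(len(cluster_ids)):
            for j in range(i + 1, len(cluster_ids)):
                if dist[i, j] < merge_thresh:
                    labels[labels == cluster_ids[j]] = cluster_ids[i]
                    cluster_ids[j] = cluster_ids[i]
    return labels
```

In a typical clustering-based unsupervised Re-ID pipeline, such pseudo labels would then supervise fine-tuning of the feature extractor, with clustering repeated over training rounds to gradually refine them.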
Related papers
- Revisiting Multi-Granularity Representation via Group Contrastive Learning for Unsupervised Vehicle Re-identification [2.4822156881137367]
We propose an unsupervised vehicle ReID framework (MGR-GCL).
It integrates a multi-granularity CNN representation for learning discriminative transferable features.
It generates pseudo labels for the target dataset, facilitating the domain adaptation process.
arXiv Detail & Related papers (2024-10-29T02:24:36Z) - What Matters in Autonomous Driving Anomaly Detection: A Weakly Supervised Horizon [12.88166582566313]
Video anomaly detection (VAD) in autonomous driving scenarios is an important task; however, it involves several challenges due to the ego-centric views and moving cameras.
Recent developments in weakly-supervised VAD methods have shown remarkable progress in detecting critical real-world anomalies in static-camera scenarios.
We aim to promote weakly-supervised method development for autonomous driving VAD.
arXiv Detail & Related papers (2024-08-10T14:04:52Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal
Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Multi-query Vehicle Re-identification: Viewpoint-conditioned Network,
Unified Dataset and New Metric [30.344288906037345]
We propose a more realistic and easily accessible task, called multi-query vehicle Re-ID.
We design a novel viewpoint-conditioned network (VCNet), which adaptively combines the complementary information from different vehicle viewpoints.
Second, we create a unified benchmark dataset captured by 6142 cameras from a real-life transportation surveillance system.
Third, we design a new evaluation metric, called mean cross-scene precision (mCSP), which measures the ability of cross-scene recognition.
arXiv Detail & Related papers (2023-05-25T06:22:03Z) - ConMAE: Contour Guided MAE for Unsupervised Vehicle Re-Identification [8.950873153831735]
Considering that the Masked Autoencoder (MAE) has shown excellent performance in self-supervised learning, this work designs a Contour Guided Masked Autoencoder for Unsupervised Vehicle Re-Identification (ConMAE).
arXiv Detail & Related papers (2023-02-11T12:10:25Z) - Discriminative-Region Attention and Orthogonal-View Generation Model for
Vehicle Re-Identification [7.5366501970852955]
Multiple challenges hamper the applications of vision-based vehicle Re-ID methods.
The proposed DRA model can automatically extract discriminative region features, which help distinguish similar vehicles.
The OVG model can generate multi-view features based on the input view features to reduce the impact of viewpoint mismatches.
arXiv Detail & Related papers (2022-04-28T07:46:03Z) - Unsupervised Pretraining for Object Detection by Patch Reidentification [72.75287435882798]
Unsupervised representation learning achieves promising performance in pre-training representations for object detectors.
This work proposes a simple yet effective representation learning method for object detection, named patch re-identification (Re-ID).
Our method significantly outperforms its counterparts on COCO in all settings, such as different training iterations and data percentages.
arXiv Detail & Related papers (2021-03-08T15:13:59Z) - VehicleNet: Learning Robust Visual Representation for Vehicle
Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z) - Parsing-based View-aware Embedding Network for Vehicle Re-Identification [138.11983486734576]
We propose a parsing-based view-aware embedding network (PVEN) to achieve view-aware feature alignment and enhancement for vehicle ReID.
The experiments conducted on three datasets show that our model outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-04-10T13:06:09Z) - Unsupervised Domain Adaptation in Person re-ID via k-Reciprocal
Clustering and Large-Scale Heterogeneous Environment Synthesis [76.46004354572956]
We introduce an unsupervised domain adaptation approach for person re-identification.
Experimental results show that the proposed ktCUDA and SHRED approach achieves an average improvement of +5.7 mAP in re-identification performance.
arXiv Detail & Related papers (2020-01-14T17:43:52Z)
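The entry above builds on k-reciprocal clustering. As a rough illustration of the underlying k-reciprocal neighbour test (only the generic idea, not the paper's ktCUDA/SHRED pipeline; the helper name and parameters here are assumptions for illustration):

```python
# Minimal sketch of the k-reciprocal neighbour test that underlies
# k-reciprocal clustering: i and j are kept only if each appears in the
# other's k-nearest-neighbour list. Generic idea only; the paper's full
# pipeline adds synthesis and domain-transfer steps not shown here.
import numpy as np


def k_reciprocal_neighbors(features, k=20):
    """features: (N, D) embeddings. Returns a list where entry i holds the
    indices j such that i and j are each in the other's k nearest neighbours."""
    # Pairwise Euclidean distances (small N assumed; use an ANN library for large sets).
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-matches
    knn = np.argsort(d, axis=1)[:, :k]          # k nearest neighbours per sample
    knn_sets = [set(row) for row in knn]
    return [[j for j in knn[i] if i in knn_sets[j]] for i in range(len(features))]
```

Pairs that pass this mutual-neighbour test are far more likely to share an identity than plain nearest neighbours, which is why they make a conservative basis for clustering or re-ranking.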
This list is automatically generated from the titles and abstracts of the papers in this site.