Multi-query Vehicle Re-identification: Viewpoint-conditioned Network,
Unified Dataset and New Metric
- URL: http://arxiv.org/abs/2305.15764v1
- Date: Thu, 25 May 2023 06:22:03 GMT
- Title: Multi-query Vehicle Re-identification: Viewpoint-conditioned Network,
Unified Dataset and New Metric
- Authors: Aihua Zheng, Chaobin Zhang, Weijun Zhang, Chenglong Li, Jin Tang,
Chang Tan, Ruoran Jia
- Abstract summary: We propose a more realistic and easily accessible task, called multi-query vehicle Re-ID.
We design a novel viewpoint-conditioned network (VCNet), which adaptively combines the complementary information from different vehicle viewpoints.
- Second, we create a unified benchmark dataset, captured by 6142 cameras in a real-life transportation surveillance system.
Third, we design a new evaluation metric, called mean cross-scene precision (mCSP), which measures the ability of cross-scene recognition.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing vehicle re-identification methods mainly rely on the single query,
which has limited information for vehicle representation and thus significantly
hinders the performance of vehicle Re-ID in complicated surveillance networks.
In this paper, we propose a more realistic and easily accessible task, called
multi-query vehicle Re-ID, which leverages multiple queries to overcome
viewpoint limitation of single one. Based on this task, we make three major
contributions. First, we design a novel viewpoint-conditioned network (VCNet),
which adaptively combines the complementary information from different vehicle
viewpoints, for multi-query vehicle Re-ID. Moreover, to deal with the problem
of missing vehicle viewpoints, we propose a cross-view feature recovery module
which recovers the features of the missing viewpoints by learning the correlation
between the features of the available and missing viewpoints. Second, we create a
unified benchmark dataset, captured by 6142 cameras in a real-life
transportation surveillance system, with comprehensive viewpoints and a large
number of crossed scenes for each vehicle, for multi-query vehicle Re-ID
evaluation. Finally, we design a new evaluation metric, called mean cross-scene
precision (mCSP), which measures the ability of cross-scene recognition by
suppressing the positive samples with similar viewpoints from the same camera.
Comprehensive experiments validate the superiority of the proposed method
over other methods, as well as the effectiveness of the designed metric in
the evaluation of multi-query vehicle Re-ID.
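
To make the multi-query idea concrete, here is a minimal sketch of adaptively fusing several query embeddings with viewpoint-conditioned weights. It only illustrates the general mechanism described in the abstract; the module name, feature dimensions, and gating design are assumptions and do not reproduce VCNet's actual architecture.

```python
import torch
import torch.nn as nn

class MultiQueryFusion(nn.Module):
    """Toy viewpoint-conditioned fusion of several query embeddings.

    Each query image contributes a feature vector and a viewpoint label; a
    small gating network scores each feature conditioned on its viewpoint,
    and the scores weight a sum over the available queries.  Illustrative
    sketch only, not the paper's VCNet.
    """

    def __init__(self, feat_dim: int = 256, num_viewpoints: int = 4):
        super().__init__()
        self.view_embed = nn.Embedding(num_viewpoints, feat_dim)
        self.gate = nn.Linear(2 * feat_dim, 1)

    def forward(self, feats: torch.Tensor, views: torch.Tensor) -> torch.Tensor:
        # feats: (num_queries, feat_dim); views: (num_queries,) integer labels
        cond = torch.cat([feats, self.view_embed(views)], dim=-1)
        weights = torch.softmax(self.gate(cond).squeeze(-1), dim=0)
        return (weights.unsqueeze(-1) * feats).sum(dim=0)  # fused (feat_dim,)

# Example: fuse three query crops of the same vehicle seen from different sides.
fusion = MultiQueryFusion(feat_dim=256, num_viewpoints=4)
feats = torch.randn(3, 256)          # embeddings from some Re-ID backbone
views = torch.tensor([0, 1, 3])      # e.g. front, rear, side labels
fused = fusion(feats, views)         # single 256-d descriptor for matching
```

The fused descriptor can then be matched against gallery features with the usual cosine or Euclidean distance.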
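Similarly, below is a minimal sketch of how a cross-scene precision metric in the spirit of mCSP could be computed, assuming per-image identity, camera, and viewpoint labels are available. The exact suppression rule and averaging used in the paper may differ, so every name and detail here is illustrative.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GalleryItem:
    vehicle_id: int   # identity label
    camera_id: int    # which camera captured the image
    viewpoint: str    # e.g. "front", "rear", "side"

def cross_scene_ap(query_id: int, query_cam: int, query_view: str,
                   ranked_gallery: List[GalleryItem]) -> float:
    """Average precision for one query, counting only cross-scene positives.

    Positives that share both the camera and the viewpoint of the query are
    suppressed (skipped), so trivially easy matches do not inflate the score.
    Illustrative reading of mCSP, not the paper's exact definition.
    """
    hits, rank, precisions = 0, 0, []
    for item in ranked_gallery:
        is_positive = item.vehicle_id == query_id
        if is_positive and item.camera_id == query_cam and item.viewpoint == query_view:
            continue  # drop same-camera, similar-viewpoint positives
        rank += 1
        if is_positive:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_cross_scene_precision(per_query_aps: List[float]) -> float:
    """mCSP: mean of the per-query cross-scene average precisions."""
    return sum(per_query_aps) / len(per_query_aps) if per_query_aps else 0.0
```

In this reading, a method only scores well if it retrieves the same vehicle from other cameras and viewpoints, which is what the metric is intended to reward.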
Related papers
- Discriminative-Region Attention and Orthogonal-View Generation Model for
Vehicle Re-Identification [7.5366501970852955]
Multiple challenges hamper the applications of vision-based vehicle Re-ID methods.
The proposed DRA model can automatically extract the discriminative region features, which can distinguish similar vehicles.
The OVG model can generate multi-view features based on the input view features to reduce the impact of viewpoint mismatches.
arXiv Detail & Related papers (2022-04-28T07:46:03Z) - Pluggable Weakly-Supervised Cross-View Learning for Accurate Vehicle
Re-Identification [53.6218051770131]
Cross-view consistent feature representation is key for accurate vehicle ReID.
Existing approaches resort to supervised cross-view learning using extensive extra viewpoints annotations.
We present a pluggable Weakly-supervised Cross-View Learning (WCVL) module for vehicle ReID.
arXiv Detail & Related papers (2021-03-09T11:51:09Z) - Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data
Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z) - Viewpoint-aware Progressive Clustering for Unsupervised Vehicle
Re-identification [36.60241974421236]
We propose a novel viewpoint-aware clustering algorithm for unsupervised vehicle Re-ID.
In particular, we first divide the entire feature space into different subspaces according to the predicted viewpoints and then perform a progressive clustering to mine the accurate relationship among samples.
arXiv Detail & Related papers (2020-11-18T05:40:14Z) - Towards Autonomous Driving: a Multi-Modal 360$^{\circ}$ Perception
Proposal [87.11988786121447]
This paper presents a framework for 3D object detection and tracking for autonomous vehicles.
The solution, based on a novel sensor fusion configuration, provides accurate and reliable road environment detection.
A variety of tests of the system, deployed in an autonomous vehicle, have successfully assessed the suitability of the proposed perception stack.
arXiv Detail & Related papers (2020-08-21T20:36:21Z) - VehicleNet: Learning Robust Visual Representation for Vehicle
Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z) - The Devil is in the Details: Self-Supervised Attention for Vehicle
Re-Identification [75.3310894042132]
Self-supervised Attention for Vehicle Re-identification (SAVER) is a novel approach to effectively learn vehicle-specific discriminative features.
We show that SAVER improves upon the state-of-the-art on challenging VeRi, VehicleID, Vehicle-1M and VERI-Wild datasets.
arXiv Detail & Related papers (2020-04-14T02:24:47Z) - Parsing-based View-aware Embedding Network for Vehicle Re-Identification [138.11983486734576]
We propose a parsing-based view-aware embedding network (PVEN) to achieve the view-aware feature alignment and enhancement for vehicle ReID.
The experiments conducted on three datasets show that our model outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-04-10T13:06:09Z) - DCDLearn: Multi-order Deep Cross-distance Learning for Vehicle
Re-Identification [22.547915009758256]
This paper formulates a multi-order deep cross-distance learning model for vehicle re-identification.
A one-view CycleGAN model is developed to alleviate the exhaustive and enumerative cross-camera matching problem.
Experiments on three vehicle Re-ID datasets demonstrate that the proposed method achieves significant improvement over state-of-the-art methods.
arXiv Detail & Related papers (2020-03-25T10:46:54Z)