Discriminative Feature Representation with Spatio-temporal Cues for
Vehicle Re-identification
- URL: http://arxiv.org/abs/2011.06852v1
- Date: Fri, 13 Nov 2020 10:50:21 GMT
- Authors: J. Tu, C. Chen, X. Huang, J. He and X. Guan
- Abstract summary: Vehicle re-identification (re-ID) aims to discover and match target vehicles in a gallery image set taken by different cameras across a wide road network.
We propose a discriminative feature representation with novel spatio-temporal cues (DFR-ST) for vehicle re-ID.
It builds robust features in the embedding space by combining appearance and spatio-temporal information.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vehicle re-identification (re-ID) aims to discover and match the target
vehicles from a gallery image set taken by different cameras on a wide range of
road networks. It is crucial for many applications such as security
surveillance and traffic management. The remarkably similar appearances of
distinct vehicles and the significant changes in viewpoint and illumination
conditions pose great challenges to vehicle re-ID. Conventional solutions focus
on designing global visual appearances without sufficient consideration of
vehicles' spatio-temporal relationships across different images. In this paper, we
propose a novel discriminative feature representation with spatio-temporal cues
(DFR-ST) for vehicle re-ID. It is capable of building robust features in the
embedding space by involving appearance and spatio-temporal information. Based
on this multi-modal information, the proposed DFR-ST constructs an appearance
model for a multi-grained visual representation by a two-stream architecture
and a spatio-temporal metric to provide complementary information. Experimental
results on two public datasets demonstrate that DFR-ST outperforms
state-of-the-art methods, which validates the effectiveness of the proposed
method.
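The fusion of an appearance model with a spatio-temporal metric described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the cosine appearance distance, the travel-speed plausibility penalty, and the weighting parameter `alpha` are all illustrative assumptions standing in for DFR-ST's two-stream architecture and learned metric.

```python
import numpy as np

def appearance_distance(query_feat: np.ndarray, gallery_feats: np.ndarray) -> np.ndarray:
    """Cosine distance between one query embedding and each gallery embedding."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    return 1.0 - g @ q

def spatio_temporal_penalty(dt_sec: np.ndarray, dist_km: np.ndarray,
                            max_speed_kmh: float = 120.0) -> np.ndarray:
    """Penalize gallery candidates whose camera location and timestamp would
    require an implausible travel speed relative to the query (assumed rule,
    not the paper's metric)."""
    hours = np.maximum(np.abs(dt_sec) / 3600.0, 1e-6)
    speed = dist_km / hours
    # Implausibly fast transitions get the maximum penalty of 1.0.
    return np.where(speed > max_speed_kmh, 1.0, speed / max_speed_kmh)

def combined_ranking(query_feat, gallery_feats, dt_sec, dist_km, alpha=0.7):
    """Fuse appearance and spatio-temporal distances; lower score ranks higher."""
    score = alpha * appearance_distance(query_feat, gallery_feats) \
            + (1.0 - alpha) * spatio_temporal_penalty(dt_sec, dist_km)
    return np.argsort(score)
```

For example, a gallery image whose appearance embedding matches the query closely, captured one hour later at a camera 10 km away, would rank above an equally distant camera whose timestamp implies an impossible travel speed.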
Related papers
- Cross-Modality Perturbation Synergy Attack for Person Re-identification [66.48494594909123]
The main challenge in cross-modality ReID lies in effectively dealing with visual differences between different modalities.
Existing attack methods have primarily focused on the characteristics of the visible image modality.
This study proposes a universal perturbation attack specifically designed for cross-modality ReID.
arXiv Detail & Related papers (2024-01-18T15:56:23Z)
- VehicleGAN: Pair-flexible Pose Guided Image Synthesis for Vehicle Re-identification [27.075761782915496]
This paper proposes to synthesize a large number of vehicle images in the target pose.
Considering that paired data of the same vehicles in different traffic surveillance cameras might not be available in the real world, we propose VehicleGAN.
Because of the feature distribution difference between real and synthetic data, we propose a new Joint Metric Learning (JML) via effective feature-level fusion.
arXiv Detail & Related papers (2023-11-27T19:34:04Z)
- Spatial-temporal Vehicle Re-identification [3.7748602100709534]
We propose a spatial-temporal vehicle ReID framework that estimates reliable camera network topology.
Based on the proposed methods, we achieve superior performance on the public dataset (VeRi776) with 99.64% rank-1 accuracy.
arXiv Detail & Related papers (2023-09-03T13:07:38Z)
- Multi-query Vehicle Re-identification: Viewpoint-conditioned Network, Unified Dataset and New Metric [30.344288906037345]
We propose a more realistic and easily accessible task, called multi-query vehicle Re-ID.
We design a novel viewpoint-conditioned network (VCNet), which adaptively combines the complementary information from different vehicle viewpoints.
Second, we create a unified benchmark dataset, taken by 6142 cameras from a real-life transportation surveillance system.
Third, we design a new evaluation metric, called mean cross-scene precision (mCSP), which measures the ability of cross-scene recognition.
arXiv Detail & Related papers (2023-05-25T06:22:03Z)
- Street-View Image Generation from a Bird's-Eye View Layout [95.36869800896335]
Bird's-Eye View (BEV) Perception has received increasing attention in recent years.
Data-driven simulation for autonomous driving has been a focal point of recent research.
We propose BEVGen, a conditional generative model that synthesizes realistic and spatially consistent surrounding images.
arXiv Detail & Related papers (2023-01-11T18:39:34Z)
- Discriminative-Region Attention and Orthogonal-View Generation Model for Vehicle Re-Identification [7.5366501970852955]
Multiple challenges hamper the applications of vision-based vehicle Re-ID methods.
The proposed DRA model can automatically extract the discriminative region features, which can distinguish similar vehicles.
And the OVG model can generate multi-view features based on the input view features to reduce the impact of viewpoint mismatches.
arXiv Detail & Related papers (2022-04-28T07:46:03Z)
- Pluggable Weakly-Supervised Cross-View Learning for Accurate Vehicle Re-Identification [53.6218051770131]
Cross-view consistent feature representation is key for accurate vehicle ReID.
Existing approaches resort to supervised cross-view learning using extensive extra viewpoints annotations.
We present a pluggable Weakly-supervised Cross-View Learning (WCVL) module for vehicle ReID.
arXiv Detail & Related papers (2021-03-09T11:51:09Z)
- Trends in Vehicle Re-identification Past, Present, and Future: A Comprehensive Review [2.9093633827040724]
Vehicle re-ID matches a target vehicle across non-overlapping views in a multi-camera network.
This paper gives a comprehensive description of the various vehicle re-id technologies, methods, datasets, and a comparison of different methodologies.
arXiv Detail & Related papers (2021-02-19T05:02:24Z)
- VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z)
- The Devil is in the Details: Self-Supervised Attention for Vehicle Re-Identification [75.3310894042132]
Self-supervised Attention for Vehicle Re-identification (SAVER) is a novel approach to effectively learn vehicle-specific discriminative features.
We show that SAVER improves upon the state-of-the-art on challenging VeRi, VehicleID, Vehicle-1M and VERI-Wild datasets.
arXiv Detail & Related papers (2020-04-14T02:24:47Z)
- Parsing-based View-aware Embedding Network for Vehicle Re-Identification [138.11983486734576]
We propose a parsing-based view-aware embedding network (PVEN) to achieve the view-aware feature alignment and enhancement for vehicle ReID.
The experiments conducted on three datasets show that our model outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-04-10T13:06:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.