Vehicle Re-Identification Based on Complementary Features
- URL: http://arxiv.org/abs/2005.04463v1
- Date: Sat, 9 May 2020 15:24:51 GMT
- Title: Vehicle Re-Identification Based on Complementary Features
- Authors: Cunyuan Gao, Yi Hu, Yi Zhang, Rui Yao, Yong Zhou, Jiaqi Zhao
- Abstract summary: The purpose of vehicle Re-ID is to retrieve the same vehicle that appears across multiple cameras.
It can make a great contribution to Intelligent Traffic Systems (ITS) and smart cities.
Our method fuses features extracted from different networks in order to take advantage of each network and obtain complementary features.
- Score: 18.633637024602795
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we present our solution to the vehicle re-identification
(vehicle Re-ID) track in the AI City Challenge 2020 (AIC2020). The purpose of
vehicle Re-ID is to retrieve the same vehicle that appears across multiple cameras,
and it can make a great contribution to Intelligent Traffic Systems (ITS)
and smart cities. Due to variations in vehicle orientation and lighting, and to high
inter-class similarity, it is difficult to obtain robust and discriminative feature
representations. For the vehicle Re-ID track in AIC2020, our method fuses features
extracted from different networks in order to take advantage of each network
and obtain complementary features. For each single model, several techniques, such
as multi-loss training, filter grafting, and semi-supervised learning, are used to improve the
representation ability as much as possible. Top performance in City-Scale
Multi-Camera Vehicle Re-Identification demonstrated the advantage of our
method, and we took 5th place in the vehicle Re-ID track of AIC2020. The code
is available at https://github.com/gggcy/AIC2020_ReID.
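The abstract does not spell out how the per-network features are combined, so the snippet below is only a minimal sketch of one common fusion scheme (L2-normalize each backbone's embedding, concatenate, and rank the gallery by cosine similarity). The function names, feature dimensions, and the choice of plain concatenation are illustrative assumptions, not the authors' released implementation; see the linked repository for the actual code.

```python
import numpy as np

def fuse_features(per_model_feats):
    """Fuse embeddings from several backbones into one descriptor.

    per_model_feats: list of arrays, each of shape (num_images, dim_i).
    Each model's features are L2-normalized before concatenation so that
    no single backbone dominates the fused distance.
    """
    normed = []
    for feats in per_model_feats:
        norms = np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12
        normed.append(feats / norms)
    return np.concatenate(normed, axis=1)

def rank_gallery(query_feat, gallery_feats):
    """Return gallery indices sorted by descending cosine similarity."""
    q = query_feat / (np.linalg.norm(query_feat) + 1e-12)
    g = gallery_feats / (np.linalg.norm(gallery_feats, axis=1, keepdims=True) + 1e-12)
    sims = g @ q
    return np.argsort(-sims)

# Toy example: two hypothetical backbones producing 256-d and 512-d features.
rng = np.random.default_rng(0)
gallery = fuse_features([rng.normal(size=(100, 256)), rng.normal(size=(100, 512))])
query = fuse_features([rng.normal(size=(1, 256)), rng.normal(size=(1, 512))])[0]
print(rank_gallery(query, gallery)[:5])  # top-5 gallery indices for this query
```

In the paper's pipeline, each individual model is first strengthened with multi-loss training (commonly an ID classification loss combined with a metric loss such as triplet loss), filter grafting, and semi-supervised learning before its features enter the fusion.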
Related papers
- VehicleGAN: Pair-flexible Pose Guided Image Synthesis for Vehicle Re-identification [27.075761782915496]
This paper proposes to synthesize a large number of vehicle images in the target pose.
Considering that paired data of the same vehicles across different traffic surveillance cameras may not be available in the real world, we propose VehicleGAN.
Because of the feature distribution difference between real and synthetic data, we propose a new Joint Metric Learning (JML) via effective feature-level fusion.
arXiv Detail & Related papers (2023-11-27T19:34:04Z)
- Complete Solution for Vehicle Re-ID in Surround-view Camera System [10.10765191655754]
Vehicle re-identification (Re-ID) is a critical component of the autonomous driving perception system.
It is difficult to identify the same vehicle across image frames due to the unique construction of the fisheye camera.
Our approach combines state-of-the-art accuracy with real-time performance.
arXiv Detail & Related papers (2022-12-08T07:52:55Z)
- Enhanced Vehicle Re-identification for ITS: A Feature Fusion approach using Deep Learning [0.0]
Vehicle re-identification has gained interest in the domain of computer vision and robotics.
In this paper, a framework is developed to perform the re-identification of vehicles across CCTV cameras.
The framework is tested on a dataset that contains 81 unique vehicle identities observed across 20 CCTV cameras.
arXiv Detail & Related papers (2022-08-13T05:59:16Z)
- Discriminative-Region Attention and Orthogonal-View Generation Model for Vehicle Re-Identification [7.5366501970852955]
Multiple challenges hamper the applications of vision-based vehicle Re-ID methods.
The proposed DRA model can automatically extract the discriminative region features, which can distinguish similar vehicles.
The OVG model can generate multi-view features based on the input view features to reduce the impact of viewpoint mismatches.
arXiv Detail & Related papers (2022-04-28T07:46:03Z)
- Vehicle Re-identification Method Based on Vehicle Attribute and Mutual Exclusion Between Cameras [7.028589578216994]
We propose a vehicle attribute-guided method to re-rank vehicle Re-ID results.
The attributes used include vehicle orientation and vehicle brand.
Our method achieves an mAP of 63.73% and a rank-1 accuracy of 76.61% in the CVPR 2021 AI City Challenge. (A rough sketch of this style of attribute- and camera-constrained re-ranking appears after this list.)
arXiv Detail & Related papers (2021-04-30T10:11:46Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Pluggable Weakly-Supervised Cross-View Learning for Accurate Vehicle Re-Identification [53.6218051770131]
Cross-view consistent feature representation is key for accurate vehicle ReID.
Existing approaches resort to supervised cross-view learning using extensive extra viewpoint annotations.
We present a pluggable Weakly-supervised Cross-View Learning (WCVL) module for vehicle ReID.
arXiv Detail & Related papers (2021-03-09T11:51:09Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
- VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z)
- The Devil is in the Details: Self-Supervised Attention for Vehicle Re-Identification [75.3310894042132]
Self-supervised Attention for Vehicle Re-identification (SAVER) is a novel approach to effectively learn vehicle-specific discriminative features.
We show that SAVER improves upon the state-of-the-art on challenging VeRi, VehicleID, Vehicle-1M and VERI-Wild datasets.
arXiv Detail & Related papers (2020-04-14T02:24:47Z)
- Drone-based RGB-Infrared Cross-Modality Vehicle Detection via Uncertainty-Aware Learning [59.19469551774703]
Drone-based vehicle detection aims at finding the vehicle locations and categories in an aerial image.
We construct a large-scale drone-based RGB-Infrared vehicle detection dataset, termed DroneVehicle.
Our DroneVehicle collects 28,439 RGB-Infrared image pairs, covering urban roads, residential areas, parking lots, and other scenarios from day to night.
arXiv Detail & Related papers (2020-03-05T05:29:44Z)
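The attribute-guided re-ranking entry above only names the signals it uses (vehicle orientation, vehicle brand, and mutual exclusion between cameras), so the following is a rough, hypothetical sketch of how such constraints could be applied on top of an appearance distance matrix. The penalty values, the use of brand only, the interpretation of the camera constraint as a same-camera penalty, and the function name are all illustrative assumptions rather than that paper's actual method.

```python
import numpy as np

def attribute_rerank(dist, q_brand, g_brand, q_cam, g_cam,
                     brand_penalty=0.5, same_cam_penalty=10.0):
    """Re-rank a query-gallery distance matrix with attribute/camera priors.

    dist:    (num_query, num_gallery) appearance distances (smaller = closer).
    q_brand: (num_query,) predicted brand labels for queries.
    g_brand: (num_gallery,) predicted brand labels for gallery images.
    q_cam:   (num_query,) camera ids; g_cam: (num_gallery,) camera ids.
    Candidates with a mismatched brand get a fixed penalty, and candidates
    from the same camera as the query are pushed to the end of the ranking.
    """
    dist = dist.copy()
    brand_mismatch = q_brand[:, None] != g_brand[None, :]
    dist[brand_mismatch] += brand_penalty
    same_camera = q_cam[:, None] == g_cam[None, :]
    dist[same_camera] += same_cam_penalty
    return np.argsort(dist, axis=1)  # ranked gallery indices per query

# Toy example with 2 queries and 4 gallery images (all values hypothetical).
d = np.array([[0.2, 0.5, 0.4, 0.9],
              [0.7, 0.1, 0.3, 0.6]])
print(attribute_rerank(d,
                       q_brand=np.array([0, 1]), g_brand=np.array([0, 1, 1, 0]),
                       q_cam=np.array([3, 5]),   g_cam=np.array([3, 4, 5, 6])))
```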