Enhanced Vehicle Re-identification for ITS: A Feature Fusion approach using Deep Learning
- URL: http://arxiv.org/abs/2208.06579v1
- Date: Sat, 13 Aug 2022 05:59:16 GMT
- Title: Enhanced Vehicle Re-identification for ITS: A Feature Fusion approach using Deep Learning
- Authors: Ashutosh Holla B, Manohara Pai M.M, Ujjwal Verma, Radhika M. Pai
- Abstract summary: Vehicle re-identification has gained interest in the domain of computer vision and robotics.
In this paper, a framework is developed to perform the re-identification of vehicles across CCTV cameras.
The framework is tested on a dataset that contains 81 unique vehicle identities observed across 20 CCTV cameras.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, robust Intelligent Transportation Systems (ITS) have been developed across the globe to improve traffic efficiency by reducing frequent traffic problems. As an application of ITS, vehicle re-identification has gained considerable interest in the computer vision and robotics communities. Convolutional neural network (CNN) based methods have been developed to perform vehicle re-identification and to address key challenges such as occlusion, illumination change, and scale variation. The advancement of transformers in computer vision has opened an opportunity to further enhance re-identification performance. In this paper, a framework is developed to perform the re-identification of vehicles across CCTV cameras. To perform re-identification, the proposed framework fuses the vehicle representations learned by a CNN and a transformer model. The framework is tested on a dataset that contains 81 unique vehicle identities observed across 20 CCTV cameras. In the experiments, the fused vehicle re-identification framework yields an mAP of 61.73%, significantly better than either the standalone CNN or the transformer model.
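The paper does not specify the fusion scheme in the abstract; as a minimal sketch, the fusion step could be a concatenation of the per-model embeddings, with each embedding L2-normalized first so that neither model's feature magnitudes dominate. The function names and the concatenation choice below are illustrative assumptions, not the authors' implementation.

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length so both models contribute
    comparably to the fused representation."""
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def fuse(cnn_feat, transformer_feat):
    """Fuse two embeddings by normalized concatenation.
    (Assumed scheme; the paper only states that the two
    representations are fused.)"""
    return l2_normalize(cnn_feat) + l2_normalize(transformer_feat)

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_gallery(query, gallery):
    """Rank gallery vehicle IDs by similarity to the query,
    as a re-identification step would."""
    return sorted(gallery, key=lambda vid: -cosine(query, gallery[vid]))
```

In use, a query image seen at one CCTV camera would be embedded by both models, fused, and matched against a gallery of fused embeddings from the other cameras; the top-ranked identity is the re-identification hypothesis.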
Related papers
- Passenger hazard perception based on EEG signals for highly automated driving vehicles [23.322910031715583]
This study explores neural mechanisms in passenger-vehicle interactions, leading to the development of a Passenger Cognitive Model (PCM) and a Passenger EEG Decoding Strategy (PEDS).
Central to PEDS is a novel Convolutional Recurrent Neural Network (CRNN) that captures spatial and temporal EEG data patterns.
Our findings highlight the predictive power of pre-event EEG data, enhancing the detection of hazardous scenarios and offering a network-driven framework for safer autonomous vehicles.
arXiv Detail & Related papers (2024-08-29T07:32:30Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Spatial-temporal Vehicle Re-identification [3.7748602100709534]
We propose a spatial-temporal vehicle ReID framework that estimates reliable camera network topology.
Based on the proposed methods, our approach achieves superior performance on the public VeRi776 dataset with 99.64% rank-1 accuracy.
arXiv Detail & Related papers (2023-09-03T13:07:38Z) - Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work surveys the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z) - Efficient Federated Learning with Spike Neural Networks for Traffic Sign
Recognition [70.306089187104]
We introduce powerful Spike Neural Networks (SNNs) into traffic sign recognition for energy-efficient and fast model training.
Numerical results indicate that the proposed federated SNN outperforms traditional federated convolutional neural networks in accuracy, noise immunity, and energy efficiency.
arXiv Detail & Related papers (2022-05-28T03:11:48Z) - Vision-Cloud Data Fusion for ADAS: A Lane Change Prediction Case Study [38.65843674620544]
We introduce a novel vision-cloud data fusion methodology, integrating camera image and Digital Twin information from the cloud to help intelligent vehicles make better decisions.
A case study on lane change prediction is conducted to show the effectiveness of the proposed data fusion methodology.
arXiv Detail & Related papers (2021-12-07T23:42:21Z) - Vehicle Re-identification Method Based on Vehicle Attribute and Mutual Exclusion Between Cameras [7.028589578216994]
We propose a vehicle attribute-guided method to re-rank vehicle Re-ID results.
The attributes used include vehicle orientation and vehicle brand.
Our method achieves an mAP of 63.73% and a rank-1 accuracy of 76.61% in the CVPR 2021 AI City Challenge.
arXiv Detail & Related papers (2021-04-30T10:11:46Z) - Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z) - Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z) - Vehicle Re-Identification Based on Complementary Features [18.633637024602795]
The purpose of vehicle Re-ID is to retrieve the same vehicle as it appears across multiple cameras.
It could make a great contribution to Intelligent Traffic Systems (ITS) and smart cities.
Our method fuses features extracted from different networks in order to take advantage of each network and obtain complementary features.
arXiv Detail & Related papers (2020-05-09T15:24:51Z) - VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.