Robust and Fast Vehicle Detection using Augmented Confidence Map
- URL: http://arxiv.org/abs/2304.14462v1
- Date: Thu, 27 Apr 2023 18:41:16 GMT
- Title: Robust and Fast Vehicle Detection using Augmented Confidence Map
- Authors: Hamam Mokayed and Palaiahnakote Shivakumara and Lama Alkhaled and
Rajkumar Saini and Muhammad Zeshan Afzal and Yan Chai Hum and Marcus Liwicki
- Abstract summary: We introduce the concept of augmentation, which highlights the regions of interest containing the vehicles.
The output of MR-MSER is supplied to a fast CNN to generate a confidence map.
Unlike existing methods that rely on complex models for vehicle detection, we explore the combination of rough-set and fuzzy-based models.
- Score: 10.261351772602543
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vehicle detection in real-time scenarios is challenging because of the time
constraints and the presence of multiple types of vehicles with different
speeds, shapes, structures, etc. This paper presents a new method that relies on
generating a confidence map for robust and fast vehicle detection. To reduce
the adverse effect of different speeds, shapes, structures, and the presence of
several vehicles in a single image, we introduce the concept of augmentation,
which highlights the region of interest containing the vehicles. The augmented
map is generated by exploring the combination of multiresolution analysis and
maximally stable extremal regions (MR-MSER). The output of MR-MSER is supplied
to a fast CNN to generate a confidence map, which yields candidate regions.
Furthermore, unlike existing methods that rely on complex models for vehicle
detection, we explore the combination of rough-set and fuzzy-based models for
robust vehicle detection. To show the effectiveness of the proposed
method, we conduct experiments on our dataset captured by drones and on several
vehicle detection benchmark datasets, namely, KITTI and UA-DETRAC. The results
on our dataset and the benchmark datasets show that the proposed method
outperforms the existing methods in terms of time efficiency and achieves a
good detection rate.
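As a rough illustration of how an augmented map of this kind can be built, the sketch below runs OpenCV's MSER detector at several resolutions and accumulates the detected regions into one map; the function name, the scale set, and the 0.5 threshold are our assumptions, not details from the paper.

```python
import cv2
import numpy as np

def augmented_map(gray: np.ndarray, scales=(1.0, 0.5, 0.25)) -> np.ndarray:
    """Accumulate MSER regions detected at multiple resolutions."""
    h, w = gray.shape
    acc = np.zeros((h, w), dtype=np.float32)
    mser = cv2.MSER_create()
    for s in scales:
        small = cv2.resize(gray, (int(w * s), int(h * s)))
        regions, _ = mser.detectRegions(small)
        mask = np.zeros(small.shape, dtype=np.float32)
        for pts in regions:                       # pts: (n, 2) array of x, y
            mask[pts[:, 1], pts[:, 0]] = 1.0
        # map the regions back to full resolution and accumulate
        acc += cv2.resize(mask, (w, h), interpolation=cv2.INTER_NEAREST)
    return acc / acc.max() if acc.max() > 0 else acc

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
candidates = augmented_map(gray) > 0.5            # rough input for a fast CNN
```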
Related papers
- DRUformer: Enhancing the driving scene Important object detection with
driving relationship self-understanding [50.81809690183755]
Traffic accidents frequently lead to fatal injuries and have contributed to over 50 million deaths as of 2023.
Previous research primarily assessed the importance of individual participants, treating them as independent entities.
We introduce Driving scene Relationship self-Understanding transformer (DRUformer) to enhance the important object detection task.
arXiv Detail & Related papers (2023-11-11T07:26:47Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal
Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
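As a hedged sketch of what spatially quantized historical features can look like, the snippet below bins accumulated past-traversal points into a fixed ground-plane grid; the 0.5 m cell size, 50 m extent, and log compression are illustrative choices, not the paper's.

```python
import numpy as np

def quantize_history(points: np.ndarray, cell=0.5, extent=50.0) -> np.ndarray:
    """points: (N, 3) x/y/z in metres -> (H, W) per-cell hit counts around the ego."""
    bins = int(2 * extent / cell)
    ix = np.clip(((points[:, 0] + extent) / cell).astype(int), 0, bins - 1)
    iy = np.clip(((points[:, 1] + extent) / cell).astype(int), 0, bins - 1)
    grid = np.zeros((bins, bins), dtype=np.float32)
    np.add.at(grid, (iy, ix), 1.0)     # accumulate point counts per cell
    return np.log1p(grid)              # compress the dynamic range

history = np.random.randn(10_000, 3) * 20.0   # stand-in for past traversals
feature_map = quantize_history(history)
```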
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- A novel efficient Multi-view traffic-related object detection framework [17.50049841016045]
We propose a novel traffic-related object detection framework named CEVAS that achieves efficient detection using multi-view video data.
Results show that our framework significantly reduces response latency while achieving the same detection accuracy as the state-of-the-art methods.
arXiv Detail & Related papers (2023-02-23T06:42:37Z)
- STC-IDS: Spatial-Temporal Correlation Feature Analyzing based Intrusion
Detection System for Intelligent Connected Vehicles [7.301018758489822]
We present a novel model for automotive intrusion detection based on spatial-temporal correlation features of in-vehicle communication traffic (STC-IDS).
Specifically, the proposed model exploits an encoding-detection architecture. In the encoder part, spatial and temporal relations are encoded simultaneously.
The encoded information is then passed to the detector, which generates salient spatial-temporal attention features and enables anomaly classification.
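The encoding-detection pattern can be sketched in a few lines of PyTorch; every layer choice and dimension below is invented for illustration and is not the STC-IDS architecture.

```python
import torch
import torch.nn as nn

class EncoderDetector(nn.Module):
    def __init__(self, n_features=16, d_model=32):
        super().__init__()
        self.spatial = nn.Linear(n_features, d_model)        # per-message encoding
        self.temporal = nn.GRU(d_model, d_model, batch_first=True)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(d_model, 2)                    # normal vs. anomaly

    def forward(self, x):                  # x: (batch, time, n_features)
        h = torch.relu(self.spatial(x))    # spatial relations
        h, _ = self.temporal(h)            # temporal relations
        h, _ = self.attn(h, h, h)          # spatial-temporal attention features
        return self.head(h.mean(dim=1))    # window-level classification

logits = EncoderDetector()(torch.randn(8, 64, 16))
```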
arXiv Detail & Related papers (2022-04-23T04:22:58Z)
- Aerial Images Meet Crowdsourced Trajectories: A New Approach to Robust
Road Extraction [110.61383502442598]
We introduce a novel neural network framework termed Cross-Modal Message Propagation Network (CMMPNet).
CMMPNet is composed of two deep Auto-Encoders for modality-specific representation learning and a tailor-designed Dual Enhancement Module for cross-modal representation refinement.
Experiments on three real-world benchmarks demonstrate the effectiveness of our CMMPNet for robust road extraction.
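A loose sketch of the dual-branch idea, with tiny stand-ins for the modality-specific auto-encoders and a 1x1 convolution standing in for the Dual Enhancement Module (the real components are considerably more elaborate):

```python
import torch
import torch.nn as nn

def branch():                              # stand-in for a modality-specific auto-encoder
    return nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 8, 3, padding=1), nn.ReLU())

class DualBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.img_enc, self.traj_enc = branch(), branch()
        self.fuse = nn.Conv2d(16, 1, 1)    # stand-in for cross-modal refinement

    def forward(self, aerial, traj_map):
        f_img, f_trj = self.img_enc(aerial), self.traj_enc(traj_map)
        # concatenate both modalities and fuse them into a road-probability mask
        return torch.sigmoid(self.fuse(torch.cat([f_img, f_trj], dim=1)))

road_mask = DualBranch()(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
```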
arXiv Detail & Related papers (2021-11-30T04:30:10Z)
- Multi-Stream Attention Learning for Monocular Vehicle Velocity and
Inter-Vehicle Distance Estimation [25.103483428654375]
Vehicle velocity and inter-vehicle distance estimation are essential for advanced driver-assistance systems (ADAS) and autonomous vehicles.
Recent studies focus on using a low-cost monocular camera to perceive the environment around the vehicle in a data-driven fashion.
MSANet is proposed to extract different aspects of features, e.g., spatial and contextual features, for joint vehicle velocity and inter-vehicle distance estimation.
arXiv Detail & Related papers (2021-10-22T06:14:12Z)
- Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
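The two-step recipe is straightforward to picture in code. Below, step one is abstracted as a short history of tracked boxes (any off-the-shelf tracker can supply these), and step two is a small regressor; the box encoding and network shape are our assumptions.

```python
import torch
import torch.nn as nn

class VelocityRegressor(nn.Module):
    def __init__(self, history=8):
        super().__init__()
        # input: (x, y, w, h) per frame, flattened over the track history
        self.net = nn.Sequential(nn.Linear(4 * history, 64), nn.ReLU(),
                                 nn.Linear(64, 1))          # speed in m/s

    def forward(self, boxes):              # boxes: (batch, history, 4)
        return self.net(boxes.flatten(1))

boxes = torch.rand(2, 8, 4)                # stand-in for tracker output
speed = VelocityRegressor()(boxes)
```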
arXiv Detail & Related papers (2021-09-16T13:10:27Z)
- Dual-Modality Vehicle Anomaly Detection via Bilateral Trajectory Tracing [42.03797195839054]
We propose a dual-modality modularized methodology for the robust detection of abnormal vehicles.
For the vehicle detection and tracking module, we adopted YOLOv5 and multi-scale tracking to localize the anomalies.
Experiments conducted on the Track 4 test set of the NVIDIA 2021 AI City Challenge yielded an F1-score of 0.9302 and a root mean square error (RMSE) of 3.4039.
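The detection stage named above is available off the shelf; the snippet below loads YOLOv5 via torch.hub and filters for cars, while the paper's multi-scale tracking and anomaly logic are not reproduced here.

```python
import torch

# downloads the model definition and weights on first use
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
results = model("frame.jpg")               # accepts paths, arrays, or tensors
boxes = results.xyxy[0]                    # (n, 6): x1, y1, x2, y2, conf, class
cars = boxes[boxes[:, 5] == 2]             # COCO class 2 is "car"
```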
arXiv Detail & Related papers (2021-06-09T12:04:25Z)
- Object Detection and Tracking Algorithms for Vehicle Counting: A
Comparative Analysis [3.093890460224435]
The authors deploy several state-of-the-art object detection and tracking algorithms to detect and track different classes of vehicles.
Model combinations are validated and compared against manually counted ground truths from over 9 hours of traffic video data.
Results demonstrate that the combination of CenterNet and Deep SORT, Detectron2 and Deep SORT, and YOLOv4 and Deep SORT produced the best overall counting percentage for all vehicles.
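Whichever detector-tracker pair is used, the counting step itself reduces to bookkeeping over track IDs. A minimal sketch, assuming centroid tracks and a virtual horizontal count line (both our assumptions):

```python
COUNT_LINE_Y = 400                          # assumed image-space count line

def update_counts(tracks, last_y, counted, counts):
    """tracks: iterable of (track_id, cx, cy, cls) for the current frame."""
    for tid, cx, cy, cls in tracks:
        prev = last_y.get(tid)
        if prev is not None and prev < COUNT_LINE_Y <= cy and tid not in counted:
            counted.add(tid)                # count each vehicle exactly once
            counts[cls] = counts.get(cls, 0) + 1
        last_y[tid] = cy
    return counts

state = ({}, set(), {})                     # per-track y, counted IDs, class tallies
update_counts([(1, 320, 390, "car")], *state)
counts = update_counts([(1, 322, 410, "car")], *state)   # crosses the line -> counted
```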
arXiv Detail & Related papers (2020-07-31T17:49:27Z)
- End-to-end Learning for Inter-Vehicle Distance and Relative Velocity
Estimation in ADAS with a Monocular Camera [81.66569124029313]
We propose a camera-based inter-vehicle distance and relative velocity estimation method based on end-to-end training of a deep neural network.
The key novelty of our method is the integration of multiple visual cues provided by any two time-consecutive monocular frames.
We also propose a vehicle-centric sampling mechanism to alleviate the effect of perspective distortion in the motion field.
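A toy rendering of the two-frame idea: stack two consecutive vehicle-centric crops along the channel axis and regress distance and relative velocity jointly. Every layer size below is invented for illustration.

```python
import torch
import torch.nn as nn

class TwoFrameNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(6, 16, 3, stride=2, padding=1), nn.ReLU(),   # two RGB frames
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, 2)        # [distance, relative velocity]

    def forward(self, frame_t0, frame_t1):
        return self.head(self.backbone(torch.cat([frame_t0, frame_t1], dim=1)))

out = TwoFrameNet()(torch.randn(1, 3, 128, 128), torch.randn(1, 3, 128, 128))
```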
arXiv Detail & Related papers (2020-06-07T08:18:31Z)
- VehicleNet: Learning Robust Visual Representation for Vehicle
Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
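A schematic of a two-stage progressive recipe of this kind; the backbone, identity count, learning rates, and epoch counts are placeholders, and the data loaders are left as stand-ins.

```python
import torch
import torchvision

model = torchvision.models.resnet50(num_classes=575)   # assumed identity count

def run_stage(model, loader, lr, epochs):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, ids in loader:
            opt.zero_grad()
            loss_fn(model(images), ids).backward()
            opt.step()

# stage 1: train on the merged source datasets; stage 2: fine-tune on the target set
# run_stage(model, vehiclenet_loader, lr=0.01, epochs=12)
# run_stage(model, target_loader, lr=0.001, epochs=8)
```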
arXiv Detail & Related papers (2020-04-14T05:06:38Z)