Threat Detection In Self-Driving Vehicles Using Computer Vision
- URL: http://arxiv.org/abs/2209.02438v1
- Date: Tue, 6 Sep 2022 12:01:07 GMT
- Title: Threat Detection In Self-Driving Vehicles Using Computer Vision
- Authors: Umang Goenka, Aaryan Jagetia, Param Patil, Akshay Singh, Taresh
Sharma, Poonam Saini
- Abstract summary: We propose a threat detection mechanism for autonomous self-driving cars using dashcam videos.
There are four major components: YOLO to identify objects, an advanced lane detection algorithm, a multi-regression model to measure the distance of each object from the camera, and the two-second rule for assessing safety.
The final accuracy of our proposed Threat Detection Model (TDM) is 82.65%.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: On-road obstacle detection is an important field of research that falls in
the scope of intelligent transportation infrastructure systems. The use of
vision-based approaches results in an accurate and cost-effective solution to
such systems. In this research paper, we propose a threat detection mechanism
for autonomous self-driving cars that uses dashcam videos to detect any
unwanted obstacle on the road within the camera's visual range. This
information can assist the vehicle's program in routing safely. There are four
major components: YOLO to identify objects, an advanced lane detection
algorithm, a multi-regression model to measure the distance of each object
from the camera, and the two-second rule, together with a speed limit check,
for assessing safety. In addition, we have used the Car Crash Dataset (CCD) to
calculate the accuracy of the model. The YOLO algorithm gives an accuracy of
around 93%. The final accuracy of our proposed Threat Detection Model (TDM) is 82.65%.
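The safety components above combine an estimated object distance with the vehicle's speed. A minimal sketch of how the two-second rule and a speed limit check could be combined — the function name, thresholds, and units are assumptions for illustration, not the paper's exact implementation:

```python
def is_threat(distance_m: float, speed_mps: float,
              speed_limit_mps: float = 33.3) -> bool:
    """Flag an object as a threat when the gap to it is less than the
    distance covered in two seconds of travel, or when the vehicle
    exceeds the speed limit. Hypothetical helper for illustration."""
    if speed_mps > speed_limit_mps:
        return True  # speeding is treated as a threat on its own
    two_second_gap = 2.0 * speed_mps  # distance travelled in two seconds
    return distance_m < two_second_gap
```

For example, at 10 m/s an object 10 m ahead falls inside the 20 m two-second gap and would be flagged, while the same object at 50 m would not.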
Related papers
- Vehicle Safety Management System [0.0]
This study suggests a real-time overtaking assistance system that combines the You Only Look Once (YOLO) object detection algorithm for object identification with stereo vision techniques for distance analysis.
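Stereo-vision distance analysis typically rests on the standard rectified-pair relation Z = f · B / d (depth from focal length, baseline, and disparity). A sketch of that textbook relation — not code from the paper, and the parameter values below are illustrative:

```python
def stereo_depth(focal_px: float, baseline_m: float,
                 disparity_px: float) -> float:
    """Depth of a point from a rectified stereo pair: Z = f * B / d,
    where f is the focal length in pixels, B the camera baseline in
    metres, and d the disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

With a 700 px focal length, a 0.5 m baseline, and a 35 px disparity, this gives a depth of 10 m.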
arXiv Detail & Related papers (2023-04-16T16:15:25Z) - Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts,
Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z) - A Real-Time Wrong-Way Vehicle Detection Based on YOLO and Centroid
Tracking [0.0]
Wrong-way driving is one of the main causes of road accidents and traffic jams all over the world.
In this paper, we propose an automatic wrong-way vehicle detection system from on-road surveillance camera footage.
arXiv Detail & Related papers (2022-10-19T00:53:28Z) - A Quality Index Metric and Method for Online Self-Assessment of
Autonomous Vehicles Sensory Perception [164.93739293097605]
We propose a novel evaluation metric, named as the detection quality index (DQI), which assesses the performance of camera-based object detection algorithms.
We have developed a superpixel-based attention network (SPA-NET) that utilizes raw image pixels and superpixels as input to predict the proposed DQI evaluation metric.
arXiv Detail & Related papers (2022-03-04T22:16:50Z) - CFTrack: Center-based Radar and Camera Fusion for 3D Multi-Object
Tracking [9.62721286522053]
We propose an end-to-end network for joint object detection and tracking based on radar and camera sensor fusion.
Our proposed method uses a center-based radar-camera fusion algorithm for object detection and utilizes a greedy algorithm for object association.
We evaluate our method on the challenging nuScenes dataset, where it achieves 20.0 AMOTA and outperforms all vision-based 3D tracking methods in the benchmark.
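Greedy association of this kind typically matches track and detection centers in ascending order of distance until a gate is exceeded. A generic sketch of greedy center-distance association — illustrative only, not CFTrack's actual code:

```python
import math


def greedy_associate(tracks, detections, max_dist):
    """Greedily pair track centers with detection centers, closest
    pairs first, skipping any index already matched and stopping once
    the remaining distances exceed the gating threshold max_dist."""
    pairs = sorted(
        (math.dist(t, d), ti, di)
        for ti, t in enumerate(tracks)
        for di, d in enumerate(detections)
    )
    used_t, used_d, matches = set(), set(), []
    for dist, ti, di in pairs:
        if dist > max_dist:
            break  # all later pairs are farther; stop
        if ti in used_t or di in used_d:
            continue  # one endpoint already matched
        matches.append((ti, di))
        used_t.add(ti)
        used_d.add(di)
    return matches
```

Greedy association trades the optimality of Hungarian matching for simplicity and speed, which is usually acceptable when detections are well separated.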
arXiv Detail & Related papers (2021-07-11T23:56:53Z) - Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object
Detection [55.12894776039135]
State-of-the-art 3D object detectors, based on deep learning, have shown promising accuracy but are prone to over-fit to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that fine-tuning the detector on these pseudo-labels substantially reduces the domain gap to new driving environments.
arXiv Detail & Related papers (2021-03-26T01:18:11Z) - Vehicle trajectory prediction in top-view image sequences based on deep
learning method [1.181206257787103]
Estimating and predicting surrounding vehicles' movement is essential for an automated vehicle and advanced safety systems.
A model with low computational complexity is proposed, trained on aerial images of the road.
The proposed model can predict a vehicle's future path on any freeway using only images of the movement history of the target vehicle and its neighbors.
arXiv Detail & Related papers (2021-02-02T20:48:19Z) - Computer Vision based Accident Detection for Autonomous Vehicles [0.0]
We propose a novel support system for self-driving cars that detects vehicular accidents through a dashboard camera.
The framework has been tested on a custom dataset of dashcam footage and achieves a high accident detection rate while maintaining a low false alarm rate.
arXiv Detail & Related papers (2020-12-20T08:51:10Z) - Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data
Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z) - Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z) - Road Curb Detection and Localization with Monocular Forward-view Vehicle
Camera [74.45649274085447]
We propose a robust method for estimating road curb 3D parameters using a calibrated monocular camera equipped with a fisheye lens.
Our approach is able to estimate the vehicle-to-curb distance in real time with a mean accuracy of more than 90%.
arXiv Detail & Related papers (2020-02-28T00:24:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.