Nighttime Driver Behavior Prediction Using Taillight Signal Recognition via CNN-SVM Classifier
- URL: http://arxiv.org/abs/2310.16706v1
- Date: Wed, 25 Oct 2023 15:23:33 GMT
- Title: Nighttime Driver Behavior Prediction Using Taillight Signal Recognition via CNN-SVM Classifier
- Authors: Amir Hossein Barshooi and Elmira Bagheri
- Abstract summary: This paper aims to enhance the ability to predict nighttime driving behavior by identifying taillights of both human-driven and autonomous vehicles.
The proposed model incorporates a customized detector designed to accurately detect front-vehicle taillights on the road.
To address the limited nighttime data, a unique pixel-wise image processing technique is implemented to convert daytime images into realistic night images.
- Score: 2.44755919161855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper aims to enhance the ability to predict nighttime driving behavior
by identifying taillights of both human-driven and autonomous vehicles. The
proposed model incorporates a customized detector designed to accurately detect
front-vehicle taillights on the road. At the beginning of the detector, a
learnable pre-processing block is implemented, which extracts deep features
from input images and calculates the data rarity for each feature. In the next
step, drawing inspiration from soft attention, a weighted binary mask is
designed that guides the model to focus more on predetermined regions. This
research utilizes Convolutional Neural Networks (CNNs) to extract
distinguishing characteristics from these areas, then reduces dimensions using
Principal Component Analysis (PCA). Finally, the Support Vector Machine (SVM)
is used to predict the behavior of the vehicles. To train and evaluate the
model, a large-scale dataset is collected from two types of dash-cams and
Insta360 cameras from the rear view of Ford Motor Company vehicles. This
dataset includes over 12k frames captured during both daytime and nighttime
hours. To address the limited nighttime data, a unique pixel-wise image
processing technique is implemented to convert daytime images into realistic
night images. The findings from the experiments demonstrate that the proposed
methodology can accurately categorize vehicle behavior with 92.14% accuracy,
97.38% specificity, 92.09% sensitivity, 92.10% F1-measure, and 0.895 Cohen's
Kappa Statistic. Further details are available at
https://github.com/DeepCar/Taillight_Recognition.
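The classification stage described in the abstract (CNN features reduced by PCA, then classified by an SVM) can be sketched as follows. This is a minimal illustration of the PCA step only, using NumPy; the random array stands in for CNN-extracted taillight features, and all shapes (200 crops, 512-D features, 64 components) are assumptions, not values from the paper. In practice the reduced features would then be fed to an SVM classifier (e.g. scikit-learn's `SVC`).

```python
import numpy as np

def pca_reduce(features, n_components):
    """Reduce feature vectors to n_components dimensions via PCA (SVD).

    features: (n_samples, n_features) array, e.g. flattened CNN descriptors.
    Returns (reduced, components, mean) so that new samples can be
    projected with (x - mean) @ components.T before SVM inference.
    """
    mean = features.mean(axis=0)
    centered = features - mean
    # SVD of the centered data; rows of vt are the principal directions,
    # already sorted by decreasing explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    reduced = centered @ components.T
    return reduced, components, mean

# Hypothetical stand-in for CNN features of detected taillight regions.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 512))  # 200 crops, 512-D CNN features
reduced, components, mean = pca_reduce(feats, n_components=64)
print(reduced.shape)  # (200, 64)
```

Keeping `components` and `mean` matters: the same projection fitted on the training set must be reused at inference time, otherwise the SVM sees features in a different basis.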
Related papers
- Vehicle Trajectory Prediction on Highways Using Bird Eye View Representations and Deep Learning [0.5420492913071214]
This work presents a novel method for predicting vehicle trajectories in highway scenarios using efficient bird's eye view representations and convolutional neural networks.
The U-net model has been selected as the prediction kernel to generate future visual representations of the scene using an image-to-image regression approach.
A method has been implemented to extract vehicle positions from the generated graphical representations to achieve subpixel resolution.
arXiv Detail & Related papers (2022-07-04T13:39:46Z)
- Monocular Vision-based Prediction of Cut-in Maneuvers with LSTM Networks [0.0]
This study proposes a method to predict potentially dangerous cut-in maneuvers happening in the ego lane.
We follow a computer vision-based approach that only employs a single in-vehicle RGB camera.
Our algorithm consists of a CNN-based vehicle detection and tracking step and an LSTM-based maneuver classification step.
arXiv Detail & Related papers (2022-03-21T02:30:36Z)
- Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
arXiv Detail & Related papers (2021-09-16T13:10:27Z)
- SODA10M: Towards Large-Scale Object Detection Benchmark for Autonomous Driving [94.11868795445798]
We release a Large-Scale Object Detection benchmark for Autonomous driving, named as SODA10M, containing 10 million unlabeled images and 20K images labeled with 6 representative object categories.
To improve diversity, images are collected at a rate of one frame every ten seconds across 32 different cities, under varying weather conditions, time periods, and location scenes.
We provide extensive experiments and deep analyses of existing supervised state-of-the-art detection models, popular self-supervised and semi-supervised approaches, and some insights about how to develop future models.
arXiv Detail & Related papers (2021-06-21T13:55:57Z)
- One Million Scenes for Autonomous Driving: ONCE Dataset [91.94189514073354]
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z)
- 2nd Place Solution for Waymo Open Dataset Challenge - Real-time 2D Object Detection [26.086623067939605]
In this report, we introduce a real-time method to detect the 2D objects from images.
We leverage TensorRT to optimize the inference time of our detection pipeline.
Our framework achieves the latency of 45.8ms/frame on an Nvidia Tesla V100 GPU.
arXiv Detail & Related papers (2021-06-16T11:32:03Z)
- Data-driven vehicle speed detection from synthetic driving simulator images [0.440401067183266]
We explore the use of synthetic images generated from a driving simulator to address vehicle speed detection.
We generate thousands of images with variability corresponding to multiple speeds, different vehicle types and colors, and lighting and weather conditions.
Two different approaches to map the sequence of images to an output speed (regression) are studied, including CNN-GRU and 3D-CNN.
arXiv Detail & Related papers (2021-04-20T11:26:13Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
- PerMO: Perceiving More at Once from a Single Image for Autonomous Driving [76.35684439949094]
We present a novel approach to detect, segment, and reconstruct complete textured 3D models of vehicles from a single image.
Our approach combines the strengths of deep learning and the elegance of traditional techniques.
We have integrated these algorithms with an autonomous driving system.
arXiv Detail & Related papers (2020-07-16T05:02:45Z)
- Vehicle Position Estimation with Aerial Imagery from Unmanned Aerial Vehicles [4.555256739812733]
This work describes a process to estimate a precise vehicle position from aerial imagery.
The state-of-the-art deep neural network Mask-RCNN is applied for that purpose.
A mean accuracy of 20 cm can be achieved with flight altitudes up to 100 m, Full-HD resolution and a frame-by-frame detection.
arXiv Detail & Related papers (2020-04-17T12:29:40Z)
- Road Curb Detection and Localization with Monocular Forward-view Vehicle Camera [74.45649274085447]
We propose a robust method for estimating road curb 3D parameters using a calibrated monocular camera equipped with a fisheye lens.
Our approach is able to estimate the vehicle to curb distance in real time with mean accuracy of more than 90%.
arXiv Detail & Related papers (2020-02-28T00:24:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.