Visual Saliency Detection in Advanced Driver Assistance Systems
- URL: http://arxiv.org/abs/2308.03770v1
- Date: Wed, 26 Jul 2023 15:41:54 GMT
- Title: Visual Saliency Detection in Advanced Driver Assistance Systems
- Authors: Francesco Rundo, Michael Sebastian Rundo, Concetto Spampinato
- Abstract summary: We present an intelligent system that combines a drowsiness detection system for drivers with a scene comprehension pipeline based on saliency.
We employ an innovative biosensor embedded on the car steering wheel to monitor the driver.
A dedicated 1D temporal deep convolutional network has been devised to classify the collected PPG time-series.
- Score: 7.455416595124159
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visual Saliency refers to the innate human mechanism of focusing on and
extracting important features from the observed environment. Recently, there
has been a notable surge of interest in the field of automotive research
regarding the estimation of visual saliency. While operating a vehicle, drivers
naturally direct their attention towards specific objects, employing
brain-driven saliency mechanisms that prioritize certain elements over others.
In this investigation, we present an intelligent system that combines a
drowsiness detection system for drivers with a scene comprehension pipeline
based on saliency. To achieve this, we have implemented a specialized 3D deep
network for semantic segmentation, which has been pretrained and tailored for
processing the frames captured by an automotive-grade external camera. The
proposed pipeline was hosted on an embedded platform utilizing the STA1295
core, featuring dual ARM A7 cores and an embedded hardware accelerator.
Additionally, we employ an innovative biosensor embedded on the car steering
wheel to monitor driver drowsiness, gathering the PhotoPlethysmoGraphy
(PPG) signal of the driver. A dedicated 1D temporal deep convolutional network
has been devised to classify the collected PPG time-series, enabling us to
assess the driver's level of attentiveness. Ultimately, we compare the determined
attention level of the driver with the corresponding saliency-based scene
classification to evaluate the overall safety level. The efficacy of the
proposed pipeline has been validated through extensive experimental results.
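The abstract describes a dedicated 1D temporal deep convolutional network that classifies the driver's PPG time-series into attentiveness levels. The paper's exact architecture is not reproduced here; the following is a minimal NumPy sketch of the general idea only (a 1D convolution over the PPG signal, followed by global pooling and a logistic classifier), where every layer size, kernel width, and weight value is an illustrative assumption rather than the authors' design.

```python
import numpy as np

def conv1d(x, kernels, bias):
    """Valid 1D convolution: x (T,), kernels (C, K), bias (C,) -> (C, T-K+1)."""
    C, K = kernels.shape
    windows = np.lib.stride_tricks.sliding_window_view(x, K)  # (T-K+1, K)
    return kernels @ windows.T + bias[:, None]                # (C, T-K+1)

def classify_ppg(signal, kernels, bias, w, b):
    """Toy pipeline: conv -> ReLU -> global average pool -> logistic score."""
    feat = np.maximum(conv1d(signal, kernels, bias), 0.0)     # (C, T-K+1)
    pooled = feat.mean(axis=1)                                # (C,)
    # Sigmoid output interpreted as a (hypothetical) drowsiness probability.
    return 1.0 / (1.0 + np.exp(-(w @ pooled + b)))

# Synthetic PPG-like signal and random (untrained) weights, for shape checking only.
rng = np.random.default_rng(0)
ppg = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.standard_normal(256)
kernels = rng.standard_normal((4, 9)) * 0.1
bias = np.zeros(4)
w = rng.standard_normal(4)
p = classify_ppg(ppg, kernels, bias, w, 0.0)
print("drowsiness probability:", p)
```

In a real deployment the weights would of course be trained on labeled PPG recordings, and the scalar output would then be compared against the saliency-based scene classification to derive the overall safety level described above.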
Related papers
- Classification of Safety Driver Attention During Autonomous Vehicle
Operation [11.33083039877258]
This paper introduces a dual-source approach integrating data from an infrared camera facing the vehicle operator and vehicle perception systems.
The proposed system effectively determines a metric for the attention levels of the vehicle operator, enabling interventions such as warnings or reducing autonomous functionality as appropriate.
arXiv Detail & Related papers (2023-10-17T22:04:42Z) - Car-Driver Drowsiness Assessment through 1D Temporal Convolutional
Networks [7.455416595124159]
Recently, the scientific progress of Advanced Driver Assistance System solutions has played a key role in enhancing the overall safety of driving.
Recent reports confirmed a rising number of accidents caused by drowsiness or lack of attentiveness.
This integrated system enables near real-time classification of driver drowsiness, yielding remarkable accuracy levels of approximately 96%.
arXiv Detail & Related papers (2023-07-27T10:59:12Z) - Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts,
Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z) - FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
Nonobjective driving experience is difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN feature and Transformer feature that are extracted from the input image to predict driver attention.
arXiv Detail & Related papers (2022-12-05T08:25:09Z) - 3D Object Detection for Autonomous Driving: A Comprehensive Survey [48.30753402458884]
3D object detection, which intelligently predicts the locations, sizes, and categories of the critical 3D objects near an autonomous vehicle, is an important part of a perception system.
This paper reviews the advances in 3D object detection for autonomous driving.
arXiv Detail & Related papers (2022-06-19T19:43:11Z) - Where and What: Driver Attention-based Object Detection [13.5947650184579]
We bridge the gap between pixel-level and object-level attention prediction.
Our framework achieves competitive state-of-the-art performance at both the pixel and object levels.
arXiv Detail & Related papers (2022-04-26T08:38:22Z) - The Multimodal Driver Monitoring Database: A Naturalistic Corpus to
Study Driver Attention [44.94118128276982]
A smart vehicle should be able to monitor the actions and behaviors of the human driver to provide critical warnings or intervene when necessary.
Recent advancements in deep learning and computer vision have shown great promise in monitoring human behaviors and activities.
A vast amount of in-domain data is required to train models that provide high performance in predicting driving related tasks.
arXiv Detail & Related papers (2020-12-23T16:37:17Z) - Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data
Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z) - Driver Drowsiness Classification Based on Eye Blink and Head Movement
Features Using the k-NN Algorithm [8.356765961526955]
This work extends driver drowsiness detection in vehicles using signals from a driver monitoring camera.
For this purpose, 35 features related to the driver's eye blinking behavior and head movements are extracted in driving simulator experiments.
A concluding analysis of the best performing feature sets yields valuable insights about the influence of drowsiness on the driver's blink behavior and head movements.
arXiv Detail & Related papers (2020-09-28T12:37:38Z) - A Survey and Tutorial of EEG-Based Brain Monitoring for Driver State
Analysis [164.93739293097605]
EEG is proven to be one of the most effective methods for driver state monitoring and human error detection.
This paper discusses EEG-based driver state detection systems and their corresponding analysis algorithms over the last three decades.
It is concluded that the current EEG-based driver state monitoring algorithms are promising for safety applications.
arXiv Detail & Related papers (2020-08-25T18:21:35Z) - Driver Intention Anticipation Based on In-Cabin and Driving Scene
Monitoring [52.557003792696484]
We present a framework for the detection of the drivers' intention based on both in-cabin and traffic scene videos.
Our framework achieves a prediction accuracy of 83.98% and an F1-score of 84.3%.
arXiv Detail & Related papers (2020-06-20T11:56:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.