DeepTake: Prediction of Driver Takeover Behavior using Multimodal Data
- URL: http://arxiv.org/abs/2012.15441v2
- Date: Fri, 15 Jan 2021 17:30:50 GMT
- Title: DeepTake: Prediction of Driver Takeover Behavior using Multimodal Data
- Authors: Erfan Pakdamanian, Shili Sheng, Sonia Baee, Seongkook Heo, Sarit Kraus, Lu Feng
- Abstract summary: We present DeepTake, a novel deep neural network-based framework that predicts multiple aspects of takeover behavior.
Using features from vehicle data, driver biometrics, and subjective measurements, DeepTake predicts the driver's intention, time, and quality of takeover.
Results show that DeepTake reliably predicts the takeover intention, time, and quality, with an accuracy of 96%, 93%, and 83%, respectively.
- Score: 17.156611944404883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated vehicles promise a future where drivers can engage in non-driving
tasks without their hands on the steering wheel for prolonged periods.
Nevertheless, automated vehicles may still occasionally need to hand control
back to drivers due to technology limitations and legal requirements.
While some systems determine the need for a driver takeover from driver context
and road conditions before initiating a takeover request, studies show that the
driver may not react to it. We present DeepTake, a novel deep neural network-based
framework that predicts multiple aspects of takeover behavior to ensure that
the driver is able to safely take over control when engaged in non-driving
tasks. Using features from vehicle data, driver biometrics, and subjective
measurements, DeepTake predicts the driver's intention, time, and quality of
takeover. We evaluate DeepTake's performance using multiple evaluation metrics.
Results show that DeepTake reliably predicts the takeover intention, time, and
quality, with an accuracy of 96%, 93%, and 83%, respectively. Results also
indicate that DeepTake outperforms previous state-of-the-art methods on
predicting driver takeover time and quality. Our findings have implications for
the development of driver monitoring and state detection algorithms.
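The abstract describes the pipeline only at a high level: concatenate features from vehicle data, driver biometrics, and subjective measurements, and feed them to a deep neural network that predicts takeover intention, time, and quality. A minimal sketch of such a multi-head network is shown below; the layer sizes, class counts, and three-head layout are illustrative assumptions, not DeepTake's published architecture.
```python
# Illustrative sketch only: a small multi-head feed-forward network over
# concatenated multimodal features (vehicle data + driver biometrics +
# subjective measurements). Sizes, class counts, and the three-head layout
# are assumptions, not DeepTake's published architecture.
import torch
import torch.nn as nn

class TakeoverNet(nn.Module):
    def __init__(self, n_features: int = 32):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.intention_head = nn.Linear(32, 2)  # takeover vs. no takeover
        self.time_head = nn.Linear(32, 3)       # e.g. short / medium / long takeover time
        self.quality_head = nn.Linear(32, 3)    # e.g. low / medium / high takeover quality

    def forward(self, x: torch.Tensor):
        h = self.shared(x)
        return self.intention_head(h), self.time_head(h), self.quality_head(h)

# One feature vector built from a pre-takeover window of multimodal signals.
intention, time_cls, quality = TakeoverNet()(torch.randn(1, 32))
```
In such a setup, each head would be trained with its own classification loss on labelled takeover episodes.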
Related papers
- Federated Learning for Drowsiness Detection in Connected Vehicles [0.19116784879310028]
Driver monitoring systems can assist in determining the driver's state, and driver drowsiness detection presents a potential solution.
However, transmitting the data to a central machine for model training is impractical due to the large data size and privacy concerns.
We propose a federated learning framework for drowsiness detection within a vehicular network, leveraging the YawDD dataset.
arXiv Detail & Related papers (2024-05-06T09:39:13Z)
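As a rough illustration of the federated setup described above (local training in each vehicle, with only parameter updates shared and averaged centrally), a minimal federated-averaging loop could look like the following; the model, toy data, and round schedule are placeholders rather than the paper's YawDD-based implementation.
```python
# Minimal federated-averaging sketch: each vehicle trains a local copy on its
# own data and only parameters are averaged centrally. Model and data are toy
# placeholders, not the paper's YawDD-based setup.
import copy
import torch
import torch.nn as nn

def local_update(model, data, labels, epochs=1, lr=0.01):
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), labels).backward()
        opt.step()
    return model.state_dict()

def federated_average(state_dicts):
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

global_model = nn.Linear(16, 2)  # drowsy vs. alert, over 16 toy features
clients = [(torch.randn(8, 16), torch.randint(0, 2, (8,))) for _ in range(3)]
for _ in range(5):  # communication rounds
    updates = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(federated_average(updates))
```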
- DriveCoT: Integrating Chain-of-Thought Reasoning with End-to-End Driving [81.04174379726251]
This paper collects a comprehensive end-to-end driving dataset named DriveCoT.
It contains sensor data, control decisions, and chain-of-thought labels to indicate the reasoning process.
We propose a baseline model called DriveCoT-Agent, trained on our dataset, to generate chain-of-thought predictions and final decisions.
arXiv Detail & Related papers (2024-03-25T17:59:01Z)
- Context-Aware Quantitative Risk Assessment Machine Learning Model for Drivers Distraction [0.0]
The Multi-Class Driver Distraction Risk Assessment (MDDRA) model considers vehicle, driver, and environmental data during a journey.
MDDRA categorises the driver on a risk matrix as safe, careless, or dangerous.
We apply machine learning techniques to classify and predict driver distraction according to severity levels.
arXiv Detail & Related papers (2024-02-20T23:20:36Z)
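The risk-matrix categorisation mentioned above can be pictured with a toy mapping from ordinal distraction scores to the three driver categories; the inputs and thresholds below are invented for illustration and are not the MDDRA model's actual definitions.
```python
# Illustrative only: a toy risk matrix mapping distraction severity and
# frequency onto the three driver categories named in the abstract.
# Inputs and thresholds are invented, not the MDDRA model's definitions.
def categorise_driver(severity: int, frequency: int) -> str:
    """severity and frequency are ordinal scores, e.g. 1 (low) to 5 (high)."""
    risk = severity * frequency
    if risk <= 6:
        return "safe"
    if risk <= 14:
        return "careless"
    return "dangerous"

print(categorise_driver(severity=2, frequency=2))  # -> "safe"
print(categorise_driver(severity=4, frequency=4))  # -> "dangerous"
```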
- Infrastructure-based End-to-End Learning and Prevention of Driver Failure [68.0478623315416]
FailureNet is a recurrent neural network trained end-to-end on trajectories of both nominal and reckless drivers in a scaled miniature city.
It can accurately identify control failures, upstream perception errors, and speeding drivers, distinguishing them from nominal driving.
Compared to speed- or frequency-based predictors, FailureNet's recurrent neural network structure provides improved predictive power, yielding upwards of 84% accuracy when deployed on hardware.
arXiv Detail & Related papers (2023-03-21T22:55:51Z)
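The summary above gives only the high-level recipe (a recurrent network trained end-to-end on driver trajectories); a generic sketch of such a trajectory classifier, with made-up sizes and an assumed class set, is shown below.
```python
# Generic recurrent trajectory classifier in the spirit of the description
# above. Input: a sequence of 2-D positions; output: class logits.
# Sizes and the class set are assumptions, not FailureNet's actual design.
import torch
import torch.nn as nn

class TrajectoryRNN(nn.Module):
    def __init__(self, hidden: int = 64, n_classes: int = 4):
        super().__init__()
        self.gru = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)  # e.g. nominal / control failure / perception error / speeding

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        _, h_n = self.gru(traj)           # h_n: (1, batch, hidden)
        return self.head(h_n.squeeze(0))  # class logits per trajectory

logits = TrajectoryRNN()(torch.randn(8, 50, 2))  # batch of 8 trajectories, 50 steps each
```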
- FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
Non-objective driving experience is difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Guided by the incremental knowledge, our model fuses the CNN feature and the Transformer feature extracted from the input image to predict driver attention.
arXiv Detail & Related papers (2022-12-05T08:25:09Z)
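The fusion step described above (a CNN feature and a Transformer feature extracted from the same image and combined to predict driver attention) can be sketched with placeholder modules as below; the feedback loop that accumulates FBLNet's incremental knowledge is deliberately omitted, and all sizes are assumptions.
```python
# Two-branch fusion sketch: a CNN branch and a Transformer branch encode the
# same image and their features are concatenated before a per-location
# attention decoder. Modules and sizes are placeholders; FBLNet's feedback
# loop carrying the incremental knowledge is not modelled here.
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),            # CNN branch -> 8x8 feature grid
        )
        self.patchify = nn.Conv2d(3, dim, kernel_size=16, stride=16)  # 128 px / 16 -> 8x8 tokens
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.decoder = nn.Linear(2 * dim, 1)    # attention score per grid location

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        cnn_feat = self.cnn(img).flatten(2).transpose(1, 2)     # (B, 64, dim)
        tokens = self.patchify(img).flatten(2).transpose(1, 2)  # (B, 64, dim), aligned with CNN grid
        trans_feat = self.transformer(tokens)
        fused = torch.cat([cnn_feat, trans_feat], dim=-1)       # position-wise fusion
        return self.decoder(fused).squeeze(-1)                  # (B, 64) attention map

attention_map = TwoBranchFusion()(torch.randn(2, 3, 128, 128))
```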
- Race Driver Evaluation at a Driving Simulator using a physical Model and a Machine Learning Approach [1.9395755884693817]
We present a method to study and evaluate race drivers on a driver-in-the-loop simulator.
An overall performance score, a vehicle-trajectory score and a handling score are introduced to evaluate drivers.
We show that the neural network is accurate and robust, with a root-mean-square error between 2% and 5%, and can replace the optimisation-based method.
arXiv Detail & Related papers (2022-01-27T07:32:32Z)
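For the evaluation described above, a bare-bones version of the approach (a small neural-network regressor for driver scores, judged by root-mean-square error) might look like this; the data are synthetic stand-ins for the simulator telemetry and the optimisation-based reference scores.
```python
# Bare-bones sketch: fit a small neural-network regressor to predict driver
# scores and report the root-mean-square error. Data are synthetic stand-ins
# for the simulator telemetry and reference scores used in the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                            # telemetry-derived features (synthetic)
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=500)  # stand-in driver performance score

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])                               # train on the first 400 samples

rmse = np.sqrt(np.mean((model.predict(X[400:]) - y[400:]) ** 2))
print(f"held-out RMSE: {rmse:.3f}")
```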
- Driver2vec: Driver Identification from Automotive Data [44.84876493736275]
Driver2vec is able to accurately identify the driver from a short 10-second interval of sensor data.
Driver2vec is trained on a dataset of 51 drivers provided by Nervtech.
arXiv Detail & Related papers (2021-02-10T03:09:13Z)
- IntentNet: Learning to Predict Intention from Raw Sensor Data [86.74403297781039]
In this paper, we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor and dynamic maps of the environment.
Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reducing reaction time in self-driving applications.
arXiv Detail & Related papers (2021-01-20T00:31:52Z)
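The computation saving claimed above comes from sharing one backbone between the detection and intention-forecasting tasks; the sketch below illustrates that structure with placeholder modules and a toy bird's-eye-view input, not IntentNet's actual network.
```python
# Sketch of why a multi-task head saves computation: the expensive backbone
# over the rasterised LiDAR/map input runs once, and both tasks (detection and
# intention forecasting) read from the same shared feature. All modules here
# are placeholders, not IntentNet's actual architecture.
import torch
import torch.nn as nn

class SharedBackboneMultiTask(nn.Module):
    def __init__(self, in_channels: int = 32, feat: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(              # shared: run once per frame
            nn.Conv2d(in_channels, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.detection_head = nn.Conv2d(feat, 6, 1)  # e.g. box parameters per cell
        self.intention_head = nn.Conv2d(feat, 8, 1)  # e.g. intention classes per cell

    def forward(self, bev: torch.Tensor):
        shared = self.backbone(bev)
        return self.detection_head(shared), self.intention_head(shared)

# Toy bird's-eye-view tensor rasterised from LiDAR points and map layers.
boxes, intentions = SharedBackboneMultiTask()(torch.randn(1, 32, 64, 64))
```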
- Driver Drowsiness Classification Based on Eye Blink and Head Movement Features Using the k-NN Algorithm [8.356765961526955]
This work extends driver drowsiness detection in vehicles using signals from a driver monitoring camera.
For this purpose, 35 features related to the driver's eye blinking behavior and head movements are extracted in driving simulator experiments.
A concluding analysis of the best-performing feature sets yields valuable insights about the influence of drowsiness on the driver's blink behavior and head movements.
arXiv Detail & Related papers (2020-09-28T12:37:38Z)
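A minimal version of the classification step described above (k-NN over hand-crafted blink and head-movement features) could look like this in scikit-learn; the features and labels are synthetic placeholders, not the paper's 35-feature set.
```python
# Minimal k-NN sketch for drowsiness classification from hand-crafted features
# (e.g. blink duration, blink rate, head pitch variance). Values are synthetic
# placeholders, not the 35 features extracted in the paper.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 35))        # one row of features per time window
y = rng.integers(0, 2, size=300)      # 0 = alert, 1 = drowsy (toy labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_train)  # k-NN is distance-based, so scale features

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(scaler.transform(X_train), y_train)
print("accuracy:", clf.score(scaler.transform(X_test), y_test))
```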
- A Survey and Tutorial of EEG-Based Brain Monitoring for Driver State Analysis [164.93739293097605]
EEG has proven to be one of the most effective methods for driver state monitoring and human error detection.
This paper discusses EEG-based driver state detection systems and their corresponding analysis algorithms over the last three decades.
It is concluded that the current EEG-based driver state monitoring algorithms are promising for safety applications.
arXiv Detail & Related papers (2020-08-25T18:21:35Z)
- Driver Intention Anticipation Based on In-Cabin and Driving Scene Monitoring [52.557003792696484]
We present a framework for detecting driver intention based on both in-cabin and traffic scene videos.
Our framework achieves a prediction accuracy of 83.98% and an F1-score of 84.3%.
arXiv Detail & Related papers (2020-06-20T11:56:32Z)