Driver2vec: Driver Identification from Automotive Data
- URL: http://arxiv.org/abs/2102.05234v1
- Date: Wed, 10 Feb 2021 03:09:13 GMT
- Title: Driver2vec: Driver Identification from Automotive Data
- Authors: Jingbo Yang, Ruge Zhao, Meixian Zhu, David Hallac, Jaka Sodnik, Jure Leskovec
- Abstract summary: Driver2vec is able to accurately identify the driver from a short 10-second interval of sensor data.
Driver2vec is trained on a dataset of 51 drivers provided by Nervtech.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With increasing focus on privacy protection, alternative methods to identify
vehicle operators without the use of biometric identifiers have gained traction
in automotive data analysis. The wide variety of sensors installed on modern
vehicles enables autonomous driving, reduces accidents and improves vehicle
handling. On the other hand, the data these sensors collect reflect drivers'
habits. Drivers' use of turn indicators, following distance, rate of
acceleration, etc. can be transformed into an embedding that is representative of
their behavior and identity. In this paper, we develop a deep learning
architecture (Driver2vec) to map a short interval of driving data into an
embedding space that represents the driver's behavior to assist in driver
identification. We develop a custom model that combines the performance gains of
temporal convolutional networks, the embedding separation power of triplet loss and
the classification accuracy of gradient boosting decision trees. Trained on a
dataset of 51 drivers provided by Nervtech, Driver2vec accurately
identifies the driver from a short 10-second interval of sensor data, achieving
an average pairwise driver identification accuracy of 83.1% from this 10-second
interval, which is markedly higher than the performance obtained in previous
studies. We then analyze the performance of Driver2vec to show that it
is consistent across scenarios and that our modeling choices are sound.
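The abstract describes a pipeline of TCN embeddings trained with triplet loss and classified by gradient boosting. The paper's actual implementation is not reproduced here; as a rough, stdlib-only illustration of the triplet objective that pulls same-driver embeddings together and pushes different drivers apart (the function name and the margin value 0.2 are assumptions, not the paper's settings):

```python
import math

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss over embedding vectors (plain Python lists).

    anchor/positive come from the same driver; negative from a different one.
    The loss is zero once the negative is farther than the positive by `margin`.
    """
    d_pos = math.dist(anchor, positive)  # Euclidean distance to same-driver sample
    d_neg = math.dist(anchor, negative)  # Euclidean distance to other-driver sample
    return max(0.0, d_pos - d_neg + margin)
```

In the paper's setup, the embedding network would be a temporal convolutional network and the margin a tuned hyperparameter; the formula itself is the standard triplet loss.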
Related papers
- G-MEMP: Gaze-Enhanced Multimodal Ego-Motion Prediction in Driving [71.9040410238973]
We focus on inferring the ego trajectory of a driver's vehicle using their gaze data.
Next, we develop G-MEMP, a novel multimodal ego-trajectory prediction network that combines GPS and video input with gaze data.
The results show that G-MEMP significantly outperforms state-of-the-art methods in both benchmarks.
arXiv Detail & Related papers (2023-12-13T23:06:30Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
Nonobjective driving experience is difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN feature and Transformer feature that are extracted from the input image to predict driver attention.
arXiv Detail & Related papers (2022-12-05T08:25:09Z)
- A Machine Learning Approach for Driver Identification Based on CAN-BUS Sensor Data [0.0]
Driver identification is an important problem for modern vehicles from the controller area network (CAN-BUS) perspective.
Our aim is to identify the driver through supervised learning algorithms based on driving behavior analysis.
We have achieved statistically significant results in terms of accuracy in contrast to the baseline algorithm.
arXiv Detail & Related papers (2022-07-16T00:38:21Z)
- Race Driver Evaluation at a Driving Simulator using a physical Model and a Machine Learning Approach [1.9395755884693817]
We present a method to study and evaluate race drivers on a driver-in-the-loop simulator.
An overall performance score, a vehicle-trajectory score and a handling score are introduced to evaluate drivers.
We show that the neural network is accurate and robust with a root-mean-square error between 2-5% and can replace the optimisation based method.
arXiv Detail & Related papers (2022-01-27T07:32:32Z)
- Driving Style Representation in Convolutional Recurrent Neural Network Model of Driver Identification [8.007800530105191]
We present a deep-neural-network architecture, we term D-CRNN, for building high-fidelity representations for driving style.
Using CNN, we capture semantic patterns of driver behavior from trajectories.
We then find temporal dependencies between these semantic patterns using RNN to encode driving style.
arXiv Detail & Related papers (2021-02-11T04:33:43Z)
- IntentNet: Learning to Predict Intention from Raw Sensor Data [86.74403297781039]
In this paper, we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor as well as dynamic maps of the environment.
Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reducing reaction time in self-driving applications.
arXiv Detail & Related papers (2021-01-20T00:31:52Z)
- Driver Drowsiness Classification Based on Eye Blink and Head Movement Features Using the k-NN Algorithm [8.356765961526955]
This work extends driver drowsiness detection in vehicles using signals from a driver monitoring camera.
For this purpose, 35 features related to the driver's eye blinking behavior and head movements are extracted in driving simulator experiments.
A concluding analysis of the best performing feature sets yields valuable insights about the influence of drowsiness on the driver's blink behavior and head movements.
arXiv Detail & Related papers (2020-09-28T12:37:38Z)
- Learning Accurate and Human-Like Driving using Semantic Maps and Attention [152.48143666881418]
This paper investigates how end-to-end driving models can be improved to drive more accurately and human-like.
We exploit semantic and visual maps from HERE Technologies and augment the existing Drive360 dataset with such maps.
Our models are trained and evaluated on the Drive360 + HERE dataset, which features 60 hours and 3000 km of real-world driving data.
arXiv Detail & Related papers (2020-07-10T22:25:27Z)
- Driver Identification through Stochastic Multi-State Car-Following Modeling [7.589491805669563]
Intra-driver and inter-driver heterogeneity has been confirmed to exist in human driving behaviors by many studies.
It is assumed that all drivers share a pool of driver states; under each state a car-following data sequence obeys a specific probability distribution in feature space.
Each driver has his/her own probability distribution over the states, called a driver profile, which characterizes the intra-driver heterogeneity.
arXiv Detail & Related papers (2020-05-22T09:39:00Z)
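The car-following entry above represents each driver as a distribution over a shared pool of driver states. As a hypothetical sketch of that profile idea (the state assignment would come from the paper's probabilistic model; this helper only shows the profile as an empirical state distribution, and its name is illustrative):

```python
from collections import Counter

def driver_profile(state_sequence, n_states):
    """Empirical distribution over shared driver states for one driver.

    state_sequence: state index assigned to each car-following segment.
    Returns a length-n_states list of probabilities summing to 1.
    """
    counts = Counter(state_sequence)
    total = len(state_sequence)
    return [counts.get(s, 0) / total for s in range(n_states)]
```

Two drivers' profiles could then be compared with any distributional distance to separate them, which is the intuition behind using the profile for identification.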
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.