Driving Style Representation in Convolutional Recurrent Neural Network
Model of Driver Identification
- URL: http://arxiv.org/abs/2102.05843v1
- Date: Thu, 11 Feb 2021 04:33:43 GMT
- Title: Driving Style Representation in Convolutional Recurrent Neural Network
Model of Driver Identification
- Authors: Sobhan Moosavi, Pravar D. Mahajan, Srinivasan Parthasarathy, Colleen
Saunders-Chukwu, and Rajiv Ramnath
- Abstract summary: We present a deep-neural-network architecture, which we term D-CRNN, for building high-fidelity representations of driving style.
Using a CNN, we capture semantic patterns of driver behavior from trajectories.
We then use an RNN to find temporal dependencies between these semantic patterns and encode driving style.
- Score: 8.007800530105191
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Identifying driving styles is the task of analyzing the behavior of drivers
in order to capture variations that serve to discriminate different drivers from
one another. This task has become a prerequisite for a variety of applications,
including usage-based insurance, driver coaching, driver action prediction, and
even the design of autonomous vehicles, because driving style encodes essential
information needed by these applications. In this paper, we present a
deep-neural-network architecture, which we term D-CRNN, for building
high-fidelity representations of driving style that combines the power of
convolutional neural networks (CNN) and recurrent neural networks (RNN). Using a
CNN, we capture semantic patterns of driver behavior from trajectories (such as
a turn or a braking event). We then use an RNN to find temporal dependencies
between these semantic patterns and encode driving style. We demonstrate the
effectiveness of these techniques for driver identification through extensive
experiments on several large, real-world datasets, comparing the results with
state-of-the-art deep-learning and non-deep-learning solutions. These
experiments also demonstrate a useful example of bias removal: we preprocess the
input data by sampling dissimilar trajectories for each driver to prevent
spatial memorization. Finally, this paper presents an analysis of the
contribution of different attributes to driver identification; we find that
engine RPM, speed, and acceleration are the best combination of features.
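The general pipeline the abstract describes (a CNN extracting local behavior patterns from a trajectory, followed by an RNN encoding their temporal dependencies, followed by a driver classifier) can be illustrated with a minimal NumPy sketch. This is not the D-CRNN architecture itself: all layer sizes, the single-layer tanh RNN, and the three input channels (RPM, speed, acceleration) are illustrative assumptions; the paper's actual design should be taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, w, b):
    """Valid 1-D convolution over time, then ReLU.
    x: (T, C_in), w: (k, C_in, C_out), b: (C_out,)"""
    k = w.shape[0]
    out = np.empty((x.shape[0] - k + 1, w.shape[2]))
    for t in range(out.shape[0]):
        # sum over the window length and input channels
        out[t] = np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def rnn_final_state(x, w_in, w_rec, b):
    """Plain tanh RNN; returns the final hidden state.
    x: (T, C), w_in: (C, H), w_rec: (H, H), b: (H,)"""
    h = np.zeros(w_rec.shape[0])
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ w_in + h @ w_rec + b)
    return h

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical sizes: 3 input channels (RPM, speed, acceleration),
# 8 conv filters of width 5, 16 RNN hidden units, 5 candidate drivers.
T, C_IN, K, C_CONV, H, N_DRIVERS = 60, 3, 5, 8, 16, 5
w_conv = rng.normal(0, 0.1, (K, C_IN, C_CONV))
b_conv = np.zeros(C_CONV)
w_in = rng.normal(0, 0.1, (C_CONV, H))
w_rec = rng.normal(0, 0.1, (H, H))
b_rnn = np.zeros(H)
w_out = rng.normal(0, 0.1, (H, N_DRIVERS))
b_out = np.zeros(N_DRIVERS)

trajectory = rng.normal(size=(T, C_IN))                  # one synthetic trajectory
features = conv1d_relu(trajectory, w_conv, b_conv)       # local behavior patterns
state = rnn_final_state(features, w_in, w_rec, b_rnn)    # temporal dependencies
probs = softmax(state @ w_out + b_out)                   # posterior over drivers

print(probs.shape, round(float(probs.sum()), 6))
```

In a trained model the weights would be learned end to end with a cross-entropy loss over driver labels; here random weights merely show how the shapes flow from trajectory to driver posterior.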
Related papers
- Driver Fatigue Prediction using Randomly Activated Neural Networks for Smart Ridesharing Platforms [0.21847754147782888]
Drivers in ridesharing platforms exhibit cognitive atrophy and fatigue as they accept ride offers over the course of the day.
This paper proposes a novel Dynamic Satisficing (DDS) model to predict a driver's ride decisions during a given shift.
Using both simulation experiments and a real Chicago taxi dataset, the paper demonstrates the improved performance of the proposed approach.
arXiv Detail & Related papers (2024-04-16T16:04:11Z) - RainSD: Rain Style Diversification Module for Image Synthesis
Enhancement using Feature-Level Style Distribution [5.500457283114346]
This paper presents a synthetic road dataset with sensor blockage generated from the real road dataset BDD100K.
Using this dataset, the degradation of diverse multi-task networks for autonomous driving has been thoroughly evaluated, and the tendency of performance degradation in deep neural network-based perception systems for autonomous vehicles has been analyzed in depth.
arXiv Detail & Related papers (2023-12-31T11:30:42Z) - FastRLAP: A System for Learning High-Speed Driving via Deep RL and
Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z) - FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
Nonobjective driving experience is difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN feature and Transformer feature that are extracted from the input image to predict driver attention.
arXiv Detail & Related papers (2022-12-05T08:25:09Z) - Federated Deep Learning Meets Autonomous Vehicle Perception: Design and
Verification [168.67190934250868]
Federated learning-empowered connected autonomous vehicles (FLCAV) have been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z) - Efficient Federated Learning with Spike Neural Networks for Traffic Sign
Recognition [70.306089187104]
We introduce powerful Spike Neural Networks (SNNs) into traffic sign recognition for energy-efficient and fast model training.
Numerical results indicate that the proposed federated SNN outperforms traditional federated convolutional neural networks in terms of accuracy, noise immunity, and energy efficiency.
arXiv Detail & Related papers (2022-05-28T03:11:48Z) - CARNet: A Dynamic Autoencoder for Learning Latent Dynamics in Autonomous
Driving Tasks [11.489187712465325]
An autonomous driving system should effectively use the information collected from the various sensors in order to form an abstract description of the world.
Deep learning models, such as autoencoders, can be used for that purpose, as they can learn compact latent representations from a stream of incoming data.
This work proposes CARNet, a Combined dynAmic autoencodeR NETwork architecture that utilizes an autoencoder combined with a recurrent neural network to learn the current latent representation.
arXiv Detail & Related papers (2022-05-18T04:15:42Z) - COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked
Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z) - A Driving Behavior Recognition Model with Bi-LSTM and Multi-Scale CNN [59.57221522897815]
We propose a neural network model based on trajectory information for driving behavior recognition.
We evaluate the proposed model on the public BLVD dataset, achieving a satisfying performance.
arXiv Detail & Related papers (2021-03-01T06:47:29Z) - Spatio-Temporal Look-Ahead Trajectory Prediction using Memory Neural
Network [6.065344547161387]
This paper attempts to solve the problem of spatio-temporal look-ahead trajectory prediction using a novel recurrent neural network called the Memory Neuron Network.
The proposed model is computationally less intensive and has a simple architecture as compared to other deep learning models that utilize LSTMs and GRUs.
arXiv Detail & Related papers (2021-02-24T05:02:19Z) - Autonomous Navigation through intersections with Graph
Convolutional Networks and Conditional Imitation Learning for Self-driving
Cars [10.080958939027363]
In autonomous driving, navigation through unsignaled intersections is a challenging task.
We propose a novel branched network G-CIL for the navigation policy learning.
Our end-to-end trainable neural network outperforms the baselines with a higher success rate and shorter navigation time.
arXiv Detail & Related papers (2021-02-01T07:33:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.