A Driving Behavior Recognition Model with Bi-LSTM and Multi-Scale CNN
- URL: http://arxiv.org/abs/2103.00801v1
- Date: Mon, 1 Mar 2021 06:47:29 GMT
- Title: A Driving Behavior Recognition Model with Bi-LSTM and Multi-Scale CNN
- Authors: He Zhang, Zhixiong Nan, Tao Yang, Yifan Liu and Nanning Zheng
- Abstract summary: We propose a neural network model based on trajectory information for driving behavior recognition.
We evaluate the proposed model on the public BLVD dataset, achieving satisfactory performance.
- Score: 59.57221522897815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In autonomous driving, perceiving the driving behaviors of surrounding agents
is important for the ego-vehicle to make a reasonable decision. In this paper,
we propose a neural network model based on trajectory information for driving
behavior recognition. Unlike existing trajectory-based methods that recognize
the driving behavior using the hand-crafted features or directly encoding the
trajectory, our model involves a Multi-Scale Convolutional Neural Network
(MSCNN) module to automatically extract high-level features that are
expected to encode rich spatial and temporal information. Given a
trajectory sequence of an agent as the input, the Bi-directional Long
Short-Term Memory (Bi-LSTM) module and the MSCNN module each process the
input to generate a feature vector, and the two features are then fused to
classify the behavior of the agent. We evaluate the proposed model on the
public BLVD dataset, achieving satisfactory performance.
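The fusion architecture described above (a Bi-LSTM branch and a multi-scale CNN branch whose features are concatenated for classification) can be sketched roughly as follows; the layer sizes, kernel scales, and input features are illustrative assumptions, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

class BiLSTMMSCNN(nn.Module):
    def __init__(self, in_dim=4, hidden=64, n_classes=5):
        super().__init__()
        # Bi-directional LSTM branch over the trajectory sequence
        self.bilstm = nn.LSTM(in_dim, hidden, batch_first=True,
                              bidirectional=True)
        # Multi-scale CNN branch: parallel 1-D convolutions with different
        # kernel sizes pick up temporal patterns at several scales
        self.convs = nn.ModuleList(
            [nn.Conv1d(in_dim, hidden, k, padding=k // 2) for k in (3, 5, 7)])
        # Fused feature = 2*hidden (Bi-LSTM) + 3*hidden (three conv scales)
        self.classifier = nn.Linear(2 * hidden + 3 * hidden, n_classes)

    def forward(self, traj):               # traj: (batch, time, in_dim)
        _, (h, _) = self.bilstm(traj)      # h: (2, batch, hidden)
        lstm_feat = torch.cat([h[0], h[1]], dim=-1)
        x = traj.transpose(1, 2)           # (batch, in_dim, time) for Conv1d
        cnn_feat = torch.cat([c(x).amax(dim=-1) for c in self.convs], dim=-1)
        # Fuse both branches and classify the driving behavior
        return self.classifier(torch.cat([lstm_feat, cnn_feat], dim=-1))

model = BiLSTMMSCNN()
logits = model(torch.randn(8, 30, 4))      # 8 agents, 30 steps, 4 features
print(logits.shape)                        # torch.Size([8, 5])
```

Max-pooling each convolution over time keeps the CNN feature length-independent, so trajectories of different durations fuse cleanly with the Bi-LSTM summary.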
Related papers
- Trajeglish: Traffic Modeling as Next-Token Prediction [67.28197954427638]
A longstanding challenge for self-driving development is simulating dynamic driving scenarios seeded from recorded driving logs.
We apply tools from discrete sequence modeling to model how vehicles, pedestrians and cyclists interact in driving scenarios.
Our model tops the Sim Agents Benchmark, surpassing prior work on the realism meta-metric by 3.3% and on the interaction metric by 9.9%.
arXiv Detail & Related papers (2023-12-07T18:53:27Z)
- Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard deep learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z)
- TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction [149.5716746789134]
We show data-driven traffic simulation can be formulated as a world model.
We present TrafficBots, a multi-agent policy built upon motion prediction and end-to-end driving.
Experiments on the open motion dataset show TrafficBots can simulate realistic multi-agent behaviors.
arXiv Detail & Related papers (2023-03-07T18:28:41Z)
- A Dynamic Temporal Self-attention Graph Convolutional Network for Traffic Prediction [7.23135508361981]
This paper proposes a dynamic temporal self-attention graph convolutional network (DT-SGN) model that treats the adjacency matrix as a trainable attention-score matrix.
Experiments demonstrate the superiority of our method over state-of-the-art model-driven and data-driven models on real-world traffic datasets.
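The core idea summarized here, replacing a fixed adjacency matrix with a trainable attention-score matrix inside a graph convolution, can be illustrated with a minimal sketch; all dimensions and the row-softmax normalization are assumptions for illustration, not the DT-SGN authors' exact formulation.

```python
import torch
import torch.nn as nn

class TrainableAdjGCN(nn.Module):
    def __init__(self, n_nodes, in_dim, out_dim):
        super().__init__()
        # Learnable attention scores that stand in for a fixed adjacency
        self.adj_logits = nn.Parameter(torch.zeros(n_nodes, n_nodes))
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):                  # x: (batch, n_nodes, in_dim)
        # Row-softmax so every node aggregates a weighted average of the
        # projected features; the weights are learned end-to-end
        attn = torch.softmax(self.adj_logits, dim=-1)
        return torch.relu(attn @ self.proj(x))

layer = TrainableAdjGCN(n_nodes=207, in_dim=2, out_dim=16)
out = layer(torch.randn(4, 207, 2))        # 4 samples over a 207-node graph
print(out.shape)                           # torch.Size([4, 207, 16])
```

Because `adj_logits` receives gradients like any other weight, the model can discover which sensor pairs influence each other rather than relying on a hand-specified road graph.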
arXiv Detail & Related papers (2023-02-21T03:51:52Z)
- IDM-Follower: A Model-Informed Deep Learning Method for Long-Sequence Car-Following Trajectory Prediction [24.94160059351764]
Most car-following models are generative and consider only the speed, position, and acceleration of the last time step as inputs.
We implement a novel structure with two independent encoders and a self-attention decoder that sequentially predicts the following trajectories.
Numerical experiments with multiple settings on simulation and NGSIM datasets show that IDM-Follower improves prediction performance.
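A rough sketch of the structure as summarized, two independent encoders feeding a self-attention decoder that emits a trajectory sequence; what each encoder consumes (e.g. observed history versus model-informed IDM rollouts) and all sizes are assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class DualEncoderAttnDecoder(nn.Module):
    def __init__(self, in_dim=3, d_model=32, horizon=10):
        super().__init__()
        # Two independent encoders (their exact inputs are an assumption)
        self.enc_a = nn.GRU(in_dim, d_model, batch_first=True)
        self.enc_b = nn.GRU(in_dim, d_model, batch_first=True)
        # Self-attention decoder attends over both encoders' outputs
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4,
                                               batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=1)
        # One learned query per future step to be predicted
        self.queries = nn.Parameter(torch.randn(horizon, d_model))
        self.head = nn.Linear(d_model, in_dim)

    def forward(self, seq_a, seq_b):       # each: (batch, time, in_dim)
        mem_a, _ = self.enc_a(seq_a)
        mem_b, _ = self.enc_b(seq_b)
        memory = torch.cat([mem_a, mem_b], dim=1)   # joint memory
        q = self.queries.unsqueeze(0).expand(seq_a.size(0), -1, -1)
        return self.head(self.decoder(q, memory))   # (batch, horizon, in_dim)

model = DualEncoderAttnDecoder()
pred = model(torch.randn(2, 20, 3), torch.randn(2, 20, 3))
print(pred.shape)                          # torch.Size([2, 10, 3])
```

Decoding the whole horizon from learned queries, instead of one step at a time from only the last state, is what lets such a structure produce long prediction sequences.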
arXiv Detail & Related papers (2022-10-20T02:24:27Z)
- Joint Spatial-Temporal and Appearance Modeling with Transformer for Multiple Object Tracking [59.79252390626194]
We propose a novel solution named TransSTAM, which leverages Transformer to model both the appearance features of each object and the spatial-temporal relationships among objects.
The proposed method is evaluated on multiple public benchmarks including MOT16, MOT17, and MOT20, and it achieves a clear performance improvement in both IDF1 and HOTA.
arXiv Detail & Related papers (2022-05-31T01:19:18Z)
- Bidirectional Interaction between Visual and Motor Generative Models using Predictive Coding and Active Inference [68.8204255655161]
We propose a neural architecture comprising a generative model for sensory prediction, and a distinct generative model for motor trajectories.
We highlight how sequences of sensory predictions can act as rails guiding learning, control and online adaptation of motor trajectories.
arXiv Detail & Related papers (2021-04-19T09:41:31Z)
- Spatio-Temporal Look-Ahead Trajectory Prediction using Memory Neural Network [6.065344547161387]
This paper attempts to solve the problem of spatio-temporal look-ahead trajectory prediction using a novel recurrent neural network called the Memory Neuron Network.
The proposed model is computationally less intensive and has a simpler architecture than other deep learning models that use LSTMs and GRUs.
arXiv Detail & Related papers (2021-02-24T05:02:19Z)
- Driving Style Representation in Convolutional Recurrent Neural Network Model of Driver Identification [8.007800530105191]
We present a deep neural network architecture, termed D-CRNN, for building high-fidelity representations of driving style.
Using CNN, we capture semantic patterns of driver behavior from trajectories.
We then find temporal dependencies between these semantic patterns using RNN to encode driving style.
arXiv Detail & Related papers (2021-02-11T04:33:43Z)
- PathGAN: Local Path Planning with Attentive Generative Adversarial Networks [0.0]
We present a model capable of generating plausible paths from egocentric images for autonomous vehicles.
Our generative model comprises two neural networks: the feature extraction network (FEN) and the path generation network (PGN).
We also introduce ETRIDriving, a dataset for autonomous driving in which the recorded sensor data are labeled with discrete high-level driving actions.
arXiv Detail & Related papers (2020-07-08T03:31:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.