CASPNet++: Joint Multi-Agent Motion Prediction
- URL: http://arxiv.org/abs/2308.07751v1
- Date: Tue, 15 Aug 2023 13:09:33 GMT
- Title: CASPNet++: Joint Multi-Agent Motion Prediction
- Authors: Maximilian Schäfer, Kun Zhao and Anton Kummert
- Abstract summary: We present CASPNet++, an improved version of our Context-Aware Scene Prediction Network (CASPNet).
In this work, we focus on further enhancing the interaction modeling and scene understanding to support the joint prediction of all road users in a scene.
We demonstrate the scalability of CASPNet++ in utilizing diverse environmental input sources such as HD maps, Radar detection, and Lidar segmentation.
- Score: 2.041875623674907
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The prediction of road users' future motion is a critical task in supporting
advanced driver-assistance systems (ADAS). It plays an even more crucial role
for autonomous driving (AD) in enabling the planning and execution of safe
driving maneuvers. Based on our previous work, Context-Aware Scene Prediction
Network (CASPNet), an improved system, CASPNet++, is proposed. In this work, we
focus on further enhancing the interaction modeling and scene understanding to
support the joint prediction of all road users in a scene using spatiotemporal
grids to model future occupancy. Moreover, an instance-based output head is
introduced to provide multi-modal trajectories for agents of interest. In
extensive quantitative and qualitative analysis, we demonstrate the scalability
of CASPNet++ in utilizing and fusing diverse environmental input sources such
as HD maps, Radar detection, and Lidar segmentation. Tested on the
urban-focused prediction dataset nuScenes, CASPNet++ reaches state-of-the-art
performance. The model has been deployed in a testing vehicle, running in
real-time with moderate computational resources.
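The abstract names the main components without detailing them; purely as an illustration, the following is a minimal PyTorch sketch of how a shared grid encoder over rasterized inputs (e.g. HD-map, radar and lidar channels), a dense future-occupancy rollout, and an instance-based multi-modal trajectory head could be wired together. It is not the authors' architecture: the class name JointScenePredictor, all layer sizes, the simple convolutional rollout, and the cell-wise feature sampling are assumptions made for this sketch.

```python
# Minimal sketch (not the authors' implementation): a joint, grid-based
# predictor in the spirit of the CASPNet++ abstract. Layer sizes, names,
# and the fusion/rollout scheme are illustrative assumptions.
import torch
import torch.nn as nn


class JointScenePredictor(nn.Module):
    def __init__(self, in_channels=8, hidden=64, horizon=12, num_modes=5):
        super().__init__()
        self.horizon = horizon
        self.num_modes = num_modes
        # Shared CNN encoder over rasterized scene inputs
        # (HD-map, radar and lidar channels stacked along dim 1).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        # Simple convolutional rollout: one step per future frame.
        self.step = nn.Conv2d(hidden, hidden, 3, padding=1)
        # Dense occupancy head: one grid per future time step.
        self.occupancy_head = nn.Conv2d(hidden, 1, 1)
        # Instance-based head: features sampled at an agent's grid cell are
        # decoded into num_modes trajectories (x, y per step) plus confidences.
        self.traj_head = nn.Linear(hidden, num_modes * horizon * 2)
        self.conf_head = nn.Linear(hidden, num_modes)

    def forward(self, scene, agent_cells):
        # scene:       (B, in_channels, H, W) rasterized environment
        # agent_cells: (B, N, 2) integer (row, col) cells of agents of interest
        context = self.encoder(scene)                       # (B, hidden, H, W)
        state, occupancy = context, []
        for _ in range(self.horizon):
            state = torch.relu(self.step(state))
            occupancy.append(torch.sigmoid(self.occupancy_head(state)))
        occupancy = torch.stack(occupancy, dim=1)           # (B, T, 1, H, W)
        # Gather the shared features at each agent's cell.
        b_idx = torch.arange(scene.size(0), device=scene.device).unsqueeze(1)
        agent_feat = context[b_idx, :, agent_cells[..., 0], agent_cells[..., 1]]
        trajs = self.traj_head(agent_feat).reshape(
            scene.size(0), -1, self.num_modes, self.horizon, 2)
        confs = self.conf_head(agent_feat).softmax(dim=-1)
        return occupancy, trajs, confs


if __name__ == "__main__":
    model = JointScenePredictor()
    scene = torch.randn(2, 8, 64, 64)
    agents = torch.randint(0, 64, (2, 3, 2))
    occ, trajs, confs = model(scene, agents)
    # Expected: (2, 12, 1, 64, 64), (2, 3, 5, 12, 2), (2, 3, 5)
    print(occ.shape, trajs.shape, confs.shape)
```

Reading per-agent features from the agent's grid cell is one simple way to couple a scene-level occupancy output with instance-level multi-modal trajectories in a single forward pass.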
Related papers
- Planning with Adaptive World Models for Autonomous Driving [50.4439896514353]
Motion planners (MPs) are crucial for safe navigation in complex urban environments.
nuPlan, a recently released MP benchmark, evaluates planners by augmenting real-world driving logs with closed-loop simulation logic.
We present AdaptiveDriver, a model-predictive control (MPC) based planner that unrolls different world models conditioned on BehaviorNet's predictions.
arXiv Detail & Related papers (2024-06-15T18:53:45Z) - Implicit Occupancy Flow Fields for Perception and Prediction in
Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory prediction of the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
arXiv Detail & Related papers (2023-08-02T23:39:24Z) - Context-Aware Timewise VAEs for Real-Time Vehicle Trajectory Prediction [4.640835690336652]
We present ContextVAE, a context-aware approach for multi-modal vehicle trajectory prediction.
Our approach takes into account both the social features exhibited by agents on the scene and the physical environment constraints.
In all tested datasets, ContextVAE models are fast to train and provide high-quality multi-modal predictions in real-time.
arXiv Detail & Related papers (2023-02-21T18:42:24Z) - Exploring Attention GAN for Vehicle Motion Prediction [2.887073662645855]
We study the influence of attention in generative models for motion prediction, considering both physical and social context.
We validate our method using the Argoverse Motion Forecasting Benchmark 1.1, achieving competitive unimodal results.
arXiv Detail & Related papers (2022-09-26T13:18:32Z) - Predicting Future Occupancy Grids in Dynamic Environment with
Spatio-Temporal Learning [63.25627328308978]
We propose a spatio-temporal prediction network pipeline to generate future occupancy predictions.
Compared to current SOTA, our approach predicts occupancy for a longer horizon of 3 seconds.
We publicly release our grid occupancy dataset based on nuScenes to support further research.
arXiv Detail & Related papers (2022-05-06T13:45:32Z) - Deep Interactive Motion Prediction and Planning: Playing Games with
Motion Prediction Models [162.21629604674388]
This work presents a game-theoretic Model Predictive Controller (MPC) that uses a novel interactive multi-agent neural network policy as part of its predictive model.
Fundamental to the success of our method is the design of a novel multi-agent policy network that can steer a vehicle given the state of the surrounding agents and the map information.
arXiv Detail & Related papers (2022-04-05T17:58:18Z) - Context-Aware Scene Prediction Network (CASPNet) [3.390468002706074]
We jointly learn and predict the motion of all road users in a scene using a novel convolutional neural network (CNN) and recurrent neural network (RNN) based architecture.
Our approach reaches state-of-the-art results in the nuScenes prediction benchmark.
arXiv Detail & Related papers (2022-01-18T12:52:01Z) - LaPred: Lane-Aware Prediction of Multi-Modal Future Trajectories of
Dynamic Agents [10.869902339190949]
We propose a novel prediction model, referred to as the lane-aware prediction (LaPred) network.
LaPred uses the instance-level lane entities extracted from a semantic map to predict the multi-modal future trajectories.
The experiments conducted on the public nuScenes and Argoverse datasets demonstrate that the proposed LaPred method significantly outperforms the existing prediction models.
arXiv Detail & Related papers (2021-04-01T04:33:36Z) - Instance-Aware Predictive Navigation in Multi-Agent Environments [93.15055834395304]
We propose an Instance-Aware Predictive Control (IPC) approach, which forecasts interactions between agents as well as future scene structures.
We adopt a novel multi-instance event prediction module to estimate the possible interaction among agents in the ego-centric view.
We design a sequential action sampling strategy to better leverage predicted states on both scene-level and instance-level.
arXiv Detail & Related papers (2021-01-14T22:21:25Z) - TPNet: Trajectory Proposal Network for Motion Prediction [81.28716372763128]
Trajectory Proposal Network (TPNet) is a novel two-stage motion prediction framework.
TPNet first generates a candidate set of future trajectories as hypothesis proposals, then makes the final predictions by classifying and refining the proposals (a generic sketch of this two-stage pattern appears after this list).
Experiments on four large-scale trajectory prediction datasets, show that TPNet achieves the state-of-the-art results both quantitatively and qualitatively.
arXiv Detail & Related papers (2020-04-26T00:01:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy or quality of the information shown and is not responsible for any consequences arising from its use.