FIMP: Future Interaction Modeling for Multi-Agent Motion Prediction
- URL: http://arxiv.org/abs/2401.16189v1
- Date: Mon, 29 Jan 2024 14:41:55 GMT
- Title: FIMP: Future Interaction Modeling for Multi-Agent Motion Prediction
- Authors: Sungmin Woo, Minjung Kim, Donghyeong Kim, Sungjun Jang, Sangyoun Lee
- Abstract summary: We propose Future Interaction modeling for Motion Prediction (FIMP), which captures potential future interactions in an end-to-end manner.
Experiments show that our future interaction modeling improves performance remarkably, yielding superior results on the Argoverse motion forecasting benchmark.
- Score: 18.10147252674138
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-agent motion prediction is a crucial concern in autonomous driving, yet
it remains a challenge owing to the ambiguous intentions of dynamic agents and
their intricate interactions. Existing studies have attempted to capture
interactions between road entities by using the definite data from historical
timesteps, since future information is unavailable and involves high
uncertainty. However, without sufficient guidance for capturing the future states
of interacting agents, they frequently produce unrealistic trajectory overlaps.
In this work, we propose Future Interaction modeling for Motion Prediction
(FIMP), which captures potential future interactions in an end-to-end manner.
FIMP adopts a future decoder that implicitly extracts potential future
information at an intermediate feature level and identifies interacting
entity pairs through future affinity learning and a top-k filtering strategy.
Experiments show that our future interaction modeling improves performance
remarkably, yielding superior results on the Argoverse motion forecasting
benchmark.
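As a rough illustration of the pair-selection idea described in the abstract (future affinity learning followed by top-k filtering), the PyTorch-style sketch below scores agent pairs with learned future embeddings and keeps the k highest-scoring partners per agent. All class and variable names, tensor shapes, and the dot-product affinity score are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of future-affinity pair selection; names, shapes, and the
# dot-product affinity are illustrative assumptions, not FIMP's actual code.
import torch
import torch.nn as nn


class FutureAffinityPairing(nn.Module):
    """Scores agent pairs with learned future embeddings and keeps the top-k."""

    def __init__(self, feat_dim: int, hidden_dim: int = 64, k: int = 8):
        super().__init__()
        # Hypothetical projection standing in for a future decoder that maps
        # observed intermediate features to a future-oriented embedding.
        self.future_proj = nn.Linear(feat_dim, hidden_dim)
        self.k = k

    def forward(self, agent_feats: torch.Tensor):
        # agent_feats: (N, feat_dim) intermediate features for N agents.
        fut = self.future_proj(agent_feats)             # (N, hidden_dim)
        affinity = fut @ fut.t()                        # (N, N) pairwise scores
        affinity.fill_diagonal_(float("-inf"))          # exclude self-pairs
        k = min(self.k, agent_feats.size(0) - 1)
        scores, partners = affinity.topk(k, dim=-1)     # k likely interactors per agent
        return scores, partners


# Usage: for each of 5 agents, keep the 3 partners with the highest future affinity.
pairing = FutureAffinityPairing(feat_dim=128, k=3)
scores, partners = pairing(torch.randn(5, 128))
print(partners.shape)  # torch.Size([5, 3])
```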
Related papers
- Gated Temporal Diffusion for Stochastic Long-Term Dense Anticipation [17.4088244981231]
Long-term action anticipation has become an important task for many applications such as autonomous driving and human-robot interaction.
We propose a novel Gated Temporal Diffusion (GTD) network that models the uncertainty of both the observation and the future predictions.
Our model achieves state-of-the-art results on the Breakfast, Assembly101 and 50Salads datasets in both stochastic and deterministic settings.
arXiv Detail & Related papers (2024-07-16T17:48:05Z)
- Neural Interaction Energy for Multi-Agent Trajectory Prediction [55.098754835213995]
We introduce a framework called Multi-Agent Trajectory prediction via neural interaction Energy (MATE)
MATE assesses the interactive motion of agents by employing neural interaction energy.
To bolster temporal stability, we introduce two constraints: inter-agent interaction constraint and intra-agent motion constraint.
arXiv Detail & Related papers (2024-04-25T12:47:47Z)
- Interpretable Long Term Waypoint-Based Trajectory Prediction Model [1.4778851751964937]
We study the impact of adding a long-term goal on the performance of a trajectory prediction framework.
We present an interpretable long term waypoint-driven prediction framework (WayDCM)
arXiv Detail & Related papers (2023-12-11T09:10:22Z)
- FFINet: Future Feedback Interaction Network for Motion Forecasting [46.247396728154904]
We propose a novel Future Feedback Interaction Network (FFINet) to aggregate features of the current observations and potential future interactions for trajectory prediction.
Our FFINet achieves the state-of-the-art performance on Argoverse 1 and Argoverse 2 motion forecasting benchmarks.
arXiv Detail & Related papers (2023-11-08T07:57:29Z)
- BiFF: Bi-level Future Fusion with Polyline-based Coordinate for Interactive Trajectory Prediction [23.895217477379653]
We propose Bi-level Future Fusion (BiFF) to capture future interactions between interactive agents.
Concretely, BiFF fuses the high-level future intentions followed by low-level future behaviors.
BiFF achieves state-of-the-art performance on the interactive prediction benchmark of the Waymo Open Motion Dataset.
arXiv Detail & Related papers (2023-06-25T08:11:43Z)
- FJMP: Factorized Joint Multi-Agent Motion Prediction over Learned Directed Acyclic Interaction Graphs [8.63314005149641]
We propose FJMP, a Factorized Joint Motion Prediction framework for interactive driving scenarios.
FJMP produces more accurate and scene-consistent joint trajectory predictions than non-factorized approaches.
FJMP ranks 1st on the multi-agent test leaderboard of the INTERACTION dataset.
arXiv Detail & Related papers (2022-11-27T18:59:17Z)
- Dyadic Human Motion Prediction [119.3376964777803]
We introduce a motion prediction framework that explicitly reasons about the interactions of two observed subjects.
Specifically, we achieve this by introducing a pairwise attention mechanism that models the mutual dependencies in the motion history of the two subjects (a minimal sketch of this idea appears after this list).
This allows us to preserve the long-term motion dynamics in a more realistic way and more robustly predict unusual and fast-paced movements.
arXiv Detail & Related papers (2021-12-01T10:30:40Z)
- You Mostly Walk Alone: Analyzing Feature Attribution in Trajectory Prediction [52.442129609979794]
Recent deep learning approaches for trajectory prediction show promising performance.
It remains unclear which features such black-box models actually learn to use for making predictions.
This paper proposes a procedure that quantifies the contributions of different cues to model performance.
arXiv Detail & Related papers (2021-10-11T14:24:15Z)
- End-to-end Contextual Perception and Prediction with Interaction Transformer [79.14001602890417]
We tackle the problem of detecting objects in 3D and forecasting their future motion in the context of self-driving.
To capture their spatial-temporal dependencies, we propose a recurrent neural network with a novel Transformer architecture.
Our model can be trained end-to-end, and runs in real-time.
arXiv Detail & Related papers (2020-08-13T14:30:12Z)
- Implicit Latent Variable Model for Scene-Consistent Motion Forecasting [78.74510891099395]
In this paper, we aim to learn scene-consistent motion forecasts of complex urban traffic directly from sensor data.
We model the scene as an interaction graph and employ powerful graph neural networks to learn a distributed latent representation of the scene.
arXiv Detail & Related papers (2020-07-23T14:31:25Z)
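Several entries above model agent-agent dependence with attention. For the Dyadic Human Motion Prediction entry in particular, the sketch below shows one plausible form of pairwise (cross-) attention between two subjects' motion histories, built on standard PyTorch multi-head attention. It is an illustrative assumption rather than the paper's architecture; all names, shapes, and the shared attention weights across directions are hypothetical simplifications.

```python
# Illustrative pairwise cross-attention between two subjects' motion histories
# (assumed structure; not the Dyadic Human Motion Prediction implementation).
import torch
import torch.nn as nn


class PairwiseMotionAttention(nn.Module):
    """Each subject attends to the other's motion history to model mutual dependencies."""

    def __init__(self, feat_dim: int = 64, num_heads: int = 4):
        super().__init__()
        # One attention module shared for both directions (a simplification).
        self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)

    def forward(self, hist_a: torch.Tensor, hist_b: torch.Tensor):
        # hist_a, hist_b: (batch, timesteps, feat_dim) encoded motion histories.
        a_attends_b, _ = self.cross_attn(hist_a, hist_b, hist_b)  # A queries B
        b_attends_a, _ = self.cross_attn(hist_b, hist_a, hist_a)  # B queries A
        # Residual connections keep each subject's own dynamics.
        return hist_a + a_attends_b, hist_b + b_attends_a


# Usage with random features for two subjects observed over 30 timesteps.
attn = PairwiseMotionAttention(feat_dim=64, num_heads=4)
out_a, out_b = attn(torch.randn(2, 30, 64), torch.randn(2, 30, 64))
print(out_a.shape, out_b.shape)  # torch.Size([2, 30, 64]) twice
```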