PRANK: motion Prediction based on RANKing
- URL: http://arxiv.org/abs/2010.12007v2
- Date: Tue, 15 Jun 2021 09:39:33 GMT
- Title: PRANK: motion Prediction based on RANKing
- Authors: Yuriy Biktairov, Maxim Stebelev, Irina Rudenko, Oleh Shliazhko, Boris
Yangel
- Abstract summary: Predicting the motion of agents is one of the most critical problems in the autonomous driving domain.
We introduce the PRANK method, which produces the conditional distribution of the agent's trajectories that are plausible in the given scene.
We evaluate PRANK on the in-house and Argoverse datasets, where it shows competitive results.
- Score: 4.4861975043227345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predicting the motion of agents such as pedestrians or human-driven vehicles
is one of the most critical problems in the autonomous driving domain. The
overall safety of driving and the comfort of a passenger directly depend on its
successful solution. The motion prediction problem also remains one of the most
challenging problems in autonomous driving engineering, mainly due to the high
variance of an agent's possible future behavior in a given situation. The two
phenomena responsible for the said variance are the multimodality caused by the
uncertainty of the agent's intent (e.g., turn right or move forward) and
uncertainty in the realization of a given intent (e.g., which lane to turn
into). To be useful within a real-time autonomous driving pipeline, a motion
prediction system must provide efficient ways to describe and quantify this
uncertainty, such as computing posterior modes and their probabilities or
estimating density at the point corresponding to a given trajectory. It also
should not put substantial density on physically impossible trajectories, as
they can confuse the system processing the predictions. In this paper, we
introduce the PRANK method, which satisfies these requirements. PRANK takes
rasterized bird's-eye-view images of the agent's surroundings as input and
extracts features of the scene with a convolutional neural network. It then
produces the conditional distribution of the agent's trajectories that are
plausible in the given scene.
The key contribution of PRANK is a way to represent that distribution using
nearest-neighbor methods in latent trajectory space, which allows for efficient
inference in real time. We evaluate PRANK on the in-house and Argoverse
datasets, where it shows competitive results.
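The core inference idea described above — ranking a bank of candidate trajectories against a scene embedding via nearest-neighbor search in a shared latent space — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the trajectory bank, the embedding dimension, and the random scene embedding (standing in for the CNN encoder's output) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: K candidate trajectories embedded in a shared
# D-dimensional latent space, and a scene embedding that a CNN encoder
# would produce in the same space.
K, D = 1000, 16
trajectory_bank = rng.normal(size=(K, D))   # latent codes of candidate trajectories
scene_embedding = rng.normal(size=(D,))     # placeholder for CNN(scene)

def top_k_modes(scene_emb, bank, k=6):
    """Rank all candidates by dot-product similarity to the scene embedding
    and return the top-k indices with softmax probabilities over those k."""
    scores = bank @ scene_emb                # one similarity score per candidate
    top = np.argsort(scores)[::-1][:k]       # k nearest neighbors in latent space
    logits = scores[top]
    probs = np.exp(logits - logits.max())    # numerically stable softmax
    probs /= probs.sum()
    return top, probs

modes, probs = top_k_modes(scene_embedding, trajectory_bank)
```

Because ranking reduces to a matrix-vector product plus a top-k selection, this style of inference stays cheap enough for a real-time pipeline, which is the property the abstract emphasizes.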
Related papers
- Building Real-time Awareness of Out-of-distribution in Trajectory Prediction for Autonomous Vehicles [8.398221841050349]
Trajectory prediction describes the motions of surrounding moving obstacles for an autonomous vehicle.
In this paper, we aim to establish real-time awareness of out-of-distribution in trajectory prediction for autonomous vehicles.
Our solutions are lightweight and can handle the occurrence of out-of-distribution at any time during trajectory prediction inference.
arXiv Detail & Related papers (2024-09-25T18:43:58Z) - QuAD: Query-based Interpretable Neural Motion Planning for Autonomous Driving [33.609780917199394]
A self-driving vehicle must understand its environment to determine appropriate actions.
Traditional systems rely on object detection to find agents in the scene.
We present a unified, interpretable, and efficient autonomy framework that moves away from cascading modules that first perceive occupancy and then plan.
arXiv Detail & Related papers (2024-04-01T21:11:43Z) - Implicit Occupancy Flow Fields for Perception and Prediction in
Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory prediction of the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
arXiv Detail & Related papers (2023-08-02T23:39:24Z) - Stochastic Trajectory Prediction via Motion Indeterminacy Diffusion [88.45326906116165]
We present a new framework to formulate the trajectory prediction task as a reverse process of motion indeterminacy diffusion (MID).
We encode the history behavior information and the social interactions as a state embedding and devise a Transformer-based diffusion model to capture the temporal dependencies of trajectories.
Experiments on the human trajectory prediction benchmarks including the Stanford Drone and ETH/UCY datasets demonstrate the superiority of our method.
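The reverse-diffusion formulation in the MID summary above can be illustrated with a toy denoising loop: start a trajectory as pure Gaussian noise and repeatedly subtract predicted noise under a DDPM-style schedule. Everything here is a hypothetical sketch — the linear schedule, the step count, and the zero-valued noise predictor are placeholders for the paper's learned Transformer conditioned on history and social context.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear noise schedule over T diffusion steps (hypothetical values).
T = 50
betas = np.linspace(1e-4, 0.05, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoise_step(x_t, t, eps_hat):
    """One DDPM-style reverse step: remove predicted noise eps_hat from the
    noisy trajectory x_t at step t, adding fresh noise except at t == 0."""
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * eps_hat) / np.sqrt(alphas[t])
    if t > 0:
        mean = mean + np.sqrt(betas[t]) * rng.normal(size=x_t.shape)
    return mean

# Start a 12-step 2D trajectory from pure indeterminacy (Gaussian noise)
# and walk the reverse chain; a real model would predict eps_hat with a
# Transformer rather than the zero placeholder used here.
x = rng.normal(size=(12, 2))
for t in reversed(range(T)):
    eps_hat = np.zeros_like(x)  # placeholder for the learned noise predictor
    x = denoise_step(x, t, eps_hat)
```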
arXiv Detail & Related papers (2022-03-25T16:59:08Z) - Trajectory Forecasting from Detection with Uncertainty-Aware Motion
Encoding [121.66374635092097]
Trajectories obtained from object detection and tracking are inevitably noisy.
We propose a trajectory predictor directly based on detection results without relying on explicitly formed trajectories.
arXiv Detail & Related papers (2022-02-03T09:09:56Z) - SGCN: Sparse Graph Convolution Network for Pedestrian Trajectory
Prediction [64.16212996247943]
We present a Sparse Graph Convolution Network(SGCN) for pedestrian trajectory prediction.
Specifically, the SGCN explicitly models sparse directed interactions with a sparse directed spatial graph to capture adaptive interactions between pedestrians.
Visualizations indicate that our method can capture adaptive interactions between pedestrians and their effective motion tendencies.
arXiv Detail & Related papers (2021-04-04T03:17:42Z) - Pedestrian Motion State Estimation From 2D Pose [3.189006905282788]
Traffic violations and the flexible, changeable nature of pedestrians make it more difficult to predict pedestrian behavior or intention.
In combination with pedestrian motion state and other influencing factors, pedestrian intention can be predicted to avoid unnecessary accidents.
This paper verifies the proposed algorithm on the JAAD public dataset, and the accuracy is improved by 11.6% compared with the existing method.
arXiv Detail & Related papers (2021-02-27T07:00:06Z) - Attentional-GCNN: Adaptive Pedestrian Trajectory Prediction towards
Generic Autonomous Vehicle Use Cases [10.41902340952981]
We propose a novel Graph Convolutional Neural Network (GCNN)-based approach, Attentional-GCNN, which aggregates information of implicit interaction between pedestrians in a crowd by assigning attention weight in edges of the graph.
We show our proposed method improves on the state of the art by 10% in Average Displacement Error (ADE) and 12% in Final Displacement Error (FDE), with fast inference speeds.
arXiv Detail & Related papers (2020-11-23T03:13:26Z) - The Importance of Prior Knowledge in Precise Multimodal Prediction [71.74884391209955]
Roads have well defined geometries, topologies, and traffic rules.
In this paper we propose to incorporate structured priors as a loss function.
We demonstrate the effectiveness of our approach on real-world self-driving datasets.
arXiv Detail & Related papers (2020-06-04T03:56:11Z) - TPNet: Trajectory Proposal Network for Motion Prediction [81.28716372763128]
Trajectory Proposal Network (TPNet) is a novel two-stage motion prediction framework.
TPNet first generates a candidate set of future trajectories as hypothesis proposals, then makes the final predictions by classifying and refining the proposals.
Experiments on four large-scale trajectory prediction datasets, show that TPNet achieves the state-of-the-art results both quantitatively and qualitatively.
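TPNet's two-stage pipeline — generate a candidate set of trajectory proposals, then classify and refine them — can be sketched as below. The proposal generator (random endpoints with straight-line interpolation), the scoring rule (endpoint distance to a hypothetical goal point), and the refinement step are all crude stand-ins for the paper's learned components.

```python
import numpy as np

rng = np.random.default_rng(1)

def propose(last_point, n=20, horizon=30):
    """Stage 1: hypothesize n candidate endpoints and interpolate straight-line
    proposals toward them (a toy stand-in for TPNet's proposal generator)."""
    endpoints = last_point + rng.normal(scale=5.0, size=(n, 2))
    t = np.linspace(0.0, 1.0, horizon)[None, :, None]
    return last_point + t * (endpoints[:, None, :] - last_point)

def classify_and_refine(proposals, goal):
    """Stage 2: score each proposal (here: negative endpoint distance to a
    hypothetical goal) and nudge the best one toward the goal as a toy
    refinement."""
    scores = -np.linalg.norm(proposals[:, -1] - goal, axis=1)
    best = int(np.argmax(scores))
    refined = proposals[best] + 0.1 * (goal - proposals[best][-1])
    return refined, scores

last = np.array([0.0, 0.0])   # last observed position (hypothetical)
goal = np.array([3.0, 4.0])   # hypothetical goal used only for scoring
trajs = propose(last)
refined, scores = classify_and_refine(trajs, goal)
```

The propose-then-refine split is what makes the framework two-stage: a cheap hypothesis set bounds the output space, and the second stage only has to rank and polish rather than regress a trajectory from scratch.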
arXiv Detail & Related papers (2020-04-26T00:01:49Z) - Scenario-Transferable Semantic Graph Reasoning for Interaction-Aware
Probabilistic Prediction [29.623692599892365]
Accurately predicting the possible behaviors of traffic participants is an essential capability for autonomous vehicles.
We propose a novel generic representation for various driving environments by taking advantage of semantics and domain knowledge.
arXiv Detail & Related papers (2020-04-07T00:34:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.