Remember Intentions: Retrospective-Memory-based Trajectory Prediction
- URL: http://arxiv.org/abs/2203.11474v1
- Date: Tue, 22 Mar 2022 05:59:33 GMT
- Title: Remember Intentions: Retrospective-Memory-based Trajectory Prediction
- Authors: Chenxin Xu, Weibo Mao, Wenjun Zhang, Siheng Chen
- Abstract summary: We propose MemoNet, an instance-based approach that predicts the movement intentions of agents by looking for similar scenarios in the training data.
Experiments show that the proposed MemoNet improves the FDE by 20.3%/10.2%/28.3% over the previous best method on the SDD/ETH-UCY/NBA datasets.
- Score: 31.25007169374468
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To realize trajectory prediction, most previous methods adopt a
parameter-based approach, which encodes all seen past-future instance pairs
into model parameters. However, the model parameters then derive from all
seen instances, so a large number of irrelevant seen instances may also be
involved in predicting the current situation, degrading performance. To
provide a more explicit link between the current situation and the seen
instances, we imitate the mechanism of retrospective memory in neuropsychology
and propose MemoNet, an instance-based approach that predicts the movement
intentions of agents by looking for similar scenarios in the training data. In
MemoNet, we design a pair of memory banks that explicitly store representative
instances from the training set, acting as the prefrontal cortex in the neural
system, and a trainable memory addresser that adaptively matches the current
situation with similar instances in the memory bank, acting like the basal
ganglia. During prediction, MemoNet recalls previous memory by using the memory
addresser to index related instances in the memory bank. We further propose a
two-step trajectory prediction system, where the first step leverages
MemoNet to predict the destination and the second step completes the whole
trajectory according to the predicted destination. Experiments show that the
proposed MemoNet improves the FDE by 20.3%/10.2%/28.3% over the previous best
method on the SDD/ETH-UCY/NBA datasets. Experiments also show that our MemoNet
can trace back to specific instances during prediction, improving
interpretability.
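The two-step mechanism described in the abstract (address a memory bank of stored instances to retrieve a destination, then complete the trajectory toward it) can be sketched roughly as follows. Everything here is illustrative, not the paper's actual method: the encodings are toy vectors, and a cosine-similarity addresser and linear completion stand in for MemoNet's learned networks.

```python
import numpy as np

def build_memory_bank(past_encodings, destinations):
    """Store representative training instances as paired key/value arrays."""
    return np.asarray(past_encodings), np.asarray(destinations)

def address_memory(bank, query, k=3):
    """Toy addresser: score the current encoding against stored keys by
    cosine similarity and return the k most similar destinations."""
    keys, values = bank
    sims = keys @ query / (np.linalg.norm(keys, axis=1)
                           * np.linalg.norm(query) + 1e-9)
    top = np.argsort(-sims)[:k]
    return values[top]

def predict_trajectory(last_pos, destination, steps=4):
    """Step 2: fill in the trajectory toward the retrieved destination
    (linear interpolation stands in for a learned decoder)."""
    return np.linspace(last_pos, destination, steps + 1)[1:]

# Three stored (past-encoding, destination) pairs, then a query.
bank = build_memory_bank([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]],
                         [[5.0, 0.0], [0.0, 5.0], [4.0, 4.0]])
dests = address_memory(bank, np.array([0.9, 0.1]), k=1)
traj = predict_trajectory(np.array([1.0, 0.0]), dests[0])
print(dests[0])  # destination of the most similar stored instance
```

Because retrieval indexes concrete stored instances, a prediction can be traced back to the training scenarios that produced it, which is the interpretability property the abstract highlights.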
Related papers
- Quantifying Memory Utilization with Effective State-Size [73.52115209375343]
We develop a measure of "memory utilization".
This metric is tailored to the fundamental class of systems with input-invariant and input-varying linear operators.
arXiv Detail & Related papers (2025-04-28T08:12:30Z)
- Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction [11.99118889081249]
We propose a Pattern-Matching Dynamic Memory Network (PM-DMNet) for traffic prediction.
PM-DMNet employs a novel dynamic memory network to capture traffic pattern features with only O(N) complexity.
Experiments show the proposed model outperforms existing benchmark methods.
arXiv Detail & Related papers (2024-08-12T15:12:30Z)
- Causal Estimation of Memorisation Profiles [58.20086589761273]
Understanding memorisation in language models has practical and societal implications.
Memorisation is the causal effect of training with an instance on the model's ability to predict that instance.
This paper proposes a new, principled, and efficient method to estimate memorisation based on the difference-in-differences design from econometrics.
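The difference-in-differences design mentioned here has a simple arithmetic core: the effect of training on an instance is the change in performance for models that saw it, minus the change for models that did not. A minimal sketch, with made-up log-likelihood numbers purely for illustration (the paper's actual estimator is more involved):

```python
def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Memorisation estimate: (change for models trained on the instance)
    minus (change for models not trained on it)."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical log-likelihoods of one instance under four model snapshots:
effect = diff_in_diff(treated_before=-4.0, treated_after=-1.0,
                      control_before=-4.1, control_after=-3.5)
print(effect)  # roughly 2.4: improvement beyond the control group's drift
```

Subtracting the control group's change removes improvement that would have happened anyway (e.g. general training progress), isolating the causal effect of seeing the instance.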
arXiv Detail & Related papers (2024-06-06T17:59:09Z)
- Uncovering the human motion pattern: Pattern Memory-based Diffusion Model for Trajectory Prediction [45.77348842004666]
Motion Pattern Priors Memory Network is a memory-based method to uncover latent motion patterns in human behavior.
We introduce an addressing mechanism to retrieve the matched pattern and the potential target distributions for each prediction from the memory bank.
Experiments validate the effectiveness of our approach, achieving state-of-the-art trajectory prediction accuracy.
arXiv Detail & Related papers (2024-01-05T17:39:52Z)
- Memory-and-Anticipation Transformer for Online Action Understanding [52.24561192781971]
We propose a novel memory-anticipation-based paradigm to model an entire temporal structure, including the past, present, and future.
We present Memory-and-Anticipation Transformer (MAT), a memory-anticipation-based approach, to address the online action detection and anticipation tasks.
arXiv Detail & Related papers (2023-08-15T17:34:54Z)
- Memory-Based Meta-Learning on Non-Stationary Distributions [29.443692147512742]
Memory-based meta-learning is a technique for approximating Bayes-optimal predictors.
We show that memory-based neural models, including Transformers, LSTMs, and RNNs can learn to accurately approximate known Bayes-optimal algorithms.
arXiv Detail & Related papers (2023-02-06T19:08:59Z)
- On the Relationship Between Variational Inference and Auto-Associative Memory [68.8204255655161]
We study how different neural network approaches to variational inference can be applied in this framework.
We evaluate the obtained algorithms on the CIFAR10 and CLEVR image datasets and compare them with other associative memory models.
arXiv Detail & Related papers (2022-10-14T14:18:47Z)
- A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing methods has been the use of a memory of exemplars, which overcomes the issue of catastrophic forgetting by saving a subset of past data into a memory bank and utilizing it to prevent forgetting when training future tasks.
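The exemplar-memory idea described above reduces to two operations: keep a small subset of each seen class, and mix those exemplars back into future training batches. A toy rehearsal sketch (illustrative only; real methods select exemplars with strategies such as herding and train a neural classifier, neither of which is shown here):

```python
import random

memory_bank = []      # stored (example, label) pairs from past tasks
MEM_PER_CLASS = 2     # memory budget per class

def add_exemplars(task_data):
    """After a task, keep a few examples of each class for later rehearsal."""
    by_class = {}
    for x, y in task_data:
        by_class.setdefault(y, []).append((x, y))
    for y, items in by_class.items():
        memory_bank.extend(items[:MEM_PER_CLASS])

def training_batch(current_task, k=2):
    """Mix current-task data with replayed exemplars to limit forgetting."""
    replay = random.sample(memory_bank, min(k, len(memory_bank)))
    return list(current_task) + replay

add_exemplars([("a1", 0), ("a2", 0), ("a3", 0), ("b1", 1)])
batch = training_batch([("c1", 2)], k=2)
print(len(memory_bank))  # 3: two exemplars of class 0, one of class 1
```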
arXiv Detail & Related papers (2022-10-10T08:27:28Z)
- Joint Forecasting of Panoptic Segmentations with Difference Attention [72.03470153917189]
We study a new panoptic segmentation forecasting model that jointly forecasts all object instances in a scene.
We evaluate the proposed model on the Cityscapes and AIODrive datasets.
arXiv Detail & Related papers (2022-04-14T17:59:32Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- MANTRA: Memory Augmented Networks for Multiple Trajectory Prediction [26.151761714896118]
We address the problem of multimodal trajectory prediction exploiting a Memory Augmented Neural Network.
Our method learns past and future trajectory embeddings using recurrent neural networks and exploits an associative external memory to store and retrieve such embeddings.
Trajectory prediction is then performed by decoding in-memory future encodings conditioned with the observed past.
arXiv Detail & Related papers (2020-06-05T09:49:59Z)
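The store-and-retrieve step at the heart of the MANTRA summary above can be sketched as a simple associative memory keyed by past embeddings. This is a hypothetical illustration: toy vectors and nearest-neighbor lookup replace the paper's recurrent encoders and decoder, and all names are made up.

```python
import numpy as np

memory = []  # list of (past_embedding, future_encoding) pairs

def write(past_emb, future_enc):
    """Store a past embedding as the key and its future encoding as the value."""
    memory.append((np.asarray(past_emb), np.asarray(future_enc)))

def read(query, k=2):
    """Retrieve the k stored future encodings whose past keys lie closest
    (L2 distance) to the observed-past query."""
    q = np.asarray(query)
    ranked = sorted(memory, key=lambda kv: np.linalg.norm(kv[0] - q))
    return [v for _, v in ranked[:k]]

write([0.0, 0.0], [1.0, 1.0])
write([1.0, 0.0], [2.0, 0.0])
write([0.0, 1.0], [0.0, 2.0])
futures = read([0.9, 0.1], k=1)
print(futures[0])  # future encoding stored under the nearest past key
```

Returning several retrieved futures (k > 1) is what makes this style of memory a natural fit for multimodal prediction: each retrieved encoding can seed one candidate trajectory.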
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.