MetaTra: Meta-Learning for Generalized Trajectory Prediction in Unseen
Domain
- URL: http://arxiv.org/abs/2402.08221v1
- Date: Tue, 13 Feb 2024 05:25:37 GMT
- Title: MetaTra: Meta-Learning for Generalized Trajectory Prediction in Unseen
Domain
- Authors: Xiaohe Li, Feilong Huang, Zide Fan, Fangli Mou, Yingyan Hou, Chen
Qian, Lijie Wen
- Abstract summary: Trajectory prediction has garnered widespread attention in different fields, such as autonomous driving and robotic navigation.
We propose a novel meta-learning-based trajectory prediction method called MetaTra.
We show that MetaTra not only surpasses other state-of-the-art methods but also exhibits plug-and-play capabilities.
- Score: 18.8641856367611
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trajectory prediction has garnered widespread attention in different fields,
such as autonomous driving and robotic navigation. However, due to the
significant variations in trajectory patterns across different scenarios,
models trained in known environments often falter in unseen ones. To learn a
generalized model that can directly handle unseen domains without requiring any
model updating, we propose a novel meta-learning-based trajectory prediction
method called MetaTra. This approach incorporates a Dual Trajectory Transformer
(Dual-TT), which enables a thorough exploration of individual intentions and
the interactions within group motion patterns in diverse scenarios. Building on
this, we propose a meta-learning framework to simulate the generalization
process between source and target domains. Furthermore, to enhance the
stability of our prediction outcomes, we propose a Serial and Parallel Training
(SPT) strategy along with a feature augmentation method named MetaMix.
Experimental results on several real-world datasets confirm that MetaTra not
only surpasses other state-of-the-art methods but also exhibits plug-and-play
capabilities, particularly in the realm of domain generalization.
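
The abstract describes the training recipe only at a high level. Below is a minimal, hypothetical PyTorch sketch of that general idea: the source scenes are split into meta-train domains and a held-out meta-test domain to simulate the source-to-target shift, one first-order MLDG-style episode is run per step, and a mixup-style helper stands in for the MetaMix feature augmentation (shown standalone, since where it is applied inside the network is not specified here). The data layout, hyperparameters, and all function names are assumptions; this is not the authors' Dual-TT or SPT implementation.

```python
# A minimal sketch of episodic domain-generalization training in the spirit of
# the abstract; placeholders throughout, not the authors' released code.
import copy
import random
import torch

def metamix(feat_a, feat_b, alpha=0.5):
    """Mixup-style interpolation between features from two domains
    (a stand-in for MetaMix; the mixing site in the network is assumed)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().to(feat_a.device)
    return lam * feat_a + (1.0 - lam) * feat_b

def meta_episode(model, domains, loss_fn, outer_opt, inner_lr=1e-3, beta=1.0):
    """One first-order MLDG-style episode: hold out one source domain as a
    simulated target, adapt a copy of `model` on the rest, and back-propagate
    both the meta-train and the held-out meta-test losses into `model`.

    `domains` is assumed to be a list of (observed, future) tensor batches,
    one entry per source scenario."""
    meta_test = random.choice(domains)
    meta_train = [d for d in domains if d is not meta_test]

    outer_opt.zero_grad()

    # Meta-train loss on the retained source domains.
    for obs, future in meta_train:
        (loss_fn(model(obs), future) / len(meta_train)).backward()

    # First-order inner step: copy the model and nudge the copy with the
    # gradients just computed, imitating adaptation to the meta-train domains.
    fast = copy.deepcopy(model)
    with torch.no_grad():
        for p, q in zip(model.parameters(), fast.parameters()):
            if p.grad is not None:
                q.sub_(p.grad, alpha=inner_lr)
    fast.zero_grad(set_to_none=True)

    # Meta-test loss on the held-out domain, evaluated through the adapted
    # copy to simulate deployment on an unseen target domain.
    obs, future = meta_test
    (beta * loss_fn(fast(obs), future)).backward()

    # First-order approximation: fold the adapted copy's gradients back into
    # the original parameters before the outer update.
    with torch.no_grad():
        for p, q in zip(model.parameters(), fast.parameters()):
            if q.grad is not None:
                p.grad = q.grad if p.grad is None else p.grad + q.grad

    outer_opt.step()
```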
Related papers
- Context-Enhanced Multi-View Trajectory Representation Learning: Bridging the Gap through Self-Supervised Models [27.316692263196277]
MVTraj is a novel multi-view modeling method for trajectory representation learning.
It integrates diverse contextual knowledge, from GPS data to road networks and points of interest, to provide a more comprehensive understanding of trajectory data.
Extensive experiments on real-world datasets demonstrate that MVTraj significantly outperforms existing baselines in tasks associated with various spatial views.
arXiv Detail & Related papers (2024-10-17T03:56:12Z)
- A Practitioner's Guide to Continual Multimodal Pretraining [83.63894495064855]
Multimodal foundation models serve numerous applications at the intersection of vision and language.
To keep models updated, research into continual pretraining mainly explores scenarios with either infrequent, indiscriminate updates on large-scale new data, or frequent, sample-level updates.
We introduce FoMo-in-Flux, a continual multimodal pretraining benchmark with realistic compute constraints and practical deployment requirements.
arXiv Detail & Related papers (2024-08-26T17:59:01Z)
- GM-DF: Generalized Multi-Scenario Deepfake Detection [49.072106087564144]
Existing face forgery detection methods usually follow the paradigm of training models on a single domain.
In this paper, we elaborately investigate the generalization capacity of deepfake detection models when jointly trained on multiple face forgery detection datasets.
arXiv Detail & Related papers (2024-06-28T17:42:08Z)
- NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration [57.15811390835294]
This paper describes how we can train a single unified diffusion policy to handle both goal-directed navigation and goal-agnostic exploration.
We show that this unified policy results in better overall performance when navigating to visually indicated goals in novel environments.
Our experiments, conducted on a real-world mobile robot platform, show effective navigation in unseen environments in comparison with five alternative methods.
arXiv Detail & Related papers (2023-10-11T21:07:14Z)
- Meta-SysId: A Meta-Learning Approach for Simultaneous Identification and Prediction [34.83805457857297]
We propose Meta-SysId, a meta-learning approach for modeling systems that are governed by common but unknown laws and differentiated by their context.
We test Meta-SysId on regression, time-series prediction, model-based control, and real-world traffic prediction domains, empirically finding it outperforms or is competitive with meta-learning baselines.
arXiv Detail & Related papers (2022-06-01T18:04:22Z)
- LatentFormer: Multi-Agent Transformer-Based Interaction Modeling and Trajectory Prediction [12.84508682310717]
We propose LatentFormer, a transformer-based model for predicting future vehicle trajectories.
We evaluate the proposed method on the nuScenes benchmark dataset and show that our approach achieves state-of-the-art performance and improves upon trajectory metrics by up to 40%.
arXiv Detail & Related papers (2022-03-03T17:44:58Z)
- Improving the Generalization of Meta-learning on Unseen Domains via Adversarial Shift [3.1219977244201056]
We propose a model-agnostic shift layer to learn how to simulate the domain shift and generate pseudo tasks.
Based on the pseudo tasks, the meta-learning model can learn cross-domain meta-knowledge that generalizes well to unseen domains.
arXiv Detail & Related papers (2021-07-23T07:29:30Z)
- Learning to Generalize Unseen Domains via Memory-based Multi-Source Meta-Learning for Person Re-Identification [59.326456778057384]
We propose the Memory-based Multi-Source Meta-Learning framework to train a generalizable model for unseen domains.
We also present a meta batch normalization layer (MetaBN) to diversify meta-test features.
Experiments demonstrate that our M³L can effectively enhance the generalization ability of the model for unseen domains.
arXiv Detail & Related papers (2020-12-01T11:38:16Z)
- SMART: Simultaneous Multi-Agent Recurrent Trajectory Prediction [72.37440317774556]
We propose advances that address two key challenges in future trajectory prediction: multimodality in both training data and predictions, and constant-time inference regardless of the number of agents.
arXiv Detail & Related papers (2020-07-26T08:17:10Z)
- Learning to Learn with Variational Information Bottleneck for Domain Generalization [128.90691697063616]
Domain generalization models learn to generalize to previously unseen domains, but suffer from prediction uncertainty and domain shift.
We introduce a probabilistic meta-learning model for domain generalization, in which parameters shared across domains are modeled as distributions.
To deal with domain shift, we learn domain-invariant representations via the proposed meta variational information bottleneck (MetaVIB) principle, as sketched below.
arXiv Detail & Related papers (2020-07-15T12:05:52Z)
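
The MetaVIB entry above describes its mechanism only in a sentence. The following is a minimal, generic PyTorch sketch of the underlying variational-information-bottleneck idea: the representation is modeled as a Gaussian whose KL divergence to a standard normal prior is penalized alongside the task loss. Layer sizes, the Gaussian prior, the MSE task loss, and the name VIBHead are illustrative assumptions, not the MetaVIB model itself.

```python
# Generic variational information bottleneck head; a sketch of the idea, not
# the MetaVIB architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBHead(nn.Module):
    def __init__(self, in_dim, latent_dim, out_dim):
        super().__init__()
        self.mu = nn.Linear(in_dim, latent_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(in_dim, latent_dim)   # log-variance of q(z|x)
        self.predictor = nn.Linear(latent_dim, out_dim)

    def forward(self, features):
        mu, logvar = self.mu(features), self.logvar(features)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.predictor(z), mu, logvar

def vib_loss(pred, target, mu, logvar, beta=1e-3):
    """Task loss plus a KL term that squeezes nuisance (e.g. domain-specific)
    information out of the latent code: KL(q(z|x) || N(0, I))."""
    task = F.mse_loss(pred, target)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return task + beta * kl
```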