TrajFM: A Vehicle Trajectory Foundation Model for Region and Task Transferability
- URL: http://arxiv.org/abs/2408.15251v1
- Date: Fri, 9 Aug 2024 07:40:44 GMT
- Title: TrajFM: A Vehicle Trajectory Foundation Model for Region and Task Transferability
- Authors: Yan Lin, Tonglong Wei, Zeyu Zhou, Haomin Wen, Jilin Hu, Shengnan Guo, Youfang Lin, Huaiyu Wan
- Abstract summary: A desirable trajectory learning model should transfer between different regions and tasks without retraining.
We propose TrajFM, a vehicle trajectory foundation model that excels in both region and task transferability.
Experiments on two real-world vehicle trajectory datasets under various settings demonstrate the effectiveness of TrajFM.
- Score: 23.945687080195796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vehicle trajectories provide valuable movement information that supports various downstream tasks and powers real-world applications. A desirable trajectory learning model should transfer between different regions and tasks without retraining, thus improving computational efficiency and effectiveness with limited training data. However, a model's ability to transfer across regions is limited by the unique spatial features and POI arrangements of each region, which are closely linked to vehicle movement patterns and difficult to generalize. Additionally, achieving task transferability is challenging due to the differing generation schemes required for various tasks. Existing efforts towards transferability primarily involve learning embedding vectors for trajectories, which perform poorly in region transfer and still require retraining of prediction modules for task transfer. To address these challenges, we propose TrajFM, a vehicle trajectory foundation model that excels in both region and task transferability. For region transferability, we introduce STRFormer as the main learnable model within TrajFM. It integrates spatial, temporal, and POI modalities of trajectories to effectively manage variations in POI arrangements across regions and includes a learnable spatio-temporal Rotary position embedding module for handling spatial features. For task transferability, we propose a trajectory masking and recovery scheme. This scheme unifies the generation processes of various tasks into the masking and recovery of modalities and sub-trajectories, allowing TrajFM to be pre-trained once and transferred to different tasks without retraining. Experiments on two real-world vehicle trajectory datasets under various settings demonstrate the effectiveness of TrajFM. Code is available at https://anonymous.4open.science/r/TrajFM-30E4.
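The abstract's task-transferability claim rests on casting every downstream task as "mask some modalities or sub-trajectories, then recover them." A minimal sketch of that interface is below; the function and modality names (`make_task`, `MASK`, `spatial`/`temporal`/`poi`) are illustrative assumptions, not taken from the TrajFM code.

```python
# Hypothetical sketch of a masking-and-recovery task interface: a trajectory
# is a dict of per-point modalities, and each downstream task is expressed as
# "mask these entries, ask the model to recover them". All names are
# illustrative, not from the TrajFM repository.

MASK = None  # placeholder standing in for a learned mask token

def make_task(traj, task):
    """Return (masked_traj, targets) casting a downstream task as recovery."""
    masked = {k: list(v) for k, v in traj.items()}  # copy, keep input intact
    if task == "destination_prediction":
        # hide the spatial modality of the final point
        targets = {"spatial": masked["spatial"][-1]}
        masked["spatial"][-1] = MASK
    elif task == "arrival_time_estimation":
        # hide the temporal modality of the final point
        targets = {"temporal": masked["temporal"][-1]}
        masked["temporal"][-1] = MASK
    elif task == "trajectory_recovery":
        # hide a middle sub-trajectory across all modalities
        n = len(masked["spatial"])
        i, j = n // 3, 2 * n // 3
        targets = {k: masked[k][i:j] for k in masked}
        for k in masked:
            masked[k][i:j] = [MASK] * (j - i)
    else:
        raise ValueError(f"unknown task: {task}")
    return masked, targets
```

Under this framing, a model pre-trained once on random masking can serve all three tasks at inference time simply by changing which entries are masked, with no retraining of task-specific heads.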
Related papers
- Enhancing Cross-task Transfer of Large Language Models via Activation Steering [75.41750053623298]
Cross-task in-context learning offers a direct solution for transferring knowledge across tasks. We investigate whether cross-task transfer can be achieved via latent-space steering without parameter updates or input expansion. We propose a novel Cross-task Activation Steering Transfer framework that enables effective transfer by manipulating the model's internal activation states.
arXiv Detail & Related papers (2025-07-17T15:47:22Z) - TransferTraj: A Vehicle Trajectory Learning Model for Region and Task Transferability [32.061236211032714]
TransferTraj is a vehicle GPS trajectory learning model that excels in both region and task transferability. For region transferability, we introduce RTTE as the main learnable module within TransferTraj. For task transferability, we propose a task-transferable input-output scheme that unifies the input-output structure of different tasks.
arXiv Detail & Related papers (2025-05-19T03:40:34Z) - Solving Continual Offline RL through Selective Weights Activation on Aligned Spaces [52.649077293256795]
Continual offline reinforcement learning (CORL) has shown impressive ability in diffusion-based lifelong learning systems.
We propose Vector-Quantized Continual diffuser, named VQ-CD, to break the barrier of different spaces between various tasks.
arXiv Detail & Related papers (2024-10-21T07:13:45Z) - PTrajM: Efficient and Semantic-rich Trajectory Learning with Pretrained Trajectory-Mamba [22.622613591771152]
Vehicle trajectories provide crucial movement information for various real-world applications.
It is essential to develop a trajectory learning approach that efficiently extracts rich semantic information, including movement patterns and travel purposes.
We propose PTrajM, a novel method of efficient and semantic-rich vehicle trajectory learning.
arXiv Detail & Related papers (2024-08-09T07:48:51Z) - Pretrained Mobility Transformer: A Foundation Model for Human Mobility [11.713796525742405]
Pretrained Mobility Transformer (PMT) is a foundation model for human mobility.
arXiv Detail & Related papers (2024-05-29T00:07:22Z) - TrajCogn: Leveraging LLMs for Cognizing Movement Patterns and Travel Purposes from Trajectories [24.44686757572976]
Spatio-temporal trajectories are crucial in various data mining tasks.
It is important to develop a versatile trajectory learning method that performs different tasks with high accuracy.
This is challenging due to limitations in model capacity and the quality and scale of trajectory datasets.
arXiv Detail & Related papers (2024-05-21T02:33:17Z) - UVTM: Universal Vehicle Trajectory Modeling with ST Feature Domain Generation [34.918489559139715]
The Universal Vehicle Trajectory Model (UVTM) is designed to support different tasks based on incomplete or sparse trajectories.
To handle sparse trajectories effectively, UVTM is pre-trained by reconstructing densely sampled trajectories from sparsely sampled ones.
arXiv Detail & Related papers (2024-02-11T15:49:50Z) - Model-Based Reinforcement Learning with Multi-Task Offline Pretraining [59.82457030180094]
We present a model-based RL method that learns to transfer potentially useful dynamics and action demonstrations from offline data to a novel task.
The main idea is to use the world models not only as simulators for behavior learning but also as tools to measure the task relevance.
We demonstrate the advantages of our approach compared with the state-of-the-art methods in Meta-World and DeepMind Control Suite.
arXiv Detail & Related papers (2023-06-06T02:24:41Z) - Self-supervised Trajectory Representation Learning with Temporal Regularities and Travel Semantics [30.9735101687326]
Trajectory Representation Learning (TRL) is a powerful tool for spatial-temporal data analysis and management.
Existing TRL works usually treat trajectories as ordinary sequence data, while some important spatial-temporal characteristics, such as temporal regularities and travel semantics, are not fully exploited.
We propose a novel Self-supervised trajectory representation learning framework with TemporAl Regularities and Travel semantics, namely START.
arXiv Detail & Related papers (2022-11-17T13:14:47Z) - Curriculum Reinforcement Learning using Optimal Transport via Gradual Domain Adaptation [46.103426976842336]
Curriculum Reinforcement Learning (CRL) aims to create a sequence of tasks, starting from easy ones and gradually progressing to difficult ones.
In this work, we focus on framing CRL as a curriculum of interpolations between a source (auxiliary) and a target task distribution.
Inspired by the insights from gradual domain adaptation in semi-supervised learning, we create a natural curriculum by breaking down the potentially large task distributional shift in CRL into smaller shifts.
arXiv Detail & Related papers (2022-10-18T22:33:33Z) - Effective Adaptation in Multi-Task Co-Training for Unified Autonomous Driving [103.745551954983]
In this paper, we investigate the transfer performance of various types of self-supervised methods, including MoCo and SimCLR, on three downstream tasks.
We find that their performances are sub-optimal or even lag far behind the single-task baseline.
We propose a simple yet effective pretrain-adapt-finetune paradigm for general multi-task training.
arXiv Detail & Related papers (2022-09-19T12:15:31Z) - Omni-Training for Data-Efficient Deep Learning [80.28715182095975]
Recent advances reveal that a properly pre-trained model endows an important property: transferability.
A tight combination of pre-training and meta-training cannot achieve both kinds of transferability.
This motivates the proposed Omni-Training framework towards data-efficient deep learning.
arXiv Detail & Related papers (2021-10-14T16:30:36Z) - Frustratingly Easy Transferability Estimation [64.42879325144439]
We propose a simple, efficient, and effective transferability measure named TransRate.
TransRate measures the transferability as the mutual information between the features of target examples extracted by a pre-trained model and labels of them.
Despite its extraordinary simplicity, requiring only about 10 lines of code, TransRate performs remarkably well in extensive evaluations on 22 pre-trained models and 16 downstream tasks.
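The abstract describes TransRate as the mutual information between pre-trained features and target labels. A rough sketch of that idea follows, estimating entropies via a coding-rate (rate-distortion) proxy; the exact normalization (the `eps` scaling and the per-class centering) is our assumption and may differ from the paper's released code.

```python
import numpy as np

def coding_rate(Z, eps=1e-4):
    """Rate-distortion proxy for the entropy of features Z (shape n x d).
    The d / (n * eps) scaling is an assumption, not taken from the paper."""
    n, d = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps)) * Z.T @ Z)
    return 0.5 * logdet

def transrate(Z, y, eps=1e-4):
    """TransRate-style score: H(Z) - H(Z|Y), both estimated via coding rates.
    Higher means the features separate the target labels better."""
    Z = Z - Z.mean(axis=0)          # center features globally
    marginal = coding_rate(Z, eps)
    conditional = 0.0
    for c in np.unique(y):          # class-weighted conditional entropy
        Zc = Z[y == c]
        conditional += (len(Zc) / len(Z)) * coding_rate(Zc - Zc.mean(axis=0), eps)
    return marginal - conditional
```

On features where classes form tight, well-separated clusters the conditional term collapses and the score is large; on features where labels are uninformative the two terms nearly cancel, matching the intuition of a mutual-information-based transferability measure.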
arXiv Detail & Related papers (2021-06-17T10:27:52Z) - Unsupervised Transfer Learning for Spatiotemporal Predictive Networks [90.67309545798224]
We study how to transfer knowledge from a zoo of unsupervisedly learned models to another network.
Our motivation is that models are expected to understand complex dynamics from different sources.
Our approach yields significant improvements on three benchmarks for spatiotemporal prediction, and benefits the target task even from less relevant sources.
arXiv Detail & Related papers (2020-09-24T15:40:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.