TransferTraj: A Vehicle Trajectory Learning Model for Region and Task Transferability
- URL: http://arxiv.org/abs/2505.12672v1
- Date: Mon, 19 May 2025 03:40:34 GMT
- Title: TransferTraj: A Vehicle Trajectory Learning Model for Region and Task Transferability
- Authors: Tonglong Wei, Yan Lin, Zeyu Zhou, Haomin Wen, Jilin Hu, Shengnan Guo, Youfang Lin, Gao Cong, Huaiyu Wan
- Abstract summary: TransferTraj is a vehicle GPS trajectory learning model that excels in both region and task transferability. For region transferability, we introduce RTTE as the main learnable module within TransferTraj. For task transferability, we propose a task-transferable input-output scheme that unifies the input-output structure of different tasks.
- Score: 32.061236211032714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vehicle GPS trajectories provide valuable movement information that supports various downstream tasks and applications. A desirable trajectory learning model should be able to transfer across regions and tasks without retraining, avoiding the need to maintain multiple specialized models and subpar performance with limited training data. However, each region has its unique spatial features and contexts, which are reflected in vehicle movement patterns and difficult to generalize. Additionally, transferring across different tasks faces technical challenges due to the varying input-output structures required for each task. Existing efforts towards transferability primarily involve learning embedding vectors for trajectories, which perform poorly in region transfer and require retraining of prediction modules for task transfer. To address these challenges, we propose TransferTraj, a vehicle GPS trajectory learning model that excels in both region and task transferability. For region transferability, we introduce RTTE as the main learnable module within TransferTraj. It integrates spatial, temporal, POI, and road network modalities of trajectories to effectively manage variations in spatial context distribution across regions. It also introduces a TRIE module for incorporating relative information of spatial features and a spatial context MoE module for handling movement patterns in diverse contexts. For task transferability, we propose a task-transferable input-output scheme that unifies the input-output structure of different tasks into the masking and recovery of modalities and trajectory points. This approach allows TransferTraj to be pre-trained once and transferred to different tasks without retraining. Extensive experiments on three real-world vehicle trajectory datasets under task transfer, zero-shot, and few-shot region transfer validate TransferTraj's effectiveness.
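The task-unification idea in the abstract — casting different downstream tasks as masking and recovering modalities or trajectory points — can be sketched roughly as follows. This is an illustrative assumption based only on the abstract, not the paper's actual implementation; the function and task names are hypothetical:

```python
import numpy as np

def make_task_io(traj, task, drop_rate=0.3, rng=None):
    """Cast a downstream task as a mask-and-recover problem (illustrative).

    traj: dict mapping modality name -> array of shape (T, d_modality).
    Returns (masked_traj, mask) where mask[m][t] is True if the model
    must recover modality m at point t; masked positions are zeroed.
    """
    rng = rng or np.random.default_rng()
    T = next(iter(traj.values())).shape[0]
    mask = {m: np.zeros(T, dtype=bool) for m in traj}

    if task == "destination_prediction":
        # Hide the spatial modality of the final point; recover it.
        mask["spatial"][-1] = True
    elif task == "arrival_time_estimation":
        # Hide the temporal modality of the final point.
        mask["temporal"][-1] = True
    elif task == "trajectory_recovery":
        # Hide all modalities at randomly dropped points.
        dropped = rng.random(T) < drop_rate
        for m in mask:
            mask[m] |= dropped

    masked = {m: np.where(mask[m][:, None], 0.0, traj[m]) for m in traj}
    return masked, mask
```

Under this framing, one pre-trained recovery model could serve all three tasks: only the mask construction changes, not the model's input-output structure.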
Related papers
- FAST: Similarity-based Knowledge Transfer for Efficient Policy Learning [57.4737157531239]
Transfer Learning offers the potential to accelerate learning by transferring knowledge across tasks. It faces critical challenges such as negative transfer, domain adaptation and inefficiency in selecting solid source policies. In this work we challenge the key issues in TL to improve knowledge transfer, agents performance across tasks and reduce computational costs.
arXiv Detail & Related papers (2025-07-27T22:21:53Z) - Enhancing Cross-task Transfer of Large Language Models via Activation Steering [75.41750053623298]
Cross-task in-context learning offers a direct solution for transferring knowledge across tasks. We investigate whether cross-task transfer can be achieved via latent space steering without parameter updates or input expansion. We propose a novel Cross-task Activation Steering Transfer framework that enables effective transfer by manipulating the model's internal activation states.
arXiv Detail & Related papers (2025-07-17T15:47:22Z) - Context-Enhanced Multi-View Trajectory Representation Learning: Bridging the Gap through Self-Supervised Models [27.316692263196277]
MVTraj is a novel multi-view modeling method for trajectory representation learning.
It integrates diverse contextual knowledge, from GPS to road network and points-of-interest to provide a more comprehensive understanding of trajectory data.
Extensive experiments on real-world datasets demonstrate that MVTraj significantly outperforms existing baselines in tasks associated with various spatial views.
arXiv Detail & Related papers (2024-10-17T03:56:12Z) - PTrajM: Efficient and Semantic-rich Trajectory Learning with Pretrained Trajectory-Mamba [22.622613591771152]
Vehicle trajectories provide crucial movement information for various real-world applications.
It is essential to develop a trajectory learning approach that efficiently extracts rich semantic information, including movement and travel purposes.
We propose PTrajM, a novel method of efficient and semantic-rich vehicle trajectory learning.
arXiv Detail & Related papers (2024-08-09T07:48:51Z) - TrajFM: A Vehicle Trajectory Foundation Model for Region and Task Transferability [23.945687080195796]
A desirable trajectory learning model should transfer between different regions and tasks without retraining.
We propose TrajFM, a vehicle trajectory foundation model that excels in both region and task transferability.
Experiments on two real-world vehicle trajectory datasets under various settings demonstrate the effectiveness of TrajFM.
arXiv Detail & Related papers (2024-08-09T07:40:44Z) - UVTM: Universal Vehicle Trajectory Modeling with ST Feature Domain Generation [34.918489559139715]
A universal vehicle trajectory model could be applied to different tasks, removing the need to maintain multiple specialized models. We propose the Universal Vehicle Trajectory Model (UVTM), which can effectively adapt to different tasks without excessive retraining. UVTM is pre-trained by reconstructing dense, feature-complete trajectories from sparse, feature-incomplete counterparts.
arXiv Detail & Related papers (2024-02-11T15:49:50Z) - Self-supervised Trajectory Representation Learning with Temporal Regularities and Travel Semantics [30.9735101687326]
Trajectory Representation Learning (TRL) is a powerful tool for spatial-temporal data analysis and management.
Existing TRL works usually treat trajectories as ordinary sequence data, while some important spatial-temporal characteristics, such as temporal regularities and travel semantics, are not fully exploited.
We propose a novel Self-supervised trajectory representation learning framework with TemporAl Regularities and Travel semantics, namely START.
arXiv Detail & Related papers (2022-11-17T13:14:47Z) - Effective Adaptation in Multi-Task Co-Training for Unified Autonomous Driving [103.745551954983]
In this paper, we investigate the transfer performance of various types of self-supervised methods, including MoCo and SimCLR, on three downstream tasks.
We find that their transfer performance is sub-optimal or even lags far behind the single-task baseline.
We propose a simple yet effective pretrain-adapt-finetune paradigm for general multi-task training.
arXiv Detail & Related papers (2022-09-19T12:15:31Z) - FETA: A Benchmark for Few-Sample Task Transfer in Open-Domain Dialogue [70.65782786401257]
This work explores conversational task transfer by introducing FETA: a benchmark for few-sample task transfer in open-domain dialogue.
FETA contains two underlying sets of conversations upon which there are 10 and 7 tasks annotated, enabling the study of intra-dataset task transfer.
We utilize three popular language models and three learning algorithms to analyze the transferability between 132 source-target task pairs.
arXiv Detail & Related papers (2022-05-12T17:59:00Z) - Deep transfer learning for partial differential equations under conditional shift with DeepONet [0.0]
We propose a novel TL framework for task-specific learning under conditional shift with a deep operator network (DeepONet).
Inspired by the conditional embedding operator theory, we measure the statistical distance between the source domain and the target feature domain.
We show that the proposed TL framework enables fast and efficient multi-task operator learning, despite significant differences between the source and target domains.
arXiv Detail & Related papers (2022-04-20T23:23:38Z) - Omni-Training for Data-Efficient Deep Learning [80.28715182095975]
Recent advances reveal that a properly pre-trained model is endowed with an important property: transferability.
A tight combination of pre-training and meta-training cannot achieve both kinds of transferability.
This motivates the proposed Omni-Training framework towards data-efficient deep learning.
arXiv Detail & Related papers (2021-10-14T16:30:36Z) - Frustratingly Easy Transferability Estimation [64.42879325144439]
We propose a simple, efficient, and effective transferability measure named TransRate.
TransRate measures the transferability as the mutual information between the features of target examples extracted by a pre-trained model and their labels.
Despite its extraordinary simplicity (about 10 lines of code), TransRate performs remarkably well in extensive evaluations on 22 pre-trained models and 16 downstream tasks.
arXiv Detail & Related papers (2021-06-17T10:27:52Z) - Unsupervised Transfer Learning for Spatiotemporal Predictive Networks [90.67309545798224]
We study how to transfer knowledge from a zoo of unsupervisedly learned models towards another network.
Our motivation is that models are expected to understand complex dynamics from different sources.
Our approach yields significant improvements on three benchmarks for spatiotemporal prediction, and benefits the target network even from less relevant source models.
arXiv Detail & Related papers (2020-09-24T15:40:55Z) - What is being transferred in transfer learning? [51.6991244438545]
We show that when training from pre-trained weights, the model stays in the same basin in the loss landscape, and different instances of such a model are similar in feature space and close in parameter space.
arXiv Detail & Related papers (2020-08-26T17:23:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.