Towards A Foundation Model For Trajectory Intelligence
- URL: http://arxiv.org/abs/2312.00076v1
- Date: Thu, 30 Nov 2023 00:34:09 GMT
- Title: Towards A Foundation Model For Trajectory Intelligence
- Authors: Alameen Najjar
- Abstract summary: We present the results of training a large trajectory model using real-world user check-in data.
Our approach follows a pre-train and fine-tune paradigm, where a base model is pre-trained via masked trajectory modeling.
Our empirical analysis utilizes a comprehensive dataset of over 2 billion check-ins generated by more than 6 million users.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present the results of training a large trajectory model using real-world
user check-in data. Our approach follows a pre-train and fine-tune paradigm,
where a base model is pre-trained via masked trajectory modeling and then
adapted through fine-tuning for various downstream tasks. To address challenges
posed by noisy data and large spatial vocabularies, we propose a novel spatial
tokenization block. Our empirical analysis utilizes a comprehensive dataset of
over 2 billion check-ins generated by more than 6 million users. Through
fine-tuning on 3 downstream tasks, we demonstrate that our base model has
effectively learned valuable underlying patterns in raw data, enabling its
application in meaningful trajectory intelligence tasks. Despite some
limitations, we believe this work represents an important step forward in the
realization of a foundation model for trajectory intelligence.
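The abstract does not come with code, so the following is a minimal sketch of the described recipe, assuming check-ins have already been discretized into spatial tokens (e.g., geohash or H3 cell ids, standing in for the paper's spatial tokenization block); the vocabulary size, mask id, and model dimensions are illustrative assumptions, not the authors' settings.
```python
# Minimal sketch of masked trajectory modeling; not the authors' code.
# Assumption: each check-in is already mapped to a discrete spatial token
# (e.g., a geohash or H3 cell index), so a trajectory is a sequence of ints.
import torch
import torch.nn as nn

VOCAB_SIZE = 50_000  # assumed size of the spatial vocabulary
MASK_ID = 0          # assumed reserved id for the [MASK] token

def mask_trajectory(tokens: torch.Tensor, mask_prob: float = 0.15):
    """BERT-style masking: returns (inputs, labels); unmasked labels are -100."""
    mask = torch.rand(tokens.shape) < mask_prob
    labels = tokens.clone()
    labels[~mask] = -100              # ignored by cross-entropy below
    inputs = tokens.clone()
    inputs[mask] = MASK_ID
    return inputs, labels

class TrajectoryEncoder(nn.Module):
    """Tiny transformer encoder with a token-prediction head
    (positional encodings omitted for brevity)."""
    def __init__(self, d_model=256, n_heads=4, n_layers=4):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB_SIZE)

    def forward(self, x):
        return self.head(self.encoder(self.embed(x)))

# One pre-training step on a toy batch of 8 tokenized trajectories, length 32.
batch = torch.randint(1, VOCAB_SIZE, (8, 32))
inputs, labels = mask_trajectory(batch)
logits = TrajectoryEncoder()(inputs)
loss = nn.functional.cross_entropy(
    logits.view(-1, VOCAB_SIZE), labels.view(-1), ignore_index=-100
)
loss.backward()
```
Fine-tuning would then reuse the pre-trained encoder and swap the token-prediction head for a task-specific one, following the pre-train and fine-tune paradigm described above.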
Related papers
- RealTraj: Towards Real-World Pedestrian Trajectory Forecasting [10.332817296500533]
We propose a novel framework, RealTraj, that enhances the real-world applicability of trajectory forecasting.
We present Det2TrajFormer, a trajectory forecasting model that remains invariant to tracking noise by using past detections as inputs.
Unlike previous trajectory forecasting methods, our approach fine-tunes the model using only ground-truth detections, significantly reducing the need for costly person ID annotations.
arXiv Detail & Related papers (2024-11-26T12:35:26Z)
- Pre-Trained Model Recommendation for Downstream Fine-tuning [22.343011779348682]
Model selection aims to rank off-the-shelf pre-trained models and select the most suitable one for the new target task.
Existing model selection techniques are often constrained in their scope and tend to overlook the nuanced relationships between models and tasks.
We present a pragmatic framework, Fennec, delving into a diverse, large-scale model repository.
arXiv Detail & Related papers (2024-03-11T02:24:32Z)
- A Billion-scale Foundation Model for Remote Sensing Images [5.065947993017157]
Three key factors in pretraining foundation models are the pretraining method, the size of the pretraining dataset, and the number of model parameters.
This paper examines the effect of increasing the number of model parameters on the performance of foundation models in downstream tasks.
To the best of our knowledge, this is the first billion-scale foundation model in the remote sensing field.
arXiv Detail & Related papers (2023-04-11T13:33:45Z)
- Towards Efficient Task-Driven Model Reprogramming with Foundation Models [52.411508216448716]
Vision foundation models exhibit impressive power, benefiting from the extremely large model capacity and broad training data.
However, in practice, downstream scenarios may only support a small model due to the limited computational resources or efficiency considerations.
This brings a critical challenge for the real-world application of foundation models: one has to transfer the knowledge of a foundation model to the downstream task.
arXiv Detail & Related papers (2023-04-05T07:28:33Z)
- TRAK: Attributing Model Behavior at Scale [79.56020040993947]
We present TRAK (Tracing with the Randomly-projected After Kernel), a data attribution method that is both effective and computationally tractable for large-scale, differentiable models.
arXiv Detail & Related papers (2023-03-24T17:56:22Z)
- Hub-Pathway: Transfer Learning from A Hub of Pre-trained Models [89.44031286278347]
We propose a Hub-Pathway framework to enable knowledge transfer from a model hub.
The proposed framework can be trained end-to-end with the target task-specific loss.
Experimental results on computer vision and reinforcement learning tasks demonstrate that the framework achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-06-08T08:00:12Z)
- PreTraM: Self-Supervised Pre-training via Connecting Trajectory and Map [58.53373202647576]
We propose PreTraM, a self-supervised pre-training scheme for trajectory forecasting.
It consists of two parts: 1) Trajectory-Map Contrastive Learning, where we project trajectories and maps to a shared embedding space with cross-modal contrastive learning, and 2) Map Contrastive Learning, where we enhance map representation with contrastive learning on large quantities of HD-maps.
On top of popular baselines such as AgentFormer and Trajectron++, PreTraM boosts their performance by a relative 5.5% and 6.9% in FDE-10 on the challenging nuScenes dataset.
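As an illustration of part 1 of that scheme, here is a minimal sketch of a CLIP-style symmetric contrastive loss over paired trajectory and map embeddings; the encoders producing the embeddings, the batch pairing, and the temperature are assumptions rather than PreTraM's actual implementation.
```python
# Hedged sketch of a symmetric cross-modal contrastive (InfoNCE) loss
# between paired trajectory and map embeddings; not PreTraM's code.
import torch
import torch.nn.functional as F

def tmcl_loss(traj_emb, map_emb, temperature=0.07):
    """traj_emb, map_emb: (B, D) embeddings of B trajectory-map pairs."""
    t = F.normalize(traj_emb, dim=-1)
    m = F.normalize(map_emb, dim=-1)
    logits = t @ m.T / temperature                      # (B, B) similarities
    targets = torch.arange(t.size(0), device=t.device)  # i-th traj <-> i-th map
    # Symmetric InfoNCE: match trajectories to maps and maps to trajectories.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.T, targets))
```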
arXiv Detail & Related papers (2022-04-21T23:01:21Z)
- Transforming Model Prediction for Tracking [109.08417327309937]
Transformers capture global relations with little inductive bias, allowing them to learn the prediction of more powerful target models.
We train the proposed tracker end-to-end and validate its performance by conducting comprehensive experiments on multiple tracking datasets.
Our tracker sets a new state of the art on three benchmarks, achieving an AUC of 68.5% on the challenging LaSOT dataset.
arXiv Detail & Related papers (2022-03-21T17:59:40Z)
- Multitask Adaptation by Retrospective Exploration with Learned World Models [77.34726150561087]
We propose a meta-learned addressing model called RAMa that provides training samples for the MBRL agent taken from task-agnostic storage.
The model is trained to maximize the agent's expected performance by selecting promising trajectories that solve prior tasks from the storage.
arXiv Detail & Related papers (2021-10-25T20:02:57Z)
- Mobility Inference on Long-Tailed Sparse Trajectory [2.4444287331956898]
We propose a single trajectory inference algorithm that exploits a generic long-tailed sparsity pattern in large-scale trajectory data.
The algorithm guarantees 100% precision in stay/travel inference, with a provable lower bound on recall.
Evaluations with three trajectory data sets of 40 million urban users validate the performance guarantees of the proposed inference algorithm.
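The paper's guarantees rest on a specific provable algorithm that a snippet cannot reproduce; the sketch below is only a generic stay/travel labeling heuristic to make the inference task concrete, with the distance radius and dwell-time threshold chosen arbitrarily.
```python
# Generic stay/travel labeling heuristic for sparse check-in trajectories.
# NOT the paper's provably-precise algorithm; an illustrative baseline:
# consecutive points within `radius_km` over at least `min_minutes` form a stay.
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    """Great-circle distance between (lat, lon, ...) tuples, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (p[0], p[1], q[0], q[1]))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def label_stays(points, radius_km=0.2, min_minutes=30):
    """points: time-sorted list of (lat, lon, epoch_minutes).
    Returns a 'stay'/'travel' label per point."""
    labels = ["travel"] * len(points)
    i = 0
    while i < len(points):
        # Extend the window while points remain within the radius of point i.
        j = i
        while j + 1 < len(points) and haversine_km(points[i], points[j + 1]) <= radius_km:
            j += 1
        # Label the window a stay if the dwell time is long enough.
        if points[j][2] - points[i][2] >= min_minutes:
            for k in range(i, j + 1):
                labels[k] = "stay"
        i = j + 1
    return labels
```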
arXiv Detail & Related papers (2020-01-21T16:32:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.