MoE-TransMov: A Transformer-based Model for Next POI Prediction in Familiar & Unfamiliar Movements
- URL: http://arxiv.org/abs/2512.17985v1
- Date: Fri, 19 Dec 2025 15:03:49 GMT
- Title: MoE-TransMov: A Transformer-based Model for Next POI Prediction in Familiar & Unfamiliar Movements
- Authors: Ruichen Tan, Jiawei Xue, Kota Tsubouchi, Takahiro Yabe, Satish V. Ukkusuri
- Abstract summary: MoE-TransMov is a Transformer-based model with a Mixture-of-Experts (MoE) architecture. We classify movements into familiar and unfamiliar categories and develop a specialized expert network to improve prediction accuracy. Our approach integrates self-attention mechanisms and adaptive gating networks to dynamically select the most relevant expert models for different mobility contexts.
- Score: 6.942569415880399
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Accurate prediction of the next point of interest (POI) within human mobility trajectories is essential for location-based services, as it enables more timely and personalized recommendations. In particular, studies have shown that users exhibit different POI choices in familiar and unfamiliar areas, highlighting the importance of incorporating user familiarity into predictive models. However, existing methods often fail to distinguish between the movements of users in familiar and unfamiliar regions. To address this, we propose MoE-TransMov, a Transformer-based model with a Mixture-of-Experts (MoE) architecture designed to capture distinct mobility patterns across different moving contexts within a single framework, without requiring separate models trained for each movement category. Using user check-in data, we classify movements into familiar and unfamiliar categories and develop a specialized expert network to improve prediction accuracy. Our approach integrates self-attention mechanisms and adaptive gating networks to dynamically select the most relevant expert models for different mobility contexts. Experiments on two real-world datasets, including the widely used but small open-source Foursquare NYC dataset and the large-scale Kyoto dataset collected with LY Corporation (Yahoo Japan Corporation), show that MoE-TransMov outperforms state-of-the-art baselines with notable improvements in Top-1, Top-5, and Top-10 accuracy and mean reciprocal rank (MRR). These results indicate that this approach efficiently improves mobility prediction under different moving contexts, thereby enhancing the personalization of recommendation systems and advancing various urban applications.
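The two mechanisms the abstract names, a gating network that softly routes each trajectory representation to a familiarity-specific expert, and MRR as an evaluation metric, can be sketched in a few lines of NumPy. This is a minimal illustrative toy, not the authors' implementation: the class and function names (`TwoExpertMoE`, `mean_reciprocal_rank`), the two-expert setup, and all dimensions are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class TwoExpertMoE:
    """Toy MoE head: a gating network softly mixes a 'familiar' and an
    'unfamiliar' expert to score candidate POIs for each trajectory."""
    def __init__(self, d_model, n_poi, n_experts=2):
        self.gate_w = rng.normal(0.0, 0.02, (d_model, n_experts))
        self.experts = [rng.normal(0.0, 0.02, (d_model, n_poi))
                        for _ in range(n_experts)]

    def forward(self, h):
        # h: (batch, d_model) contextual embedding from a Transformer encoder
        gates = softmax(h @ self.gate_w)                          # (batch, n_experts)
        logits = np.stack([h @ w for w in self.experts], axis=1)  # (batch, n_experts, n_poi)
        return (gates[..., None] * logits).sum(axis=1)            # (batch, n_poi)

def mean_reciprocal_rank(scores, targets):
    """MRR: mean of 1 / rank of the true next POI (ranks are 1-indexed)."""
    order = np.argsort(-scores, axis=1)                 # POIs by descending score
    ranks = np.where(order == targets[:, None])[1] + 1  # position of the truth
    return float(np.mean(1.0 / ranks))

moe = TwoExpertMoE(d_model=16, n_poi=100)
h = rng.normal(size=(4, 16))   # four trajectory embeddings
scores = moe.forward(h)        # POI scores per trajectory
print(scores.shape)            # prints (4, 100)
```

In practice the gate and experts would be trained end-to-end with the Transformer backbone, and the gate would also condition on familiarity signals derived from the check-in history rather than on the embedding alone.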
Related papers
- Mixture-of-Experts for Personalized and Semantic-Aware Next Location Prediction [20.726107072683575]
NextLocMoE is a novel framework built upon large language models (LLMs) and structured around a dual-level Mixture-of-Experts (MoE) design. Our architecture comprises two specialized modules: a Location Semantics MoE that operates at the embedding level to encode rich functional semantics of locations, and a Personalized MoE embedded within the Transformer backbone to dynamically adapt to individual user mobility patterns.
arXiv Detail & Related papers (2025-05-30T13:45:19Z) - MoveGPT: Scaling Mobility Foundation Models with Spatially-Aware Mixture of Experts [17.430772832222793]
MoveGPT is a large-scale foundation model specifically architected to overcome barriers to scaling. It establishes a new state-of-the-art across a wide range of downstream tasks, achieving performance gains of up to 35% on average. It also demonstrates strong generalization capabilities to unseen cities.
arXiv Detail & Related papers (2025-05-24T12:17:47Z) - FedAWA: Adaptive Optimization of Aggregation Weights in Federated Learning Using Client Vectors [50.131271229165165]
Federated Learning (FL) has emerged as a promising framework for distributed machine learning. Data heterogeneity resulting from differences across user behaviors, preferences, and device characteristics poses a significant challenge for federated learning. We propose Adaptive Weight Aggregation (FedAWA), a novel method that adaptively adjusts aggregation weights based on client vectors during the learning process.
arXiv Detail & Related papers (2025-03-20T04:49:40Z) - Unified Human Localization and Trajectory Prediction with Monocular Vision [64.19384064365431]
MonoTransmotion is a Transformer-based framework that uses only a monocular camera to jointly solve localization and prediction tasks. We show that by jointly training both tasks with our unified framework, our method is more robust in real-world scenarios made of noisy inputs.
arXiv Detail & Related papers (2025-03-05T14:18:39Z) - Multi-Transmotion: Pre-trained Model for Human Motion Prediction [68.87010221355223]
Multi-Transmotion is an innovative transformer-based model designed for cross-modality pre-training.
Our methodology demonstrates competitive performance across various datasets on several downstream tasks.
arXiv Detail & Related papers (2024-11-04T23:15:21Z) - UoMo: A Universal Model of Mobile Traffic Forecasting for Wireless Network Optimization [5.562190792475747]
We propose an innovative Foundation model for Mobile traffic forecasting (FoMo). FoMo aims to handle diverse forecasting tasks of short/long-term predictions and distribution generation across multiple cities to support network planning and optimization. FoMo combines diffusion models and transformers, where various universality masks are proposed to enable FoMo to learn intrinsic features of different tasks.
arXiv Detail & Related papers (2024-10-20T07:32:16Z) - Knowledge-aware Graph Transformer for Pedestrian Trajectory Prediction [15.454206825258169]
Predicting pedestrian motion trajectories is crucial for path planning and motion control of autonomous vehicles.
Recent deep learning-based prediction approaches mainly utilize information like trajectory history and interactions between pedestrians.
This paper proposes a graph transformer structure to improve prediction performance.
arXiv Detail & Related papers (2024-01-10T01:50:29Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - Pre-trained Recommender Systems: A Causal Debiasing Perspective [19.712997823535066]
We develop a generic recommender that captures universal interaction patterns by training on generic user-item interaction data extracted from different domains.
Our empirical studies show that the proposed model could significantly improve the recommendation performance in zero- and few-shot learning settings.
arXiv Detail & Related papers (2023-10-30T03:37:32Z) - Context-aware multi-head self-attentional neural network model for next location prediction [19.640761373993417]
We utilize a multi-head self-attentional (MHSA) neural network that learns location patterns from historical location visits.
We demonstrate that the proposed model outperforms other state-of-the-art prediction models.
We believe that the proposed model is vital for context-aware mobility prediction.
arXiv Detail & Related papers (2022-12-04T23:40:14Z) - Self-supervised Graph-based Point-of-interest Recommendation [66.58064122520747]
Next Point-of-Interest (POI) recommendation has become a prominent component in location-based e-commerce.
We propose a Self-supervised Graph-enhanced POI Recommender (S2GRec) for next POI recommendation.
In particular, we devise a novel Graph-enhanced Self-attentive layer to incorporate the collaborative signals from both global transition graph and local trajectory graphs.
arXiv Detail & Related papers (2022-10-22T17:29:34Z) - Motion Transformer with Global Intention Localization and Local Movement Refinement [103.75625476231401]
Motion TRansformer (MTR) models motion prediction as the joint optimization of global intention localization and local movement refinement.
MTR achieves state-of-the-art performance on both the marginal and joint motion prediction challenges.
arXiv Detail & Related papers (2022-09-27T16:23:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.