Learning to Factorize and Adapt: A Versatile Approach Toward Universal Spatio-Temporal Foundation Models
- URL: http://arxiv.org/abs/2601.12083v1
- Date: Sat, 17 Jan 2026 15:20:08 GMT
- Title: Learning to Factorize and Adapt: A Versatile Approach Toward Universal Spatio-Temporal Foundation Models
- Authors: Siru Zhong, Junjie Qiu, Yangyu Wu, Yiqiu Liu, Yuanpeng He, Zhongwen Rao, Bin Yang, Chenjuan Guo, Hao Xu, Yuxuan Liang
- Abstract summary: We present FactoST-v2, an enhanced factorized framework for universal temporal learning. We show that FactoST-v2 achieves state-of-the-art accuracy with linear efficiency. This factorized paradigm offers a practical, scalable path toward truly universal STFMs.
- Score: 42.152122602443164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spatio-Temporal (ST) Foundation Models (STFMs) promise cross-dataset generalization, yet joint ST pretraining is computationally expensive and grapples with the heterogeneity of domain-specific spatial patterns. Substantially extending our preliminary conference version, we present FactoST-v2, an enhanced factorized framework redesigned for full weight transfer and arbitrary-length generalization. FactoST-v2 decouples universal temporal learning from domain-specific spatial adaptation. The first stage pretrains a minimalist encoder-only backbone using randomized sequence masking to capture invariant temporal dynamics, enabling probabilistic quantile prediction across variable horizons. The second stage employs a streamlined adapter to rapidly inject spatial awareness via meta adaptive learning and prompting. Comprehensive evaluations across diverse domains demonstrate that FactoST-v2 achieves state-of-the-art accuracy with linear efficiency - significantly outperforming existing foundation models in zero-shot and few-shot scenarios while rivaling domain-specific expert baselines. This factorized paradigm offers a practical, scalable path toward truly universal STFMs. Code is available at https://github.com/CityMind-Lab/FactoST.
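The abstract names two concrete ingredients of the stage-one pretraining: randomized sequence masking and probabilistic quantile prediction. A minimal sketch of both, assuming a pinball (quantile) loss and illustrative shapes and names throughout (this is not the authors' implementation, which lives in the linked repository):

```python
import numpy as np

# Illustrative sketch, not FactoST-v2's actual code: randomized sequence
# masking for pretraining, plus the pinball (quantile) loss that enables
# probabilistic quantile prediction across variable horizons.

rng = np.random.default_rng(0)

def random_mask(series, ratio=0.3):
    """Zero out a random subset of timesteps; return masked series and mask."""
    mask = rng.random(series.shape) < ratio
    return np.where(mask, 0.0, series), mask

def pinball_loss(pred, target, quantiles=(0.1, 0.5, 0.9)):
    """pred: (..., Q) quantile forecasts; target: (...) observed values."""
    q = np.asarray(quantiles)
    err = target[..., None] - pred          # broadcast over quantile axis
    return np.maximum(q * err, (q - 1) * err).mean()

series = np.sin(np.linspace(0, 2 * np.pi, 64))   # toy univariate signal
masked, mask = random_mask(series)               # pretraining input

# A well-calibrated forecast scores lower than a systematically biased one.
good = np.stack([series - 0.1, series, series + 0.1], axis=-1)
bad = np.stack([series + 0.9, series + 1.0, series + 1.1], axis=-1)
print(pinball_loss(good, series) < pinball_loss(bad, series))  # expect True
```

Because the pinball loss penalizes under- and over-prediction asymmetrically per quantile, a single head can emit several quantiles at once, which is one standard way to obtain the "probabilistic quantile prediction" the abstract describes.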
Related papers
- Cross-Domain Transfer with Self-Supervised Spectral-Spatial Modeling for Hyperspectral Image Classification [5.784164305429653]
This paper proposes a self-supervised cross-domain transfer framework. It learns transferable spectral-spatial joint representations without source labels. Experimental results demonstrate stable classification performance and strong cross-domain adaptability.
arXiv Detail & Related papers (2026-01-26T02:52:35Z) - Enhancing Semantic Segmentation with Continual Self-Supervised Pre-training [11.897717409259492]
Self-supervised learning (SSL) has emerged as a central paradigm for training foundation models. We propose GLARE, a novel continual self-supervised pre-training task designed to enhance downstream segmentation performance.
arXiv Detail & Related papers (2025-09-22T14:11:02Z) - Feature-Space Planes Searcher: A Universal Domain Adaptation Framework for Interpretability and Computational Efficiency [7.889121135601528]
Current unsupervised domain adaptation methods rely on fine-tuning feature extractors. We propose Feature-space Planes Searcher (FPS) as a novel domain adaptation framework. We show that FPS achieves competitive or superior performance to state-of-the-art methods.
arXiv Detail & Related papers (2025-08-26T05:39:21Z) - Spatial-Temporal-Spectral Unified Modeling for Remote Sensing Dense Prediction [20.1863553357121]
Current deep learning architectures for remote sensing are fundamentally rigid. We introduce the Spatial-Temporal-Spectral Unified Network (STSUN) for unified modeling. STSUN can adapt to input and output data with arbitrary spatial sizes, temporal lengths, and spectral bands. It unifies various dense prediction tasks and diverse semantic class predictions.
arXiv Detail & Related papers (2025-05-18T07:39:17Z) - UniSTD: Towards Unified Spatio-Temporal Learning across Diverse Disciplines [64.84631333071728]
We introduce UniSTD, a unified Transformer-based framework for spatio-temporal modeling. Our work demonstrates that a task-specific vision-text model can build a generalizable model for spatio-temporal learning. We also introduce a temporal module to incorporate temporal dynamics explicitly.
arXiv Detail & Related papers (2025-03-26T17:33:23Z) - UTSD: Unified Time Series Diffusion Model [13.555837288440946]
A Unified Time Series Diffusion model is established for the first time to model the multi-domain probability distribution. We conduct extensive experiments on mainstream benchmarks, and the pre-trained UTSD outperforms existing foundation models on all data domains.
arXiv Detail & Related papers (2024-12-04T06:42:55Z) - UniMix: Towards Domain Adaptive and Generalizable LiDAR Semantic Segmentation in Adverse Weather [55.95708988160047]
LiDAR semantic segmentation (LSS) is a critical task in autonomous driving.
Prior LSS methods are investigated and evaluated on datasets within the same domain in clear weather.
We propose UniMix, a universal method that enhances the adaptability and generalizability of LSS models.
arXiv Detail & Related papers (2024-04-08T02:02:15Z) - Test-Time Domain Generalization for Face Anti-Spoofing [60.94384914275116]
Face Anti-Spoofing (FAS) is pivotal in safeguarding facial recognition systems against presentation attacks.
We introduce a novel Test-Time Domain Generalization framework for FAS, which leverages the testing data to boost the model's generalizability.
Our method, consisting of Test-Time Style Projection (TTSP) and Diverse Style Shifts Simulation (DSSS), effectively projects the unseen data to the seen domain space.
arXiv Detail & Related papers (2024-03-28T11:50:23Z) - Dual Adaptive Representation Alignment for Cross-domain Few-shot Learning [58.837146720228226]
Few-shot learning aims to recognize novel queries with limited support samples by learning from base knowledge.
Recent progress in this setting assumes that the base knowledge and novel query samples are distributed in the same domains.
We propose to address the cross-domain few-shot learning problem where only extremely few samples are available in target domains.
arXiv Detail & Related papers (2023-06-18T09:52:16Z) - Fourier Test-time Adaptation with Multi-level Consistency for Robust Classification [10.291631977766672]
We propose a novel approach called Fourier Test-time Adaptation (FTTA) to integrate input and model tuning.
FTTA builds a reliable multi-level consistency measurement of paired inputs to achieve self-supervision of predictions.
It was extensively validated on three large classification datasets with different modalities and organs.
arXiv Detail & Related papers (2023-06-05T02:29:38Z) - HSVA: Hierarchical Semantic-Visual Adaptation for Zero-Shot Learning [74.76431541169342]
Zero-shot learning (ZSL) tackles the unseen class recognition problem, transferring semantic knowledge from seen classes to unseen ones.
We propose a novel hierarchical semantic-visual adaptation (HSVA) framework to align semantic and visual domains.
Experiments on four benchmark datasets demonstrate HSVA achieves superior performance on both conventional and generalized ZSL.
arXiv Detail & Related papers (2021-09-30T14:27:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.