A Unified Shape-Aware Foundation Model for Time Series Classification
- URL: http://arxiv.org/abs/2601.06429v1
- Date: Sat, 10 Jan 2026 05:04:39 GMT
- Title: A Unified Shape-Aware Foundation Model for Time Series Classification
- Authors: Zhen Liu, Yucheng Wang, Boyuan Li, Junhao Zheng, Emadeldeen Eldele, Min Wu, Qianli Ma
- Abstract summary: We propose UniShape, a unified shape-aware foundation model designed for time series classification. UniShape incorporates a shape-aware adapter that adaptively aggregates multiscale discriminative subsequences (shapes) into class tokens. A prototype-based pretraining module is introduced to jointly learn instance- and shape-level representations.
- Score: 26.26217792622276
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Foundation models pre-trained on large-scale source datasets are reshaping the traditional training paradigm for time series classification. However, existing time series foundation models primarily focus on forecasting tasks and often overlook classification-specific challenges, such as modeling interpretable shapelets that capture class-discriminative temporal features. To bridge this gap, we propose UniShape, a unified shape-aware foundation model designed for time series classification. UniShape incorporates a shape-aware adapter that adaptively aggregates multiscale discriminative subsequences (shapes) into class tokens, effectively selecting the most relevant subsequence scales to enhance model interpretability. Meanwhile, a prototype-based pretraining module is introduced to jointly learn instance- and shape-level representations, enabling the capture of transferable shape patterns. Pre-trained on a large-scale multi-domain time series dataset comprising 1.89 million samples, UniShape exhibits superior generalization across diverse target domains. Experiments on 128 UCR datasets and 30 additional time series datasets demonstrate that UniShape achieves state-of-the-art classification performance, with interpretability and ablation analyses further validating its effectiveness.
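To make the adapter idea concrete, below is a minimal PyTorch sketch, under our own assumptions, of how multiscale subsequences might be embedded and attention-pooled into a single class token, as the abstract describes. The class name `ShapeAwareAdapter`, the scale set, the sliding-window stride, and all dimensions are illustrative and not taken from the paper; UniShape's actual adapter and its prototype-based pretraining objective are certainly more involved.

```python
# Minimal sketch (not the authors' code) of a shape-aware adapter that
# aggregates multiscale subsequences into a class token via attention.
# All names, scales, and dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ShapeAwareAdapter(nn.Module):
    """Embeds sliding-window subsequences at several scales and pools them
    into a single class token with learned attention weights."""

    def __init__(self, d_model: int = 128, scales=(8, 16, 32)):
        super().__init__()
        self.scales = scales
        # One linear "patch" embedding per subsequence scale.
        self.embed = nn.ModuleList([nn.Linear(s, d_model) for s in scales])
        self.query = nn.Parameter(torch.randn(d_model))  # class-token query

    def forward(self, x: torch.Tensor):
        # x: (batch, length) univariate series, for simplicity.
        tokens = []
        for s, proj in zip(self.scales, self.embed):
            windows = x.unfold(dimension=1, size=s, step=s // 2)  # (B, n_win, s)
            tokens.append(proj(windows))                          # (B, n_win, d)
        tokens = torch.cat(tokens, dim=1)                         # all scales
        # Attention of the class-token query over every subsequence token;
        # the weights indicate which scales/positions the model relies on.
        attn = F.softmax(tokens @ self.query / tokens.shape[-1] ** 0.5, dim=1)
        cls_token = (attn.unsqueeze(-1) * tokens).sum(dim=1)      # (B, d)
        return cls_token, attn


# Usage: a batch of 4 series of length 96.
adapter = ShapeAwareAdapter()
cls_tok, weights = adapter(torch.randn(4, 96))
print(cls_tok.shape, weights.shape)  # (4, 128) and (4, num_subsequence_tokens)
```

The per-token attention weights returned here are the kind of signal one could inspect to see which subsequence scales and positions drive a prediction, which is the interpretability angle the abstract emphasizes.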
Related papers
- ShapeX: Shapelet-Driven Post Hoc Explanations for Time Series Classification Models [111.27770361128056]
We present ShapeX, an innovative framework that segments time series into meaningful shapelet-driven segments. At the core of ShapeX lies the Shapelet Describe-and-Detect framework, which effectively learns a diverse set of shapelets essential for classification.
arXiv Detail & Related papers (2025-10-23T00:01:40Z)
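Since both UniShape and the ShapeX entry above build on shapelets, the following sketch shows the standard shapelet-distance primitive such methods rest on, not either paper's actual algorithm: the distance between a candidate shapelet and a series is the minimum over all sliding windows, and the minimizing window localizes the matched segment. The z-normalization choice and all names here are illustrative assumptions.

```python
# Generic shapelet-distance primitive (not ShapeX's or UniShape's code).
import numpy as np


def shapelet_distance(shapelet: np.ndarray, series: np.ndarray):
    """Return (min distance, best offset) of `shapelet` against `series`."""
    m = len(shapelet)
    # All length-m sliding windows, z-normalized so the match is shape-based
    # rather than offset/scale-based.
    windows = np.lib.stride_tricks.sliding_window_view(series, m).astype(float)
    windows = (windows - windows.mean(axis=1, keepdims=True)) / (
        windows.std(axis=1, keepdims=True) + 1e-8
    )
    shp = (shapelet - shapelet.mean()) / (shapelet.std() + 1e-8)
    dists = np.sqrt(((windows - shp) ** 2).mean(axis=1))
    best = int(dists.argmin())
    return float(dists[best]), best


# Usage: a bump-shaped shapelet matched against a noisy series containing it.
rng = np.random.default_rng(0)
series = rng.normal(0, 0.1, 200)
series[80:100] += np.hanning(20)            # inject the bump at offset 80
print(shapelet_distance(np.hanning(20), series))  # small distance, offset near 80
```

In shapelet-based classifiers, such minimum distances (one per shapelet) typically form the feature vector fed to a downstream classifier.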
- Time Series Generation Under Data Scarcity: A Unified Generative Modeling Approach [7.631288333466648]
We conduct the first large-scale study evaluating leading generative models in data-scarce settings. We propose a unified diffusion-based generative framework that can synthesize high-fidelity time series using just a few examples.
arXiv Detail & Related papers (2025-05-26T18:39:04Z)
- VSFormer: Value and Shape-Aware Transformer with Prior-Enhanced Self-Attention for Multivariate Time Series Classification [47.92529531621406]
We propose a novel method, VSFormer, that incorporates both discriminative patterns (shape) and numerical information (value). In addition, we extract class-specific prior information derived from supervised information to enrich the positional encoding. Extensive experiments on all 30 UEA archived datasets demonstrate the superior performance of our method compared to SOTA models.
arXiv Detail & Related papers (2024-12-21T07:31:22Z)
- Measuring Pre-training Data Quality without Labels for Time Series Foundation Models [10.64362760848387]
We introduce contrastive accuracy, a new measure to evaluate the quality of the representation space learned by the foundation model. Our experiments reveal the positive correlation between the proposed measure and the accuracy of the model on a collection of downstream tasks.
arXiv Detail & Related papers (2024-12-09T10:38:30Z)
- UniCL: A Universal Contrastive Learning Framework for Large Time Series Models [18.005358506435847]
Time-series analysis plays a pivotal role across a range of critical applications, from finance to healthcare.
Traditional supervised learning methods first require annotating extensive labels for the time-series data in each task.
This paper introduces UniCL, a universal and scalable contrastive learning framework designed for pretraining time-series foundation models.
arXiv Detail & Related papers (2024-05-17T07:47:11Z)
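For context on the contrastive-pretraining entries above (UniCL, and the contrastive-accuracy measure earlier), here is a generic InfoNCE-style pretraining step on two augmented views of a time-series batch. This is a textbook sketch under our own assumptions about the encoder, augmentations, and temperature, not the method of any paper listed here.

```python
# Generic contrastive pretraining step for time series (InfoNCE on two views).
# Encoder, augmentations, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def augment(x: torch.Tensor) -> torch.Tensor:
    """Cheap stochastic view: random scaling plus jitter."""
    scale = 1.0 + 0.1 * torch.randn(x.shape[0], 1)
    return x * scale + 0.05 * torch.randn_like(x)


def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Cross-entropy over cosine similarities; matching rows are positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau
    targets = torch.arange(z1.shape[0])
    return F.cross_entropy(logits, targets)


# Toy encoder and one pretraining step on a batch of length-96 series.
encoder = nn.Sequential(nn.Linear(96, 256), nn.ReLU(), nn.Linear(256, 128))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

batch = torch.randn(32, 96)
opt.zero_grad()
loss = info_nce(encoder(augment(batch)), encoder(augment(batch)))
loss.backward()
opt.step()
print(float(loss))
```

Prototype-based schemes such as UniShape's typically contrast instances against learned prototypes rather than only against other instances, though the exact objective is not specified in the abstract above.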
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present Moirai, a Masked Encoder-based Universal Time Series Forecasting Transformer.
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- Large Pre-trained time series models for cross-domain Time series analysis tasks [20.228846068418765]
Large Pre-trained Time-series Models (LPTM) introduce a novel adaptive segmentation method that automatically identifies the optimal dataset-specific segmentation strategy during pre-training. LPTM achieves superior forecasting and time-series classification results, using up to 40% less data and 50% less training time compared to state-of-the-art baselines.
arXiv Detail & Related papers (2023-11-19T20:16:16Z)
- Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting [54.04430089029033]
We present Lag-Llama, a general-purpose foundation model for time series forecasting based on a decoder-only transformer architecture.
Lag-Llama is pretrained on a large corpus of diverse time series data from several domains, and demonstrates strong zero-shot generalization capabilities.
When fine-tuned on relatively small fractions of such previously unseen datasets, Lag-Llama achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-10-12T12:29:32Z)
- Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain [54.67888148566323]
We introduce three large-scale time series forecasting datasets from the cloud operations domain.
We show that our pre-trained method is a strong zero-shot baseline and benefits from further scaling, in both model and dataset size.
Accompanying these datasets and results is a suite of comprehensive benchmark results comparing classical and deep learning baselines to our pre-trained method.
arXiv Detail & Related papers (2023-10-08T08:09:51Z)