Multi-scale hypergraph meets LLMs: Aligning large language models for time series analysis
- URL: http://arxiv.org/abs/2602.04369v1
- Date: Wed, 04 Feb 2026 09:47:00 GMT
- Title: Multi-scale hypergraph meets LLMs: Aligning large language models for time series analysis
- Authors: Zongjiang Shang, Dongliang Cui, Binqing Wu, Ling Chen
- Abstract summary: We propose MSH-LLM, a Multi-Scale Hypergraph method that aligns large language models for time series analysis. Specifically, a hyperedging mechanism is designed to enrich the multi-scale semantic information of the time series semantic space. A mixture of prompts (MoP) mechanism is introduced to provide contextual information and enhance the ability of LLMs to understand the multi-scale temporal patterns of time series.
- Score: 14.045113722315579
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, there has been great success in leveraging pre-trained large language models (LLMs) for time series analysis. The core idea lies in effectively aligning the modality between natural language and time series. However, the multi-scale structures of natural language and time series have not been fully considered, resulting in insufficient utilization of LLMs' capabilities. To this end, we propose MSH-LLM, a Multi-Scale Hypergraph method that aligns Large Language Models for time series analysis. Specifically, a hyperedging mechanism is designed to enrich the multi-scale semantic information of the time series semantic space. Then, a cross-modality alignment (CMA) module is introduced to align the modality between natural language and time series at different scales. In addition, a mixture of prompts (MoP) mechanism is introduced to provide contextual information and enhance the ability of LLMs to understand the multi-scale temporal patterns of time series. Experimental results on 27 real-world datasets across 5 different applications demonstrate that MSH-LLM achieves state-of-the-art results.
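The abstract describes three pieces: multi-scale views of the series, cross-modality alignment against a text embedding space, and a mixture of prompts. The paper's actual implementation is not given here; the following is a minimal NumPy sketch of those three ideas under stated assumptions, where `W_proj`, `text_protos`, `prompt_pool`, and `W_gate` are hypothetical stand-ins for learned parameters and frozen LLM text embeddings.

```python
import numpy as np

def multi_scale_patches(x, scales=(1, 2, 4), patch_len=4):
    """Build patch matrices of a 1-D series at several temporal scales."""
    out = []
    for s in scales:
        xs = x[::s]                                  # downsample by stride s
        n = len(xs) // patch_len
        if n == 0:
            continue
        out.append(xs[: n * patch_len].reshape(n, patch_len))
    return out  # list of (num_patches, patch_len) arrays, one per scale

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def align_to_text_space(patches, W_proj, text_protos):
    """Cross-modality alignment: attend over frozen text prototype embeddings
    so each time series patch becomes a convex combination of text vectors."""
    q = patches @ W_proj                              # (n, d) queries
    attn = softmax(q @ text_protos.T / np.sqrt(text_protos.shape[1]))
    return attn @ text_protos                         # (n, d) aligned embeddings

def mixture_of_prompts(context, prompt_pool, W_gate):
    """Mixture of prompts: a softmax gate over a prompt pool, conditioned on
    the aligned context, yields one blended prompt vector."""
    gate = softmax(context @ W_gate)                  # (k,) mixture weights
    return gate @ prompt_pool                         # (d,) blended prompt

# Toy usage: align each scale, then gate a prompt from the pooled context.
rng = np.random.default_rng(0)
x = rng.standard_normal(32)
W_proj = rng.standard_normal((4, 6)) * 0.1
text_protos = rng.standard_normal((5, 6))
prompt_pool = rng.standard_normal((3, 6))
W_gate = rng.standard_normal((6, 3)) * 0.1
for patches in multi_scale_patches(x):
    aligned = align_to_text_space(patches, W_proj, text_protos)
    prompt = mixture_of_prompts(aligned.mean(axis=0), prompt_pool, W_gate)
```

Hyperedges in the paper would connect patches across scales; this sketch keeps the scales independent for brevity.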
Related papers
- SciTS: Scientific Time Series Understanding and Generation with LLMs [53.35994674187729]
We introduce SciTS, a benchmark spanning 12 scientific domains and 43 tasks. We benchmark 17 models, including text-only LLMs, multimodal LLMs, and unified time series models. We then introduce Time Omni, a framework that equips LLMs with the ability to understand and generate time series.
arXiv Detail & Related papers (2025-09-26T09:25:16Z) - Adapting LLMs to Time Series Forecasting via Temporal Heterogeneity Modeling and Semantic Alignment [32.41581846555808]
Large Language Models (LLMs) have recently demonstrated impressive capabilities in natural language processing. We propose TALON, a unified framework that enhances LLM-based forecasting by modeling temporal heterogeneity and enforcing semantic alignment. Experiments on seven real-world benchmarks demonstrate that TALON achieves superior performance across all datasets.
arXiv Detail & Related papers (2025-08-10T06:06:19Z) - LLM-PS: Empowering Large Language Models for Time Series Forecasting with Temporal Patterns and Semantics [56.99021951927683]
Time Series Forecasting (TSF) is critical in many real-world domains like financial planning and health monitoring. Existing Large Language Models (LLMs) usually perform suboptimally because they neglect the inherent characteristics of time series data. We propose LLM-PS to empower the LLM for TSF by learning the fundamental patterns and meaningful semantics from time series data.
arXiv Detail & Related papers (2025-03-12T11:45:11Z) - TimesBERT: A BERT-Style Foundation Model for Time Series Understanding [72.64824086839631]
GPT-style models have been positioned as foundation models for time series forecasting. The BERT-style architecture has not been fully unlocked for time series understanding. We design TimesBERT to learn generic representations of time series. Our model is pre-trained on 260 billion time points across diverse domains.
arXiv Detail & Related papers (2025-02-28T17:14:44Z) - Adapting Large Language Models for Time Series Modeling via a Novel Parameter-efficient Adaptation Method [9.412920379798928]
Time series modeling holds significant importance in many real-world applications. We propose the Time-LlaMA framework to align the time series and natural language modalities. We show that our proposed method achieves state-of-the-art (SOTA) performance.
arXiv Detail & Related papers (2025-02-19T13:52:26Z) - Language in the Flow of Time: Time-Series-Paired Texts Weaved into a Unified Temporal Narrative [65.84249211767921]
Texts as Time Series (TaTS) can be plugged into any existing numerical-only time series models. We show that TaTS can enhance predictive performance without modifying model architectures.
arXiv Detail & Related papers (2025-02-13T03:43:27Z) - Empowering Time Series Analysis with Large Language Models: A Survey [24.202539098675953]
We provide a systematic overview of methods that leverage large language models for time series analysis.
Specifically, we first state the challenges and motivations of applying language models in the context of time series.
Next, we categorize existing methods into different groups (i.e., direct query, tokenization, prompt design, fine-tuning, and model integration) and highlight the key ideas within each group.
arXiv Detail & Related papers (2024-02-05T16:46:35Z) - AutoTimes: Autoregressive Time Series Forecasters via Large Language Models [67.83502953961505]
AutoTimes projects time series into the embedding space of language tokens and autoregressively generates future predictions with arbitrary lengths.
We formulate time series as prompts, extending the context for prediction beyond the lookback window.
AutoTimes achieves state-of-the-art with 0.1% trainable parameters and over 5× training/inference speedup.
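The AutoTimes summary above describes projecting time series segments into the token-embedding space and generating forecasts autoregressively. A minimal sketch of that loop, not the paper's implementation, follows; `embed`, `step_model`, and `unembed` are hypothetical stand-ins for the learned projection, the frozen LLM step, and the output projection.

```python
import numpy as np

def forecast_autoregressive(history, embed, step_model, unembed, horizon, seg_len=4):
    """Roll the forecast forward segment by segment: embed the most recent
    segment, take one model step in embedding space, decode the result, and
    append it to the series until the horizon is covered."""
    series = list(history)
    target = len(history) + horizon
    while len(series) < target:
        last = np.asarray(series[-seg_len:], dtype=float)
        z = embed(last)                 # lift the segment into token-embedding space
        z_next = step_model(z)          # one autoregressive step (stubbed below)
        series.extend(unembed(z_next))  # map the predicted embedding back to values
    return np.asarray(series[len(history):target])

# Toy usage with linear stand-ins; a real system would reuse the LLM's
# token embeddings and transformer step here.
rng = np.random.default_rng(0)
E = rng.standard_normal((4, 16)) * 0.1   # segment -> embedding
D = rng.standard_normal((16, 4)) * 0.1   # embedding -> segment
pred = forecast_autoregressive(
    history=np.sin(np.linspace(0, 6, 64)),
    embed=lambda s: s @ E,
    step_model=lambda z: z,              # identity placeholder for the LLM step
    unembed=lambda z: z @ D,
    horizon=10,
)
```

Because generation proceeds one segment at a time, the horizon can be arbitrary, which matches the "arbitrary lengths" claim in the summary.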
arXiv Detail & Related papers (2024-02-04T06:59:21Z) - Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.