Dual-Forecaster: A Multimodal Time Series Model Integrating Descriptive and Predictive Texts
- URL: http://arxiv.org/abs/2505.01135v1
- Date: Fri, 02 May 2025 09:24:31 GMT
- Title: Dual-Forecaster: A Multimodal Time Series Model Integrating Descriptive and Predictive Texts
- Authors: Wenfa Wu, Guanyu Zhang, Zheng Tan, Yi Wang, Hongsheng Qi
- Abstract summary: We propose Dual-Forecaster, a pioneering multimodal time series model that combines both descriptive historical textual information and predictive textual insights. Our comprehensive evaluations on fifteen multimodal time series datasets demonstrate that Dual-Forecaster is a distinctly effective multimodal time series model.
- Score: 5.873261646876953
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing single-modal time series models rely solely on numerical series and therefore suffer from the limitations imposed by insufficient information. Recent studies have revealed that multimodal models can address this core issue by integrating textual information. However, these models focus on either historical or future textual information, overlooking the distinct contribution each makes to time series forecasting. Moreover, constrained by their moderate capacity for multimodal comprehension, these models fail to grasp the intricate relationships between textual and time series data. To tackle these challenges, we propose Dual-Forecaster, a pioneering multimodal time series model that combines both descriptive historical textual information and predictive textual insights, leveraging advanced multimodal comprehension capability empowered by three well-designed cross-modality alignment techniques. Our comprehensive evaluations on fifteen multimodal time series datasets demonstrate that Dual-Forecaster is a distinctly effective multimodal time series model that outperforms or is comparable to other state-of-the-art models, highlighting the superiority of integrating textual information for time series forecasting. This work opens new avenues for integrating textual information with numerical time series data in multimodal time series analysis.
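No code accompanies this listing, so the following is a minimal, assumption-laden sketch of the general recipe the abstract describes: a patch-based series encoder whose tokens attend over frozen language-model embeddings of historical (descriptive) and future-oriented (predictive) text. The module names, dimensions, and the use of cross-attention as the alignment mechanism are illustrative guesses, not the paper's three alignment techniques.

```python
# Minimal sketch of a dual-text multimodal forecaster (illustrative only).
# Module names, dimensions, and the use of cross-attention for "alignment"
# are assumptions; the paper's actual alignment techniques are not shown here.
import torch
import torch.nn as nn

class DualTextForecaster(nn.Module):
    def __init__(self, patch_len=16, d_model=128, n_heads=4, horizon=24):
        super().__init__()
        self.patch_len = patch_len
        self.patch_embed = nn.Linear(patch_len, d_model)  # numerical branch
        self.hist_align = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.pred_align = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, horizon)

    def forward(self, series, hist_text_emb, pred_text_emb):
        # series: (B, L) with L divisible by patch_len
        # hist_text_emb / pred_text_emb: (B, T, d_model), e.g. from a frozen LM
        B, L = series.shape
        patches = series.view(B, L // self.patch_len, self.patch_len)
        z = self.patch_embed(patches)                     # (B, N, d)
        # align series tokens with descriptive (historical) text ...
        z = z + self.hist_align(z, hist_text_emb, hist_text_emb)[0]
        # ... and with predictive (future-oriented) text
        z = z + self.pred_align(z, pred_text_emb, pred_text_emb)[0]
        return self.head(z.mean(dim=1))                   # (B, horizon)

model = DualTextForecaster()
y = model(torch.randn(2, 96), torch.randn(2, 10, 128), torch.randn(2, 10, 128))
print(y.shape)  # torch.Size([2, 24])
```

Random tensors stand in for the text embeddings here; in practice they would come from a pretrained language model encoding the paired texts.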
Related papers
- Does Multimodality Lead to Better Time Series Forecasting? [84.74978289870155]
It remains unclear whether and under what conditions such multimodal integration consistently yields gains.
We evaluate two popular multimodal forecasting paradigms: aligning-based methods, which align time series and text representations; and prompting-based methods, which directly prompt large language models for forecasting.
Our findings highlight that on the modeling side, incorporating text information is most helpful given (1) high-capacity text models, (2) comparatively weaker time series models, and (3) appropriate aligning strategies.
arXiv Detail & Related papers (2025-06-20T23:55:56Z)
- ChronoSteer: Bridging Large Language Model and Time Series Foundation Model via Synthetic Data [22.81326423408988]
We introduce ChronoSteer, a multimodal TSFM that can be steered through textual revision instructions.
To mitigate the shortage of cross-modal instruction-series paired data, we devise a two-stage training strategy based on synthetic data.
ChronoSteer achieves a 25.7% improvement in prediction accuracy compared to the unimodal backbone and a 22.5% gain over the previous state-of-the-art multimodal method.
arXiv Detail & Related papers (2025-05-15T08:37:23Z)
- TimesBERT: A BERT-Style Foundation Model for Time Series Understanding [72.64824086839631]
GPT-style models have been positioned as foundation models for time series forecasting, but the potential of the BERT-style architecture has not been fully unlocked for time series understanding.
We design TimesBERT to learn generic representations of time series.
Our model is pre-trained on 260 billion time points across diverse domains.
arXiv Detail & Related papers (2025-02-28T17:14:44Z)
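As a hedged illustration of the BERT-style pretraining the TimesBERT entry alludes to, the sketch below performs generic masked patch modeling over time series; it is not TimesBERT's actual architecture or loss.

```python
# Generic BERT-style masked patch modeling for time series (illustration only;
# not TimesBERT's actual design, which is not reproduced here).
import torch
import torch.nn as nn

class MaskedSeriesEncoder(nn.Module):
    def __init__(self, patch_len=16, d_model=64, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(patch_len, d_model)
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.decode = nn.Linear(d_model, patch_len)

    def forward(self, patches, mask):
        # patches: (B, N, patch_len); mask: (B, N) bool, True = hidden patch
        z = self.embed(patches)
        z[mask] = self.mask_token                # replace masked patch embeddings
        return self.decode(self.encoder(z))

enc = MaskedSeriesEncoder()
x = torch.randn(4, 8, 16)
mask = torch.rand(4, 8) < 0.3
loss = ((enc(x, mask) - x)[mask] ** 2).mean()    # reconstruct only masked patches
print(loss.item())
```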
- Language in the Flow of Time: Time-Series-Paired Texts Weaved into a Unified Temporal Narrative [65.84249211767921]
Texts as Time Series (TaTS) treats time-series-paired texts as auxiliary variables of the time series.
TaTS can be plugged into any existing numerical-only time series model, enabling it to handle time series data with paired texts effectively.
arXiv Detail & Related papers (2025-02-13T03:43:27Z)
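The TaTS idea above, texts as auxiliary variables that any numerical-only model can ingest, reduces to appending text-derived channels to the input series. A toy sketch, with a hash-based stub standing in for a real sentence encoder:

```python
# Sketch of the TaTS idea as described in the snippet: treat text-derived
# features as auxiliary channels appended to the numerical series, so any
# numerical-only multivariate model can consume them. The feature extractor
# (a hash-based stub in place of a real text encoder) is purely illustrative.
import numpy as np

def text_to_channel(texts, dim=1):
    # Stand-in for a sentence encoder: map each timestamped text to dim values.
    return np.array([[hash(t + str(d)) % 1000 / 1000.0 for d in range(dim)]
                     for t in texts])

series = np.random.randn(96, 1)                   # (time, variables)
texts = [f"note at step {t}" for t in range(96)]  # one paired text per step
aug = np.concatenate([series, text_to_channel(texts)], axis=1)  # (96, 2)
# `aug` can now be fed to any multivariate numerical-only forecaster.
print(aug.shape)
```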
- TempoGPT: Enhancing Time Series Reasoning via Quantizing Embedding [13.996105878417204]
We propose a multi-modal time series data construction approach and a multi-modal time series language model (TLM), TempoGPT.
We construct multi-modal data for complex reasoning tasks by analyzing the variable-system relationships within a white-box system.
Extensive experiments demonstrate that TempoGPT accurately perceives temporal information, logically infers conclusions, and achieves state-of-the-art performance on the constructed complex time series reasoning tasks.
arXiv Detail & Related papers (2025-01-13T13:47:05Z)
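A minimal sketch of the "quantizing embedding" mechanism the TempoGPT entry names: nearest-neighbor vector quantization that turns continuous series embeddings into discrete token ids an LLM can consume. Codebook size, dimensions, and training details are assumptions, not TempoGPT's.

```python
# Illustrative vector quantization of time series embeddings into discrete
# tokens; how the codebook is learned and used is not specified here.
import torch

def quantize(embeddings, codebook):
    # embeddings: (N, d); codebook: (K, d). Return nearest code index per row.
    dists = torch.cdist(embeddings, codebook)    # (N, K) pairwise distances
    return dists.argmin(dim=1)                   # token id per embedding

codebook = torch.randn(256, 32)                  # K=256 discrete "words"
patch_embs = torch.randn(12, 32)                 # 12 encoded series patches
tokens = quantize(patch_embs, codebook)
print(tokens)  # discrete ids that can be interleaved with text tokens for an LLM
```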
- Unveiling the Potential of Text in High-Dimensional Time Series Forecasting [12.707274099874384]
We propose a novel framework that integrates time series models with Large Language Models.
Inspired by multimodal models, our method combines time series and textual data in a dual-tower structure.
Experiments demonstrate that incorporating text enhances high-dimensional time series forecasting performance.
arXiv Detail & Related papers (2025-01-13T04:10:45Z)
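The dual-tower structure mentioned above typically means separate encoders per modality with late fusion. A minimal sketch under that assumption (layer sizes and concatenation-based fusion are illustrative, not the paper's exact design):

```python
# Minimal dual-tower sketch: one tower encodes the series, one encodes text,
# and a fusion head forecasts from the concatenated representations.
import torch
import torch.nn as nn

class DualTower(nn.Module):
    def __init__(self, lookback=96, text_dim=384, d=64, horizon=24):
        super().__init__()
        self.series_tower = nn.Sequential(nn.Linear(lookback, d), nn.ReLU(),
                                          nn.Linear(d, d))
        self.text_tower = nn.Sequential(nn.Linear(text_dim, d), nn.ReLU(),
                                        nn.Linear(d, d))
        self.fuse = nn.Linear(2 * d, horizon)

    def forward(self, series, text_emb):
        # series: (B, lookback); text_emb: (B, text_dim), e.g. pooled LM output
        return self.fuse(torch.cat([self.series_tower(series),
                                    self.text_tower(text_emb)], dim=-1))

model = DualTower()
print(model(torch.randn(2, 96), torch.randn(2, 384)).shape)  # (2, 24)
```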
- ChatTime: A Unified Multimodal Time Series Foundation Model Bridging Numerical and Textual Data [26.300515935897415]
ChatTime is a unified framework for time series and text processing.
As an out-of-the-box multimodal time series foundation model, ChatTime provides zero-shot forecasting capability.
We design a series of experiments to verify the superior performance of ChatTime across multiple tasks and scenarios.
arXiv Detail & Related papers (2024-12-16T02:04:06Z)
- Text2Freq: Learning Series Patterns from Text via Frequency Domain [8.922661807801227]
Text2Freq is a cross-modality model that integrates text and time series data via the frequency domain.
Our experiments on paired datasets of real-world stock prices and synthetic texts show that Text2Freq achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-11-01T16:11:02Z)
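A rough sketch of frequency-domain fusion in the spirit of Text2Freq: project a text embedding onto the series' low-frequency spectrum, then return to the time domain. The projection and the additive fusion rule are assumptions for illustration, not the paper's exact method.

```python
# Illustrative frequency-domain fusion of text and series (assumed mechanism).
import torch
import torch.nn as nn

lookback, text_dim, k = 96, 384, 8                # k low-frequency bins to adjust
to_freq = nn.Linear(text_dim, 2 * k)              # text -> (real, imag) parts

series = torch.randn(2, lookback)
text_emb = torch.randn(2, text_dim)

spec = torch.fft.rfft(series)                     # (2, lookback // 2 + 1) complex
adj = to_freq(text_emb).view(2, k, 2)             # (2, k, 2) real/imag pairs
spec[:, :k] = spec[:, :k] + torch.view_as_complex(adj)
fused = torch.fft.irfft(spec, n=lookback)         # back to the time domain
print(fused.shape)                                # torch.Size([2, 96])
```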
- Deep Time Series Models: A Comprehensive Survey and Benchmark [74.28364194333447]
Time series data is of great significance in real-world scenarios.
Recent years have witnessed remarkable breakthroughs in the time series community.
We release Time Series Library (TSLib) as a fair benchmark of deep time series models for diverse analysis tasks.
arXiv Detail & Related papers (2024-07-18T08:31:55Z)
- Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
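Reprogramming, as described for Time-LLM, maps series patches into a frozen LLM's embedding space by attending over a small bank of text prototypes drawn from its word embeddings. A sketch under those assumptions (the frozen LLM itself is omitted; shapes and the prototype bank are illustrative):

```python
# Sketch of the "reprogramming" idea: turn series patches into LLM-space
# tokens via attention over text prototypes. Not Time-LLM's exact code.
import torch
import torch.nn as nn

d_llm, n_protos, patch_len = 768, 32, 16
word_emb = torch.randn(5000, d_llm)                    # stand-in for LLM vocab embeddings
prototypes = word_emb[torch.randperm(5000)[:n_protos]] # small prototype bank

patch_proj = nn.Linear(patch_len, d_llm)               # series patch -> query
reprogram = nn.MultiheadAttention(d_llm, num_heads=8, batch_first=True)

patches = torch.randn(2, 6, patch_len)                 # (B, N patches, patch_len)
q = patch_proj(patches)                                # (B, N, d_llm)
kv = prototypes.unsqueeze(0).repeat(2, 1, 1)           # (B, n_protos, d_llm)
llm_inputs, _ = reprogram(q, kv, kv)                   # tokens for the frozen LLM
print(llm_inputs.shape)                                # torch.Size([2, 6, 768])
```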
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)