Text2Freq: Learning Series Patterns from Text via Frequency Domain
- URL: http://arxiv.org/abs/2411.00929v1
- Date: Fri, 01 Nov 2024 16:11:02 GMT
- Title: Text2Freq: Learning Series Patterns from Text via Frequency Domain
- Authors: Ming-Chih Lo, Ching Chang, Wen-Chih Peng
- Abstract summary: Text2Freq is a cross-modality model that integrates text and time series data via the frequency domain.
Our experiments on paired datasets of real-world stock prices and synthetic texts show that Text2Freq achieves state-of-the-art performance.
- Score: 8.922661807801227
- Abstract: Traditional time series forecasting models mainly rely on historical numeric values to predict future outcomes. While these models have shown promising results, they often overlook the rich information available in other modalities, such as textual descriptions of special events, which can provide crucial insights into future dynamics. However, research that jointly incorporates text in time series forecasting remains relatively underexplored compared to other cross-modality work. Additionally, the modality gap between time series data and textual information poses a challenge for multimodal learning. To address this task, we propose Text2Freq, a cross-modality model that integrates text and time series data via the frequency domain. Specifically, our approach aligns textual information to the low-frequency components of time series data, establishing more effective and interpretable alignments between these two modalities. Our experiments on paired datasets of real-world stock prices and synthetic texts show that Text2Freq achieves state-of-the-art performance, with its adaptable architecture encouraging future research in this field.
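The paper's implementation is not reproduced here, but the core idea of extracting a low-frequency band from a series with an FFT and training a projection of a text embedding to match it can be sketched briefly. Everything below (module name, projection head, squared-error alignment loss, dimensions) is an assumption for illustration, not the authors' released code:

```python
import torch
import torch.nn as nn

class LowFreqAligner(nn.Module):
    """Toy sketch: align a text embedding with the k lowest-frequency
    FFT coefficients of a time series. All names and design choices
    here are illustrative assumptions, not the paper's architecture."""

    def __init__(self, text_dim: int, num_low_freqs: int):
        super().__init__()
        self.k = num_low_freqs
        # Predict real and imaginary parts of k complex coefficients.
        self.proj = nn.Linear(text_dim, 2 * num_low_freqs)

    def forward(self, series: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # series: (batch, length); text_emb: (batch, text_dim)
        spec = torch.fft.rfft(series, dim=-1)            # (batch, length//2 + 1)
        low = spec[..., : self.k]                        # keep the low-frequency band
        pred = self.proj(text_emb).view(-1, self.k, 2)   # (batch, k, 2)
        pred = torch.view_as_complex(pred.contiguous())  # (batch, k) complex
        # Squared-error alignment between predicted and observed coefficients.
        return (pred - low).abs().pow(2).mean()

# Usage: 8 series of length 96 paired with 768-dim text embeddings.
model = LowFreqAligner(text_dim=768, num_low_freqs=8)
loss = model(torch.randn(8, 96), torch.randn(8, 768))
loss.backward()
```

Restricting the alignment to the low-frequency band reflects the intuition that text tends to describe coarse dynamics (trend, broad cycles) rather than high-frequency noise, which is one plausible reading of why the frequency-domain alignment is "more effective and interpretable".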
Related papers
- Domain-Independent Automatic Generation of Descriptive Texts for Time-Series Data [5.264562311559751]
We propose a method to generate domain-independent descriptive texts from time-series data.
By implementing the novel backward approach, we create the Temporal Automated Captions for Observations (TACO) dataset.
Experimental results demonstrate that a contrastive learning based model trained using the TACO dataset is capable of generating descriptive texts for time-series data in novel domains.
arXiv Detail & Related papers (2024-09-25T06:04:03Z)
- Analyzing Temporal Complex Events with Large Language Models? A Benchmark towards Temporal, Long Context Understanding [57.62275091656578]
We refer to a complex event composed of many news articles over an extended period as a Temporal Complex Event (TCE).
This paper proposes a novel approach using Large Language Models (LLMs) to systematically extract and analyze the event chain within a TCE.
arXiv Detail & Related papers (2024-06-04T16:42:17Z)
- Beyond Trend and Periodicity: Guiding Time Series Forecasting with Textual Cues [9.053923035530152]
This work introduces a novel Text-Guided Time Series Forecasting (TGTSF) task.
By integrating textual cues, such as channel descriptions and dynamic news, TGTSF addresses the critical limitations of traditional methods.
We propose TGForecaster, a robust baseline model that fuses textual cues and time series data using cross-attention mechanisms (a minimal sketch of this fusion pattern appears after this list).
arXiv Detail & Related papers (2024-05-22T10:45:50Z)
- Dataset Condensation for Time Series Classification via Dual Domain Matching [12.317728375957717]
We propose a novel framework named Dataset Condensation for Time Series Classification via Dual Domain Matching.
Our proposed framework aims to generate a condensed dataset that matches the surrogate objectives in both the time and frequency domains (a toy version of this dual-domain objective is sketched after this list).
arXiv Detail & Related papers (2024-03-12T02:05:06Z)
- Text2Data: Low-Resource Data Generation with Textual Control [104.38011760992637]
Natural language serves as a common and straightforward control signal for humans to interact seamlessly with machines.
We propose Text2Data, a novel approach that utilizes unlabeled data to understand the underlying data distribution through an unsupervised diffusion model.
It undergoes controllable finetuning via a novel constraint optimization-based learning objective that ensures controllability and effectively counteracts catastrophic forgetting.
arXiv Detail & Related papers (2024-02-08T03:41:39Z)
- UniTime: A Language-Empowered Unified Model for Cross-Domain Time Series Forecasting [59.11817101030137]
This research advocates for a unified model paradigm that transcends domain boundaries.
Learning an effective cross-domain model, however, presents several challenges.
We propose UniTime for effective cross-domain time series learning.
arXiv Detail & Related papers (2023-10-15T06:30:22Z)
- TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting [24.834846119163885]
We propose a novel framework, TEMPO, that can effectively learn time series representations.
TEMPO expands the capability to dynamically model real-world temporal phenomena from data across diverse domains.
arXiv Detail & Related papers (2023-10-08T00:02:25Z)
- Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
- JOIST: A Joint Speech and Text Streaming Model For ASR [63.15848310748753]
We present JOIST, an algorithm to train a streaming, cascaded encoder end-to-end (E2E) model with both speech-text paired inputs and text-only unpaired inputs.
We find that the best text representation for JOIST improves WER across a variety of search and rare-word test sets by 4-14% relative to a model not trained with text.
arXiv Detail & Related papers (2022-10-13T20:59:22Z)
- Data-to-text Generation with Variational Sequential Planning [74.3955521225497]
We consider the task of data-to-text generation, which aims to create textual output from non-linguistic input.
We propose a neural model enhanced with a planning component responsible for organizing high-level information in a coherent and meaningful way.
We infer latent plans sequentially with a structured variational model, while interleaving the steps of planning and generation.
arXiv Detail & Related papers (2022-02-28T13:17:59Z)
- Human-like Time Series Summaries via Trend Utility Estimation [13.560018516096754]
We propose a model to create human-like text descriptions for time series.
Our system finds patterns in time series data and ranks these patterns based on empirical observations of human behavior.
The output of our system is a natural language description of time series that attempts to match a human's summary of the same data.
arXiv Detail & Related papers (2020-01-16T06:09:50Z)
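As flagged in the TGTSF entry above, cross-attention is the fusion mechanism reported for TGForecaster. The following is a minimal sketch of that general pattern only; the layer sizes, the single attention block, and the forecasting head are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class CrossAttentionFuser(nn.Module):
    """Toy sketch of fusing textual cues into a series representation
    via cross-attention. Dimensions and the head are illustrative."""

    def __init__(self, d_model: int = 64, n_heads: int = 4, horizon: int = 24):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, horizon)

    def forward(self, series_tokens: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
        # series_tokens: (batch, n_patches, d_model) embedded series patches
        # text_tokens:   (batch, n_texts, d_model) encoded descriptions or news
        # Series queries attend over textual keys and values.
        fused, _ = self.attn(series_tokens, text_tokens, text_tokens)
        # Forecast the horizon from the last fused token.
        return self.head(fused[:, -1, :])  # (batch, horizon)

fuser = CrossAttentionFuser()
forecast = fuser(torch.randn(8, 12, 64), torch.randn(8, 5, 64))  # (8, 24)
```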
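And for the dual-domain matching idea from the dataset-condensation entry, a toy objective that compares condensed and real series in both the time and frequency domains might look like this; the mean-feature statistics and equal weighting are assumptions for illustration:

```python
import torch

def dual_domain_loss(synthetic: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
    """Toy dual-domain matching objective: penalize the gap between
    condensed (synthetic) and real series in the time domain and in
    the amplitude spectrum. Statistics and weighting are assumptions."""
    # Time domain: match the mean series of each set.
    time_loss = (synthetic.mean(0) - real.mean(0)).pow(2).mean()
    # Frequency domain: match the mean amplitude spectra.
    syn_amp = torch.fft.rfft(synthetic, dim=-1).abs().mean(0)
    real_amp = torch.fft.rfft(real, dim=-1).abs().mean(0)
    freq_loss = (syn_amp - real_amp).pow(2).mean()
    return time_loss + freq_loss

# Usage: 16 learnable condensed series vs. 256 real series, length 96.
loss = dual_domain_loss(torch.randn(16, 96, requires_grad=True), torch.randn(256, 96))
loss.backward()
```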