Beyond Forecasting: Compositional Time Series Reasoning for End-to-End Task Execution
- URL: http://arxiv.org/abs/2410.04047v2
- Date: Tue, 8 Oct 2024 16:28:23 GMT
- Title: Beyond Forecasting: Compositional Time Series Reasoning for End-to-End Task Execution
- Authors: Wen Ye, Yizhou Zhang, Wei Yang, Lumingyuan Tang, Defu Cao, Jie Cai, Yan Liu
- Abstract summary: We introduce Compositional Time Series Reasoning, a new task of handling intricate multistep reasoning tasks from time series data.
Specifically, this new task focuses on various question instances requiring structural and compositional reasoning abilities on time series data.
We develop TS-Reasoner, a program-aided approach that utilizes a large language model (LLM) to decompose a complex task into steps of programs.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In recent decades, there have been substantial advances in time series models and benchmarks across various individual tasks, such as time series forecasting, classification, and anomaly detection. Meanwhile, compositional reasoning in time series is prevalent in real-world applications (e.g., decision-making and compositional question answering) and is in great demand. Unlike simple tasks that primarily focus on predictive accuracy, compositional reasoning emphasizes the synthesis of diverse information from both time series data and various domain knowledge, making it distinct and substantially more challenging. In this paper, we introduce Compositional Time Series Reasoning, a new task of handling intricate multistep reasoning tasks from time series data. Specifically, this new task focuses on various question instances requiring structural and compositional reasoning abilities on time series data, such as decision-making and compositional question answering. As an initial attempt to tackle this novel task, we developed TS-Reasoner, a program-aided approach that utilizes a large language model (LLM) to decompose a complex task into steps of programs that leverage existing time series models and numerical subroutines. Unlike existing reasoning work, which only calls off-the-shelf modules, TS-Reasoner allows for the creation of custom modules and provides greater flexibility to incorporate domain knowledge as well as user-specified constraints. We demonstrate the effectiveness of our method through a comprehensive set of experiments. These promising results indicate potential opportunities in the new task of time series reasoning and highlight the need for further research.
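To make the program-aided design concrete, here is a minimal sketch of how an LLM-generated plan could be executed as a sequence of modular steps. The module names (forecast, detect_anomalies, check_constraint), the plan format, and the naive drift forecaster are illustrative assumptions, not TS-Reasoner's actual interface.

```python
import numpy as np

# Illustrative module registry; real program steps would wrap trained
# time series models and numerical subroutines.
def forecast(series, horizon):
    # Naive drift forecast as a stand-in for a learned forecaster.
    drift = (series[-1] - series[0]) / max(len(series) - 1, 1)
    return series[-1] + drift * np.arange(1, horizon + 1)

def detect_anomalies(series, z_thresh=3.0):
    z = (series - series.mean()) / (series.std() + 1e-8)
    return np.where(np.abs(z) > z_thresh)[0]

def check_constraint(values, upper_bound):
    # User-specified domain constraint, e.g. a capacity limit.
    return bool(np.all(values <= upper_bound))

MODULES = {"forecast": forecast,
           "detect_anomalies": detect_anomalies,
           "check_constraint": check_constraint}

def execute_plan(plan, env):
    """Run (output_name, module, kwargs) steps; kwargs may reference
    earlier outputs by name via a '$' prefix."""
    for out, module, kwargs in plan:
        resolved = {k: env[v[1:]] if isinstance(v, str) and v.startswith("$") else v
                    for k, v in kwargs.items()}
        env[out] = MODULES[module](**resolved)
    return env

# A plan an LLM might emit for: "Will demand stay under 120 next week?"
history = np.array([90., 95., 99., 104., 108., 111., 115.])
plan = [("pred", "forecast", {"series": "$history", "horizon": 7}),
        ("ok", "check_constraint", {"values": "$pred", "upper_bound": 120.0})]
result = execute_plan(plan, {"history": history})
print(result["ok"], result["pred"].round(1))
```

The registry is the point of flexibility: custom modules, such as the user-specified constraint check above, can be registered alongside off-the-shelf time series models.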
Related papers
- Agentic Retrieval-Augmented Generation for Time Series Analysis
We propose a novel agentic Retrieval-Augmented Generation framework for time series analysis.
Our proposed modular multi-agent RAG approach offers flexibility and achieves state-of-the-art performance across major time series tasks.
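As a toy illustration of the agentic routing idea, the sketch below retrieves the stored key most similar to an incoming query and dispatches to a specialized sub-agent; the store, embeddings, and agents are invented placeholders, not the paper's framework.

```python
import numpy as np

# Toy key/agent store: one embedding key per task type (placeholders).
STORE = {"forecast": np.array([1.0, 0.0]), "anomaly": np.array([0.0, 1.0])}
AGENTS = {"forecast": lambda s: s[-1],  # naive last-value forecast
          "anomaly": lambda s: int(np.abs(s[-1] - s.mean()) > 2 * s.std())}

def route(query_embedding, series):
    # Dispatch to the sub-agent whose key best matches the query.
    task = max(STORE, key=lambda k: STORE[k] @ query_embedding)
    return task, AGENTS[task](series)

series = np.r_[np.ones(9), 5.0]
print(route(np.array([0.1, 0.9]), series))  # routes to the anomaly agent
```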
arXiv Detail & Related papers (2024-08-18T11:47:55Z)
- Deep Time Series Models: A Comprehensive Survey and Benchmark
Time series data is of great significance in real-world scenarios.
Recent years have witnessed remarkable breakthroughs in the time series community.
We release Time Series Library (TSLib) as a fair benchmark of deep time series models for diverse analysis tasks.
arXiv Detail & Related papers (2024-07-18T08:31:55Z)
- UniCL: A Universal Contrastive Learning Framework for Large Time Series Models
Time-series analysis plays a pivotal role across a range of critical applications, from finance to healthcare.
Traditional supervised learning methods rely on annotating extensive labels for time-series data in each task.
This paper introduces UniCL, a universal and scalable contrastive learning framework designed for pretraining time-series foundation models.
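A minimal sketch of the kind of contrastive pretraining objective such a framework builds on: an InfoNCE-style loss over two jittered views of each series. The augmentation and encoder here are placeholder assumptions, not UniCL's actual components.

```python
import torch
import torch.nn.functional as F

def jitter(x, sigma=0.03):
    # Simple augmentation: additive Gaussian noise.
    return x + sigma * torch.randn_like(x)

def info_nce(z1, z2, temperature=0.1):
    """Contrastive loss: matching views of the same series are positives,
    all other series in the batch act as negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature   # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))    # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Placeholder encoder: project each 64-step series to a 32-dim embedding.
encoder = torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.ReLU(),
                              torch.nn.Linear(128, 32))

batch = torch.randn(16, 64)              # 16 series, 64 time steps
loss = info_nce(encoder(jitter(batch)), encoder(jitter(batch)))
loss.backward()
```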
arXiv Detail & Related papers (2024-05-17T07:47:11Z)
- A Survey of Time Series Foundation Models: Generalizing Time Series Representation with Large Language Model
Large language foundation models have unveiled their capabilities for cross-task transferability, zero-shot/few-shot learning, and decision-making explainability.
There are two main research lines, namely pre-training foundation models from scratch for time series and adapting large language foundation models for time series.
This survey offers a 3E analytical framework for a comprehensive examination of related research.
arXiv Detail & Related papers (2024-05-03T03:12:55Z)
- Position: What Can Large Language Models Tell Us about Time Series Analysis
We argue that current large language models (LLMs) have the potential to revolutionize time series analysis.
Such advancement could unlock a wide range of possibilities, including time series modality switching and question answering.
arXiv Detail & Related papers (2024-02-05T04:17:49Z)
- Time-LLM: Time Series Forecasting by Reprogramming Large Language Models
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
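The reprogramming idea can be pictured as cross-attending time series patch embeddings onto a small bank of text-derived prototypes so that a frozen LLM can consume them. A rough sketch under that reading, with invented dimensions and a random prototype bank standing in for word-embedding-derived ones:

```python
import torch
import torch.nn as nn

class PatchReprogrammer(nn.Module):
    """Cross-attend time series patch embeddings onto text prototypes so a
    frozen LLM can ingest them; a sketch of the reprogramming idea."""
    def __init__(self, patch_len=16, d_llm=768, n_prototypes=100):
        super().__init__()
        self.patch_embed = nn.Linear(patch_len, d_llm)
        # A small prototype bank (random here for illustration; Time-LLM
        # derives these from the LLM's word embeddings).
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, d_llm))
        self.attn = nn.MultiheadAttention(d_llm, num_heads=8, batch_first=True)

    def forward(self, patches):             # (B, n_patches, patch_len)
        q = self.patch_embed(patches)       # queries come from the series
        kv = self.prototypes.expand(patches.size(0), -1, -1)
        out, _ = self.attn(q, kv, kv)       # (B, n_patches, d_llm)
        return out                          # fed into the frozen LLM

x = torch.randn(4, 32, 16)                  # 4 series, 32 patches of length 16
tokens = PatchReprogrammer()(x)
print(tokens.shape)                         # torch.Size([4, 32, 768])
```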
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
- TACTiS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series
Building on copula theory, we propose a simplified objective for the recently introduced transformer-based attentional copulas (TACTiS).
We show that the resulting model has significantly better training dynamics and achieves state-of-the-art performance across diverse real-world forecasting tasks.
arXiv Detail & Related papers (2023-10-02T16:45:19Z)
- Feature Programming for Multivariate Time Series Prediction
We introduce the concept of programmable feature engineering for time series modeling.
We propose a feature programming framework that generates large amounts of predictive features for noisy time series.
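For flavor, a small sketch of what programmatic feature generation can look like: one noisy series is expanded into a feature matrix from a grid of primitive operators and window sizes. The primitive set here is an assumption, not the paper's actual operators.

```python
import numpy as np
import pandas as pd

def program_features(series, windows=(3, 7, 14)):
    """Expand a single series into many candidate predictors by composing
    simple primitives (lag, difference, rolling statistics)."""
    s = pd.Series(series)
    feats = {"lag_1": s.shift(1), "diff_1": s.diff(1)}
    for w in windows:
        feats[f"roll_mean_{w}"] = s.rolling(w).mean()
        feats[f"roll_std_{w}"] = s.rolling(w).std()
        feats[f"roll_min_{w}"] = s.rolling(w).min()
        feats[f"roll_max_{w}"] = s.rolling(w).max()
    return pd.DataFrame(feats).dropna()

noisy = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.3 * np.random.randn(200)
X = program_features(noisy)
print(X.shape, list(X.columns)[:4])
```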
arXiv Detail & Related papers (2023-06-09T20:46:55Z)
- Unlocking Temporal Question Answering for Large Language Models with Tailor-Made Reasoning Logic
Large language models (LLMs) face a challenge when engaging in temporal reasoning.
We propose TempLogic, a novel framework designed specifically for temporal question-answering tasks.
arXiv Detail & Related papers (2023-05-24T10:57:53Z)
- Multi-Task Learning with Sequence-Conditioned Transporter Networks
We aim to solve multi-task learning through the lens of sequence-conditioning and weighted sampling.
We propose a new suite of benchmarks aimed at compositional tasks, MultiRavens, which allows defining custom task combinations.
We also propose a vision-based end-to-end system architecture, Sequence-Conditioned Transporter Networks, which augments Goal-Conditioned Transporter Networks with sequence-conditioning and weighted sampling.
arXiv Detail & Related papers (2021-09-15T21:19:11Z)
- Multi-Task Time Series Forecasting With Shared Attention
We propose two self-attention based sharing schemes for multi-task time series forecasting.
Our proposed architectures outperform not only state-of-the-art single-task forecasting baselines but also the RNN-based multi-task forecasting method.
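One plausible sharing scheme, sketched below: a single self-attention encoder shared across tasks with task-specific forecasting heads. The architecture details are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SharedAttentionForecaster(nn.Module):
    """One self-attention encoder shared by all tasks, plus a separate
    linear forecasting head per task."""
    def __init__(self, d_model=64, n_tasks=3, horizon=12):
        super().__init__()
        self.input_proj = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.shared_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.heads = nn.ModuleList(nn.Linear(d_model, horizon)
                                   for _ in range(n_tasks))

    def forward(self, x, task_id):           # x: (B, T, 1)
        h = self.shared_encoder(self.input_proj(x))
        return self.heads[task_id](h[:, -1])  # forecast from the last state

model = SharedAttentionForecaster()
x = torch.randn(8, 48, 1)                     # 8 series, 48 observed steps
print(model(x, task_id=0).shape)              # torch.Size([8, 12])
```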
arXiv Detail & Related papers (2021-01-24T04:25:08Z)