Evaluating Large Language Models on Time Series Feature Understanding: A Comprehensive Taxonomy and Benchmark
- URL: http://arxiv.org/abs/2404.16563v1
- Date: Thu, 25 Apr 2024 12:24:37 GMT
- Title: Evaluating Large Language Models on Time Series Feature Understanding: A Comprehensive Taxonomy and Benchmark
- Authors: Elizabeth Fons, Rachneet Kaur, Soham Palande, Zhen Zeng, Svitlana Vyetrenko, Tucker Balch
- Abstract summary: Large Language Models (LLMs) offer the potential for automatic time series analysis and reporting.
We introduce a comprehensive taxonomy of time series features, a critical framework that delineates various characteristics inherent in time series data.
This dataset acts as a solid foundation for assessing the proficiency of LLMs in comprehending time series.
- Score: 6.922105778325232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) offer the potential for automatic time series analysis and reporting, a critical task across many domains, spanning healthcare, finance, climate, energy, and many more. In this paper, we propose a framework for rigorously evaluating the capabilities of LLMs on time series understanding, encompassing both univariate and multivariate forms. We introduce a comprehensive taxonomy of time series features, a critical framework that delineates the various characteristics inherent in time series data. Leveraging this taxonomy, we have systematically designed and synthesized a diverse dataset of time series embodying the different outlined features. This dataset acts as a solid foundation for assessing the proficiency of LLMs in comprehending time series. Our experiments shed light on the strengths and limitations of state-of-the-art LLMs in time series understanding, revealing which features these models readily comprehend and where they falter. In addition, we uncover the sensitivity of LLMs to factors including the formatting of the data, the position of the points queried within a series, and the overall time series length.
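As an illustration of the kind of pipeline the abstract describes, the sketch below synthesizes a toy series combining a few taxonomy-style features (trend, seasonality, noise) and serializes it into an LLM prompt. All function names and parameters here are hypothetical and not taken from the paper, whose taxonomy and dataset are far richer.

```python
import numpy as np

def make_series(n=48, trend=0.5, season_period=12, noise=0.1, seed=0):
    """Synthesize a toy univariate series combining three illustrative
    features: a linear trend, a sinusoidal seasonality, and Gaussian noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    return (trend * t
            + np.sin(2 * np.pi * t / season_period)
            + rng.normal(0.0, noise, n))

def format_for_prompt(series, decimals=2):
    """Render the series as comma-separated text; the paper reports that LLM
    accuracy is sensitive to exactly this kind of formatting choice."""
    return ", ".join(f"{x:.{decimals}f}" for x in series)

# A feature-understanding query of the general kind the benchmark poses.
prompt = ("Does the following series have an increasing trend?\n"
          + format_for_prompt(make_series()))
```

Varying `decimals`, the separator, or `n` probes the formatting and length sensitivities the abstract mentions.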
Related papers
- Deep Time Series Models: A Comprehensive Survey and Benchmark [74.28364194333447]
Time series data is of great significance in real-world scenarios.
Recent years have witnessed remarkable breakthroughs in the time series community.
We release Time Series Library (TSLib) as a fair benchmark of deep time series models for diverse analysis tasks.
arXiv Detail & Related papers (2024-07-18T08:31:55Z)
- Empowering Time Series Analysis with Large Language Models: A Survey [24.202539098675953]
We provide a systematic overview of methods that leverage large language models for time series analysis.
Specifically, we first state the challenges and motivations of applying language models in the context of time series.
Next, we categorize existing methods into different groups (i.e., direct query, tokenization, prompt design, fine-tune, and model integration) and highlight the key ideas within each group.
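The "direct query" and "tokenization" groups above both concern how raw values reach the model as text. A toy sketch of two such serialization choices follows; the function names are illustrative, not from the survey.

```python
def direct_query(values):
    """'Direct query' style: hand the raw numbers to the model as plain text."""
    return "Forecast the next value: " + " ".join(str(v) for v in values)

def digit_tokenization(values):
    """Digit-level formatting: spaces between digits, a common trick so the
    tokenizer sees one token per digit rather than arbitrary digit chunks."""
    return " , ".join(" ".join(str(v)) for v in values)
```

For example, `digit_tokenization([12, 3])` yields `"1 2 , 3"`, whereas `direct_query` leaves the numbers intact.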
arXiv Detail & Related papers (2024-02-05T16:46:35Z)
- Position: What Can Large Language Models Tell Us about Time Series Analysis [69.70906014827547]
We argue that current large language models (LLMs) have the potential to revolutionize time series analysis.
Such advancement could unlock a wide range of possibilities, including time series modality switching and question answering.
arXiv Detail & Related papers (2024-02-05T04:17:49Z)
- AutoTimes: Autoregressive Time Series Forecasters via Large Language Models [67.83502953961505]
We propose AutoTimes, an autoregressive time series forecaster that independently projects time series segments into the embedding space and autoregressively generates future predictions of arbitrary length.
AutoTimes achieves state-of-the-art performance with 0.1% trainable parameters and over 5x training/inference speedup compared to advanced LLM-based forecasters.
arXiv Detail & Related papers (2024-02-04T06:59:21Z)
- Large Language Models for Time Series: A Survey [34.24258745427964]
Large Language Models (LLMs) have seen significant use in domains such as natural language processing and computer vision.
LLMs present a significant potential for analysis of time series data, benefiting domains such as climate, IoT, healthcare, traffic, audio and finance.
arXiv Detail & Related papers (2024-02-02T07:24:35Z)
- GATGPT: A Pre-trained Large Language Model with Graph Attention Network for Spatiotemporal Imputation [19.371155159744934]
In real-world settings, such data often contain missing elements due to issues like sensor malfunctions and data transmission errors.
The objective of spatiotemporal imputation is to estimate these missing values by understanding the inherent spatial and temporal relationships in the observed time series.
Traditionally, spatiotemporal imputation has relied on intricate, task-specific architectures, which suffer from limited applicability and high computational complexity.
In contrast, our approach integrates pre-trained large language models (LLMs) into spatiotemporal imputation, introducing a groundbreaking framework, GATGPT.
arXiv Detail & Related papers (2023-11-24T08:15:11Z)
- Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
- Novel Features for Time Series Analysis: A Complex Networks Approach [62.997667081978825]
Time series data are ubiquitous in several domains as climate, economics and health care.
A recent conceptual approach relies on mapping time series to complex networks.
Network analysis can be used to characterize different types of time series.
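One widely used instance of such a mapping is the natural visibility graph (one of several mappings this line of work covers): each time point becomes a node, and two points are linked if the straight line between them clears every intermediate value. A minimal sketch:

```python
import itertools

def visibility_graph(series):
    """Natural visibility graph of a time series.

    Nodes are indices 0..n-1; i and j are linked if the straight line from
    (i, series[i]) to (j, series[j]) passes above every bar in between."""
    edges = set()
    for i, j in itertools.combinations(range(len(series)), 2):
        visible = all(
            series[k] < series[j]
            + (series[i] - series[j]) * (j - k) / (j - i)
            for k in range(i + 1, j)
        )
        if visible:
            edges.add((i, j))
    return edges
```

Properties of the resulting graph (degree distribution, clustering, motifs) then serve as features characterizing the series, e.g. `visibility_graph([1, 2, 1])` links only the adjacent points because the middle peak blocks the endpoints from seeing each other.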
arXiv Detail & Related papers (2021-10-11T13:46:28Z)
- Time Series Analysis via Network Science: Concepts and Algorithms [62.997667081978825]
This review provides a comprehensive overview of existing mapping methods for transforming time series into networks.
We describe the main conceptual approaches, provide authoritative references and give insight into their advantages and limitations in a unified notation and language.
Although still very recent, this research area has much potential and with this survey we intend to pave the way for future research on the topic.
arXiv Detail & Related papers (2021-10-11T13:33:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.