QuAnTS: Question Answering on Time Series
- URL: http://arxiv.org/abs/2511.05124v1
- Date: Fri, 07 Nov 2025 10:07:03 GMT
- Title: QuAnTS: Question Answering on Time Series
- Authors: Felix Divo, Maurice Kraus, Anh Q. Nguyen, Hao Xue, Imran Razzak, Flora D. Salim, Kristian Kersting, Devendra Singh Dhami
- Abstract summary: We propose a novel time series QA dataset, QuAnTS, for Question Answering on Time Series data. We pose a wide variety of questions and answers about human motion in the form of tracked skeleton trajectories. We verify that the large-scale QuAnTS dataset is well-formed and comprehensive through extensive experiments.
- Score: 50.91478742616324
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Text offers intuitive access to information. This can, in particular, complement the density of numerical time series, thereby allowing improved interactions with time series models to enhance accessibility and decision-making. While the creation of question-answering datasets and models has recently seen remarkable growth, most research focuses on question answering (QA) on vision and text, with time series receiving minute attention. To bridge this gap, we propose a challenging novel time series QA (TSQA) dataset, QuAnTS, for Question Answering on Time Series data. Specifically, we pose a wide variety of questions and answers about human motion in the form of tracked skeleton trajectories. We verify that the large-scale QuAnTS dataset is well-formed and comprehensive through extensive experiments. Thoroughly evaluating existing and newly proposed baselines then lays the groundwork for a deeper exploration of TSQA using QuAnTS. Additionally, we provide human performances as a key reference for gauging the practical usability of such models. We hope to encourage future research on interacting with time series models through text, enabling better decision-making and more transparent systems.
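As a rough illustration of what a TSQA sample over skeleton trajectories might look like, here is a minimal sketch. Note that the field names, array shapes, and the example question are purely hypothetical assumptions for illustration, not the actual QuAnTS schema:

```python
import numpy as np

# Hypothetical TSQA sample (illustrative only, not the real QuAnTS format):
# a tracked skeleton trajectory paired with a natural-language QA pair.
T, J = 120, 17  # assumed sizes: 120 frames, 17 joints
rng = np.random.default_rng(0)
trajectory = rng.standard_normal((T, J, 3))  # (frames, joints, xyz coordinates)

sample = {
    "series": trajectory,
    "question": "Does the person raise their right arm during the clip?",
    "answer": "yes",
}

# A TSQA model would consume the numerical series together with the question
# text and produce an answer string, scored against the reference answer.
assert sample["series"].shape == (T, J, 3)
```

The key point the sketch conveys is the modality pairing: the input is a dense multivariate numerical series, while the query and target are free-form text.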
Related papers
- Time-MQA: Time Series Multi-Task Question Answering with Context Enhancement [55.2439260314328]
Time Series Multi-Task Question Answering (Time-MQA) is a unified framework that enables natural language queries across multiple time series tasks. Central to Time-MQA is the TSQA dataset, a large-scale dataset containing ~200k question-answer pairs.
arXiv Detail & Related papers (2025-02-26T13:47:13Z)
- Language in the Flow of Time: Time-Series-Paired Texts Weaved into a Unified Temporal Narrative [65.84249211767921]
Texts as Time Series (TaTS) can be plugged into any existing numerical-only time series models. We show that TaTS can enhance predictive performance without modifying model architectures.
arXiv Detail & Related papers (2025-02-13T03:43:27Z)
- Domain-Oriented Time Series Inference Agents for Reasoning and Automated Analysis [19.649769354503658]
We introduce TS-Reasoner, a Domain-Oriented Time Series Agent that integrates natural language reasoning with precise numerical execution. We evaluate its capabilities along two axes: basic time series understanding and complex multi-step inference.
arXiv Detail & Related papers (2024-10-05T06:04:19Z)
- Deep Time Series Models: A Comprehensive Survey and Benchmark [60.742416934632416]
Time series present unique challenges due to their intricate and dynamic nature. Recent years have witnessed remarkable breakthroughs in the time series community. We release Time Series Library (TSLib) as a fair benchmark of deep time series models for diverse analysis tasks.
arXiv Detail & Related papers (2024-07-18T08:31:55Z)
- Continual Learning for Temporal-Sensitive Question Answering [12.76582814745124]
In real-world applications, it is crucial for models to continually acquire knowledge over time, rather than relying on a static, complete dataset.
Our paper investigates strategies that enable models to adapt to the ever-evolving information landscape.
We propose a training framework for continual learning for temporal-sensitive question answering (CLTSQA) that integrates temporal memory replay and temporal contrastive learning.
arXiv Detail & Related papers (2024-07-17T10:47:43Z)
- Empowering Time Series Analysis with Foundation Models: A Comprehensive Survey [32.794229758722985]
Time series data are ubiquitous across diverse real-world applications. Traditional approaches are largely task-specific, offering limited functionality and poor transferability. Foundation models have revolutionized NLP and CV with their remarkable cross-task transferability.
arXiv Detail & Related papers (2024-05-03T03:12:55Z)
- UNK-VQA: A Dataset and a Probe into the Abstention Ability of Multi-modal Large Models [55.22048505787125]
This paper contributes a comprehensive dataset, called UNK-VQA.
We first augment the existing data via deliberate perturbations on either the image or question.
We then extensively evaluate the zero- and few-shot performance of several emerging multi-modal large models.
arXiv Detail & Related papers (2023-10-17T02:38:09Z)
- A Dataset for Answering Time-Sensitive Questions [88.95075983560331]
Time is an important dimension in our physical world, and many facts evolve over time.
It is important to consider the time dimension and empower the existing QA models to reason over time.
Existing QA datasets contain rather few time-sensitive questions and are hence not suitable for diagnosing or benchmarking a model's temporal reasoning capability.
arXiv Detail & Related papers (2021-08-13T16:42:25Z)