NUM2EVENT: Interpretable Event Reasoning from Numerical time-series
- URL: http://arxiv.org/abs/2510.23630v1
- Date: Fri, 24 Oct 2025 02:57:11 GMT
- Title: NUM2EVENT: Interpretable Event Reasoning from Numerical time-series
- Authors: Ninghui Feng, Yiyan Qi,
- Abstract summary: We introduce the task of number-to-event reasoning and decoding, which aims to infer interpretable structured events from numerical inputs. To address the data scarcity and semantic alignment challenges, we propose a reasoning-aware framework. Our model explicitly reasons over numerical changes, generates intermediate explanations, and outputs structured event hypotheses.
- Score: 6.45945124018154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) have recently demonstrated impressive multimodal reasoning capabilities, yet their understanding of purely numerical time-series signals remains limited. Existing approaches mainly focus on forecasting or trend description, without uncovering the latent events that drive numerical changes or explaining the reasoning process behind them. In this work, we introduce the task of number-to-event reasoning and decoding, which aims to infer interpretable structured events from numerical inputs, even when current text is unavailable. To address the data scarcity and semantic alignment challenges, we propose a reasoning-aware framework that integrates an agent-guided event extractor (AGE), a marked multivariate Hawkes-based synthetic generator (EveDTS), and a two-stage fine-tuning pipeline combining a time-series encoder with a structured decoder. Our model explicitly reasons over numerical changes, generates intermediate explanations, and outputs structured event hypotheses. Experiments on multi-domain datasets show that our method substantially outperforms strong LLM baselines in event-level precision and recall. These results suggest a new direction for bridging quantitative reasoning and semantic understanding, enabling LLMs to explain and predict events directly from numerical dynamics.
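The abstract describes EveDTS only as a "marked multivariate Hawkes-based synthetic generator" without further detail. As a rough illustration of what such a generator does, here is a minimal sketch of simulating a marked multivariate Hawkes process with exponential kernels via Ogata's thinning algorithm; all function names, parameters, and values are illustrative, not taken from the paper:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, marks, horizon, seed=0):
    """Simulate a marked multivariate Hawkes process by Ogata's thinning.

    mu:    baseline intensities, one per event type
    alpha: alpha[i][j] = excitation of type i by a past type-j event
    beta:  decay rate of the exponential excitation kernel
    marks: mark label for each event type
    """
    rng = random.Random(seed)
    d = len(mu)
    events = []  # list of (time, type-index) pairs
    t = 0.0

    def intensity(i, now):
        # lambda_i(t) = mu_i + sum_{(s, j) past} alpha[i][j] * exp(-beta * (t - s))
        lam = mu[i]
        for s, j in events:
            lam += alpha[i][j] * math.exp(-beta * (now - s))
        return lam

    while t < horizon:
        # With decaying kernels, total intensity just after t bounds it until the
        # next event, so it is a valid thinning envelope.
        lam_bar = sum(intensity(i, t) for i in range(d))
        t += rng.expovariate(lam_bar)
        if t >= horizon:
            break
        lams = [intensity(i, t) for i in range(d)]
        if rng.random() * lam_bar <= sum(lams):  # accept with prob lambda(t)/lam_bar
            # pick the event type proportionally to its current intensity
            u, acc, chosen = rng.random() * sum(lams), 0.0, 0
            for k, lam in enumerate(lams):
                acc += lam
                if u <= acc:
                    chosen = k
                    break
            events.append((t, chosen))
    return [(s, marks[j]) for s, j in events]
```

In a synthetic-data setting like the one the abstract sketches, each mark would carry a structured event label, and the simulated (time, mark) stream would then be rendered into numerical series plus event annotations for training.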
Related papers
- UniT: Unified Multimodal Chain-of-Thought Test-time Scaling [85.590774707406]
Unified models can handle both multimodal understanding and generation within a single architecture, yet they typically operate in a single pass without iteratively refining their outputs. We introduce UniT, a framework for multimodal test-time scaling that enables a single unified model to reason, verify, and refine across multiple rounds.
arXiv Detail & Related papers (2026-02-12T18:59:49Z) - Accelerate Speculative Decoding with Sparse Computation in Verification [49.74839681322316]
Speculative decoding accelerates autoregressive language model inference by verifying multiple draft tokens in parallel. Existing sparsification methods are designed primarily for standard token-by-token autoregressive decoding. We propose a sparse verification framework that jointly sparsifies attention, FFN, and MoE components during the verification stage to reduce the dominant computation cost.
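The blurb above builds on the standard speculative-decoding acceptance rule. As a reminder of that baseline (independent of the sparse-verification method the paper proposes), here is a minimal sketch; the probability callbacks are illustrative stand-ins, not any real model API:

```python
import random

def verify_draft(draft_tokens, p_draft, p_target, seed=0):
    """Standard speculative-decoding acceptance: each draft token x at position i
    is accepted with probability min(1, p_target(i, x) / p_draft(i, x)); the first
    rejection discards the remaining drafts. (The subsequent corrective resample
    from the residual distribution is omitted here for brevity.)"""
    rng = random.Random(seed)
    accepted = []
    for pos, tok in enumerate(draft_tokens):
        q = p_draft(pos, tok)   # draft model's probability of this token
        p = p_target(pos, tok)  # target model's probability, computed in parallel
        if rng.random() < min(1.0, p / q):
            accepted.append(tok)
        else:
            break
    return accepted
```

The verification stage scores all draft positions with the target model in one forward pass, which is exactly the computation the paper's sparse attention/FFN/MoE scheme aims to cheapen.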
arXiv Detail & Related papers (2025-12-26T07:53:41Z) - UniDiff: A Unified Diffusion Framework for Multimodal Time Series Forecasting [90.47915032778366]
We propose UniDiff, a unified diffusion framework for multimodal time series forecasting. At its core lies a unified and parallel fusion module, where a single cross-attention mechanism integrates structural information from timestamps and semantic context from texts. Experiments on real-world benchmark datasets across eight domains demonstrate that the proposed UniDiff model achieves state-of-the-art performance.
arXiv Detail & Related papers (2025-12-08T05:36:14Z) - AXIS: Explainable Time Series Anomaly Detection with Large Language Models [33.68487894996624]
AXIS is a framework that conditions a frozen Large Language Model (LLM) for nuanced time-series understanding. LLMs operate on discrete tokens and struggle to directly process long, continuous signals. We introduce a new benchmark featuring multi-format questions and rationales that supervise contextual grounding and pattern-level semantics.
arXiv Detail & Related papers (2025-09-29T07:24:22Z) - Towards Explainable Sequential Learning [0.2318095974878009]
This paper presents EMeriTAte+DF (DataFul Explainable MultivariatE coRrelatIonal Temporal Artificial inTElligence), a hybrid explainable temporal data processing pipeline.
arXiv Detail & Related papers (2025-05-29T16:30:59Z) - TimeXL: Explainable Multi-modal Time Series Prediction with LLM-in-the-Loop [79.5773512667468]
TimeXL is a multi-modal prediction framework that integrates a prototype-based time series encoder with three collaborating Large Language Models. A reflection LLM compares the predicted values against the ground truth, identifying textual inconsistencies or noise. This closed-loop workflow of prediction, critique (reflection), and refinement continuously boosts the framework's performance and interpretability.
arXiv Detail & Related papers (2025-03-02T20:40:53Z) - TimeCAP: Learning to Contextualize, Augment, and Predict Time Series Events with Large Language Model Agents [52.13094810313054]
TimeCAP is a time-series processing framework that creatively employs Large Language Models (LLMs) as contextualizers of time series data. TimeCAP incorporates two independent LLM agents: one generates a textual summary capturing the context of the time series, while the other uses this enriched summary to make more informed predictions. Experimental results on real-world datasets demonstrate that TimeCAP outperforms state-of-the-art methods for time series event prediction.
arXiv Detail & Related papers (2025-02-17T04:17:27Z) - TempoGPT: Enhancing Time Series Reasoning via Quantizing Embedding [13.996105878417204]
We propose a multi-modal time series data construction approach and a multi-modal time series language model (TLM), TempoGPT. We construct multi-modal data for complex reasoning tasks by analyzing the variable-system relationships within a white-box system. Extensive experiments demonstrate that TempoGPT accurately perceives temporal information, logically infers conclusions, and achieves state-of-the-art performance on the constructed complex time series reasoning tasks.
arXiv Detail & Related papers (2025-01-13T13:47:05Z) - Domain-Oriented Time Series Inference Agents for Reasoning and Automated Analysis [19.649769354503658]
We introduce TS-Reasoner, a Domain-Oriented Time Series Agent that integrates natural language reasoning with precise numerical execution. We evaluate its capabilities along two axes: basic time series understanding and complex multi-step inference.
arXiv Detail & Related papers (2024-10-05T06:04:19Z) - Confident Adaptive Language Modeling [95.45272377648773]
CALM is a framework for dynamically allocating different amounts of compute per input and generation timestep.
We demonstrate the efficacy of our framework in reducing compute -- a potential speedup of up to $\times 3$ -- while provably maintaining high performance.
arXiv Detail & Related papers (2022-07-14T17:00:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.