Reasoning on Time-Series for Financial Technical Analysis
- URL: http://arxiv.org/abs/2511.08616v1
- Date: Thu, 13 Nov 2025 01:00:45 GMT
- Title: Reasoning on Time-Series for Financial Technical Analysis
- Authors: Kelvin J. L. Koa, Jan Chen, Yunshan Ma, Huanhuan Zheng, Tat-Seng Chua
- Abstract summary: We introduce Verbal Technical Analysis (VTA), a novel framework that combines verbal and latent reasoning to produce stock time-series forecasts. Experiments on stock datasets across U.S., Chinese, and European markets show that VTA achieves state-of-the-art forecasting accuracy.
- Score: 45.81831399666851
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While Large Language Models have been used to produce interpretable stock forecasts, they mainly focus on analyzing textual reports rather than historical price data, a task also known as Technical Analysis. This task is challenging because it switches between domains: the stock price inputs and outputs lie in the time-series domain, while the reasoning step should be in natural language. In this work, we introduce Verbal Technical Analysis (VTA), a novel framework that combines verbal and latent reasoning to produce stock time-series forecasts that are both accurate and interpretable. To reason over time-series, we convert stock price data into textual annotations and optimize the reasoning trace using an inverse Mean Squared Error (MSE) reward objective. To produce time-series outputs from textual reasoning, we condition the outputs of a time-series backbone model on the reasoning-based attributes. Experiments on stock datasets across U.S., Chinese, and European markets show that VTA achieves state-of-the-art forecasting accuracy, while the reasoning traces also perform well on evaluation by industry experts.
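The inverse-MSE reward mentioned in the abstract can be illustrated with a minimal sketch. The exact form used in VTA is not given in the abstract; the function name and the epsilon smoothing term below are assumptions for illustration only.

```python
def inverse_mse_reward(forecast: list[float], target: list[float], eps: float = 1e-8) -> float:
    """Reward that grows as forecast error shrinks: 1 / (MSE + eps).

    A reasoning trace that leads to a more accurate forecast receives a
    larger reward; eps avoids division by zero for a perfect forecast.
    """
    mse = sum((f - t) ** 2 for f, t in zip(forecast, target)) / len(target)
    return 1.0 / (mse + eps)
```

Under this formulation, a reasoning trace whose resulting forecast exactly matches the target receives the maximum reward (1/eps), and the reward decays smoothly as the forecast error grows.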
Related papers
- Unlocking Reasoning Capability on Machine Translation in Large Language Models [57.60641851466707]
Reasoning-oriented large language models (RLMs) achieve strong gains on tasks such as mathematics and coding by generating explicit intermediate reasoning. We systematically evaluate several open- and closed-weight RLMs on the WMT24++ benchmark. We find that enabling explicit reasoning consistently degrades translation quality across languages and models.
arXiv Detail & Related papers (2026-02-16T14:05:59Z) - Forecasting Future Language: Context Design for Mention Markets [81.25011140991566]
We study how input context should be designed to support accurate prediction in mention markets. We find three insights: (1) richer context consistently improves forecasting performance; (2) market-conditioned prompting (MCP), which treats the market probability as a prior and updates it using textual evidence, yields better-calibrated forecasts; and (3) a mixture of the market probability and MCP (MixMCP) outperforms the market baseline.
arXiv Detail & Related papers (2026-02-04T12:43:31Z) - RETuning: Upgrading Inference-Time Scaling for Stock Movement Prediction with Large Language Models [37.97736341087795]
We study a three-class classification problem (up, hold, down) and observe that large language models (LLMs) tend to follow analysts' opinions rather than exhibit systematic, independent analytical logic in their chains of thought (CoTs). We propose Reflective Evidence Tuning (RETuning), a cold-start method applied prior to reinforcement learning, to enhance prediction ability. We build a large-scale dataset spanning all of 2024 for 5,123 A-share stocks, with long contexts (32K tokens) and over 200K samples.
arXiv Detail & Related papers (2025-10-24T16:08:33Z) - VISTA: Vision-Language Inference for Training-Free Stock Time-Series Analysis [0.0]
We introduce VISTA (Vision-Language Inference for Stock Time-series Analysis), a training-free framework for multi-modal stock forecasting. We benchmark VISTA against standard baselines, including ARIMA and text-only LLM-based prompting methods. We show that VISTA outperforms these baselines by up to 89.83%, demonstrating the effectiveness of multi-modal inference for stock time-series analysis.
arXiv Detail & Related papers (2025-05-24T07:20:14Z) - BreakGPT: Leveraging Large Language Models for Predicting Asset Price Surges [55.2480439325792]
This paper introduces BreakGPT, a novel large language model (LLM) architecture adapted specifically for time series forecasting and the prediction of sharp upward movements in asset prices.
We showcase BreakGPT as a promising solution for financial forecasting with minimal training and as a strong competitor for capturing both local and global temporal dependencies.
arXiv Detail & Related papers (2024-11-09T05:40:32Z) - Context is Key: A Benchmark for Forecasting with Essential Textual Information [87.3175915185287]
"Context is Key" (CiK) is a forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context. We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters. We propose a simple yet effective LLM prompting method that outperforms all other tested methods on our benchmark.
arXiv Detail & Related papers (2024-10-24T17:56:08Z) - XForecast: Evaluating Natural Language Explanations for Time Series Forecasting [72.57427992446698]
Time series forecasting aids decision-making, especially for stakeholders who rely on accurate predictions.
Traditional explainable AI (XAI) methods, which underline feature or temporal importance, often require expert knowledge.
However, evaluating forecast NLEs is difficult due to the complex causal relationships in time series data.
arXiv Detail & Related papers (2024-10-18T05:16:39Z) - StockTime: A Time Series Specialized Large Language Model Architecture for Stock Price Prediction [13.52020491768311]
We introduce StockTime, a novel LLM-based architecture designed specifically for stock price time series data.
Unlike recent FinLLMs, StockTime is specifically designed for stock price time series data.
By fusing this multimodal data, StockTime effectively predicts stock prices across arbitrary look-back periods.
arXiv Detail & Related papers (2024-08-25T00:50:33Z) - LLMFactor: Extracting Profitable Factors through Prompts for Explainable Stock Movement Prediction [5.519288891583653]
We introduce a novel framework called LLMFactor to identify factors that influence stock movements.
Unlike previous methods that relied on keyphrases or sentiment analysis, this approach focuses on extracting factors more directly related to stock market dynamics.
Our framework directs the LLMs to create background knowledge through a fill-in-the-blank strategy and then discerns potential factors affecting stock prices from related news.
arXiv Detail & Related papers (2024-06-16T06:20:50Z) - Natural Language Processing and Multimodal Stock Price Prediction [0.8702432681310401]
This paper utilizes stock percentage change as training data, in contrast to the traditional use of raw currency values.
The choice of percentage change aims to provide models with context regarding the significance of price fluctuations.
The study employs specialized BERT natural language processing models to predict stock price trends.
arXiv Detail & Related papers (2024-01-03T01:21:30Z)
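The percentage-change preprocessing described in the last entry above can be sketched as follows. This is a minimal illustration of the general technique; the helper name is an assumption, not code from the paper.

```python
def percent_change(prices: list[float]) -> list[float]:
    """Convert raw prices to period-over-period percentage changes.

    Training on percentage changes rather than raw currency values gives
    a model scale-free inputs, so the significance of a move is comparable
    across stocks with very different price levels.
    """
    return [(curr - prev) / prev * 100.0 for prev, curr in zip(prices, prices[1:])]
```

For example, `percent_change([100.0, 110.0, 99.0])` yields `[10.0, -10.0]`: a 10% rise followed by a 10% fall, regardless of the absolute price level.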
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.