Temporal Data Meets LLM -- Explainable Financial Time Series Forecasting
- URL: http://arxiv.org/abs/2306.11025v1
- Date: Mon, 19 Jun 2023 15:42:02 GMT
- Title: Temporal Data Meets LLM -- Explainable Financial Time Series Forecasting
- Authors: Xinli Yu, Zheng Chen, Yuan Ling, Shujing Dong, Zongyi Liu, Yanbin Lu
- Abstract summary: We focus on NASDAQ-100 stocks, making use of publicly accessible historical stock price data, company metadata, and historical economic/financial news.
We show that a publicly available LLM such as Open-LLaMA, after fine-tuning, can comprehend the instruction to generate explainable forecasts.
- Score: 7.485041391778341
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents a novel study on harnessing Large Language Models' (LLMs)
outstanding knowledge and reasoning abilities for explainable financial time
series forecasting. The application of machine learning models to financial
time series comes with several challenges, including the difficulty in
cross-sequence reasoning and inference, the hurdle of incorporating multi-modal
signals from historical news, financial knowledge graphs, etc., and the issue
of interpreting and explaining the model results. In this paper, we focus on
NASDAQ-100 stocks, making use of publicly accessible historical stock price
data, company metadata, and historical economic/financial news. We conduct
experiments to illustrate the potential of LLMs in offering a unified solution
to the aforementioned challenges. Our experiments include zero-shot/few-shot
inference with GPT-4 and instruction-based fine-tuning of the publicly
available Open-LLaMA model. We demonstrate that our approach outperforms
several baselines, including the widely applied classic ARMA-GARCH model and a
gradient-boosting tree model. Through the performance comparisons and a few
examples, we find that LLMs can make well-reasoned decisions by reasoning over
information from both textual news and price time series, extracting insights,
leveraging cross-sequence information, and utilizing the inherent knowledge
embedded within the LLM. Additionally, we show that a publicly available LLM
such as Open-LLaMA, after fine-tuning, can follow the instructions to generate
explainable forecasts and achieve reasonable performance, albeit inferior in
comparison to GPT-4.
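The abstract does not include the paper's prompts; the following is a minimal sketch of how a zero-shot forecasting prompt combining price history and news might be assembled. The function names, prompt wording, and the query_llm stub are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of a zero-shot prompt for explainable stock forecasting,
# assembled from price history and news headlines. All identifiers here
# (query_llm, build_forecast_prompt, the prompt wording) are illustrative
# assumptions, not the authors' actual prompts or code.

def query_llm(prompt: str) -> str:
    """Stand-in for a call to GPT-4 or a fine-tuned Open-LLaMA model."""
    return "<model response>"

def build_forecast_prompt(ticker: str, closes: list[float], headlines: list[str]) -> str:
    price_text = ", ".join(f"{p:.2f}" for p in closes)
    news_text = "\n".join(f"- {h}" for h in headlines)
    return (
        f"You are a financial analyst. Recent daily closing prices for {ticker}: "
        f"{price_text}.\nRecent news:\n{news_text}\n"
        "Predict whether the stock goes up or down next week, "
        "and explain your reasoning in 2-3 sentences."
    )

prompt = build_forecast_prompt(
    "AAPL",
    [182.3, 184.1, 183.7, 186.0, 185.2],
    ["Apple unveils new AI features", "Supply chain concerns ease in Asia"],
)
print(query_llm(prompt))
```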
Related papers
- LLMFactor: Extracting Profitable Factors through Prompts for Explainable Stock Movement Prediction [5.519288891583653]
We introduce a novel framework called LLMFactor to identify factors that influence stock movements.
Unlike previous methods that relied on keyphrases or sentiment analysis, this approach focuses on extracting factors more directly related to stock market dynamics.
Our framework directs the LLMs to create background knowledge through a fill-in-the-blank strategy and then discerns potential factors affecting stock prices from related news.
arXiv Detail & Related papers (2024-06-16T06:20:50Z)
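A minimal sketch of the two-step prompting LLMFactor describes: a fill-in-the-blank prompt elicits background knowledge, then a second prompt extracts factors from related news. The prompt wording and the query_llm stub are assumptions for illustration, not the paper's exact prompts.

```python
# Illustrative sketch of LLMFactor's two-step prompting as described in the
# abstract: fill-in-the-blank background knowledge, then factor extraction
# from related news. Prompt wording and query_llm are assumptions.

def query_llm(prompt: str) -> str:
    return "<model response>"  # stand-in for an actual LLM API call

ticker = "NVDA"
# Step 1: fill-in-the-blank background knowledge about the company.
background = query_llm(
    f"Complete the sentence with factual background: {ticker} is a company that ____."
)
# Step 2: extract candidate factors affecting the stock from recent news.
news = "NVIDIA reports record data-center revenue; rivals announce new chips."
factors = query_llm(
    f"Background: {background}\nNews: {news}\n"
    f"List the factors most likely to move {ticker}'s stock price."
)
print(factors)
```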
- AlphaFin: Benchmarking Financial Analysis with Retrieval-Augmented Stock-Chain Framework [48.3060010653088]
We release AlphaFin datasets, combining traditional research datasets, real-time financial data, and handwritten chain-of-thought (CoT) data.
We then use AlphaFin datasets to benchmark a state-of-the-art method, called Stock-Chain, for effectively tackling the financial analysis task.
arXiv Detail & Related papers (2024-03-19T09:45:33Z)
- Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs [60.40396361115776]
This paper introduces a novel collaborative approach, namely SlimPLM, that detects missing knowledge in large language models (LLMs) with a slim proxy model.
We employ a proxy model with far fewer parameters and take its answers as heuristic answers.
Heuristic answers are then utilized to predict the knowledge required to answer the user question, as well as the known and unknown knowledge within the LLM.
arXiv Detail & Related papers (2024-02-19T11:11:08Z)
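A minimal sketch of the SlimPLM idea: a slim proxy model drafts a heuristic answer, and a simple check on that draft decides whether retrieval is needed before querying the large LLM. The stubs and the hedging-word heuristic are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the SlimPLM idea: a small proxy model drafts a heuristic answer,
# and a crude check on that draft decides whether the large model needs
# retrieval. Both functions are illustrative stubs, not the paper's code.

def proxy_answer(question: str) -> str:
    """Stand-in for a slim proxy LLM with far fewer parameters."""
    return "I am not sure, possibly 2021?"

def needs_retrieval(heuristic_answer: str) -> bool:
    """Crude uncertainty heuristic: hedging words suggest missing knowledge."""
    hedges = ("not sure", "possibly", "unknown", "cannot")
    return any(h in heuristic_answer.lower() for h in hedges)

question = "When did NVIDIA first enter the NASDAQ-100?"
draft = proxy_answer(question)
if needs_retrieval(draft):
    print("Retrieve external documents before querying the large LLM.")
else:
    print("Answer directly with the large LLM; knowledge appears sufficient.")
```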
- Learning to Generate Explainable Stock Predictions using Self-Reflective Large Language Models [54.21695754082441]
We propose a framework to teach Large Language Models (LLMs) to generate explainable stock predictions.
A reflective agent learns how to explain past stock movements through self-reasoning, while the PPO trainer trains the model to generate the most likely explanations.
Our framework can outperform both traditional deep-learning and LLM methods in prediction accuracy and Matthews correlation coefficient.
arXiv Detail & Related papers (2024-02-06T03:18:58Z)
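A minimal sketch of the self-reflection loop described above: the model explains a past stock movement, a verifier checks that the explanation is consistent with the realized direction, and only consistent explanations are kept (e.g., as data for the PPO stage, which is omitted here). All functions are illustrative stubs, not the paper's code.

```python
# Reflection loop sketch: explain a past movement, verify the explanation
# commits to the realized direction, keep only consistent explanations.
# PPO training is omitted; every function here is an illustrative stub.

def explain_movement(headline: str, move: str) -> str:
    """Stand-in for an LLM asked to explain why the stock moved as it did."""
    return f"The stock went {move} because of: {headline}"

def implied_direction(explanation: str) -> str:
    """Toy verifier: read which direction the explanation commits to."""
    return "up" if "up" in explanation.lower().split() else "down"

samples = [("Earnings beat expectations", "up"), ("CEO resigns unexpectedly", "down")]
accepted = []
for headline, realized in samples:
    explanation = explain_movement(headline, realized)
    if implied_direction(explanation) == realized:  # self-consistency check
        accepted.append((headline, explanation))
print(f"Kept {len(accepted)} explanations for downstream training.")
```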
- Rethinking Interpretability in the Era of Large Language Models [76.1947554386879]
Large language models (LLMs) have demonstrated remarkable capabilities across a wide array of tasks.
The capability to explain in natural language allows LLMs to expand the scale and complexity of patterns that can be conveyed to a human.
These new capabilities raise new challenges, such as hallucinated explanations and immense computational costs.
arXiv Detail & Related papers (2024-01-30T17:38:54Z)
- Chain of History: Learning and Forecasting with LLMs for Temporal Knowledge Graph Completion [24.545917737620197]
Temporal Knowledge Graph Completion (TKGC) is a complex task involving the prediction of missing event links at future timestamps.
This paper aims to provide a comprehensive perspective on harnessing the advantages of Large Language Models for reasoning in temporal knowledge graphs.
arXiv Detail & Related papers (2024-01-11T17:42:47Z)
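For readers unfamiliar with the task: temporal knowledge graphs store facts as (subject, relation, object, timestamp) quadruples, and TKGC asks for a missing element at a future timestamp. A generic illustration of the data structure and a forecasting query follows (not the paper's method).

```python
# Generic illustration of temporal knowledge graph data: facts are
# (subject, relation, object, timestamp) quadruples, and a forecasting
# query leaves one slot open at a future timestamp. All entities are invented.

from typing import NamedTuple

class Quadruple(NamedTuple):
    subject: str
    relation: str
    obj: str
    timestamp: str

history = [
    Quadruple("CompanyA", "acquires", "StartupX", "2021-03"),
    Quadruple("CompanyA", "partners_with", "CompanyB", "2022-07"),
]

# An LLM-based approach would serialize the history into a prompt and ask
# the model to fill in the missing entity at the future timestamp.
query = ("CompanyA", "acquires", "?", "2024-01")
print(f"History: {history}\nPredict missing object for: {query}")
```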
- Enhancing Financial Sentiment Analysis via Retrieval Augmented Large Language Models [11.154814189699735]
Large Language Models (LLMs) pre-trained on extensive corpora have demonstrated superior performance across various NLP tasks.
We introduce a retrieval-augmented LLMs framework for financial sentiment analysis.
Our approach achieves a 15% to 48% performance gain in accuracy and F1 score.
arXiv Detail & Related papers (2023-10-06T05:40:23Z)
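A minimal sketch of retrieval-augmented sentiment classification: retrieve context related to the input text, then classify with that context in the prompt. The keyword-overlap retriever and the query_llm stub are illustrative assumptions, not the paper's framework.

```python
# Retrieval-augmented sentiment sketch: fetch related context, then classify
# with the context included in the prompt. The toy retriever and query_llm
# stub are assumptions for illustration.

def query_llm(prompt: str) -> str:
    return "negative"  # stand-in for an actual LLM call

corpus = [
    "Fed holds rates steady; markets rally on dovish tone.",
    "Chipmaker guidance cut sends semiconductor stocks lower.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

text = "Semiconductor stocks slide after weak guidance."
context = "\n".join(retrieve(text, corpus))
sentiment = query_llm(
    f"Context:\n{context}\n\nClassify the sentiment of this statement as "
    f"positive, negative, or neutral: {text}"
)
print(sentiment)
```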
- PIXIU: A Large Language Model, Instruction Data and Evaluation Benchmark for Finance [63.51545277822702]
PIXIU is a comprehensive framework including the first financial large language model (LLM) based on fine-tuning LLaMA with instruction data.
We propose FinMA by fine-tuning LLaMA with the constructed dataset to be able to follow instructions for various financial tasks.
We conduct a detailed analysis of FinMA and several existing LLMs, uncovering their strengths and weaknesses in handling critical financial tasks.
arXiv Detail & Related papers (2023-06-08T14:20:29Z)
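The abstract does not show PIXIU's data format; below is the common instruction/input/output record shape used for instruction fine-tuning LLaMA-style models on financial tasks. The field values are invented examples, not drawn from the PIXIU dataset.

```python
# Illustrative shape of an instruction-tuning record for a financial task,
# in the common instruction/input/output format for LLaMA-style fine-tuning.
# The field values are invented, not from the PIXIU dataset.

import json

record = {
    "instruction": "Classify the sentiment of the following financial headline "
                   "as positive, negative, or neutral.",
    "input": "Company X beats quarterly revenue estimates and raises guidance.",
    "output": "positive",
}
print(json.dumps(record, indent=2))
```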
- Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models [51.3422222472898]
We document the capability of large language models (LLMs) like ChatGPT to predict stock price movements using news headlines.
We develop a theoretical model incorporating information capacity constraints, underreaction, limits-to-arbitrage, and LLMs.
arXiv Detail & Related papers (2023-04-15T19:22:37Z)
- What do LLMs Know about Financial Markets? A Case Study on Reddit Market Sentiment Analysis [15.195505464654493]
Market sentiment analysis on social media content requires knowledge of both financial markets and social media jargon.
Our pipeline generates weak financial sentiment labels for Reddit posts with a large language model (LLM).
With only a handful of prompts, the final model performs on par with existing supervised models.
arXiv Detail & Related papers (2022-12-21T19:11:19Z)
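A minimal sketch of such a weak-labeling pipeline: an LLM assigns noisy sentiment labels to posts, which then serve as training data for a small supervised classifier. The stub labeler and the scikit-learn classifier choice are illustrative assumptions, not the paper's exact pipeline.

```python
# Weak-labeling pipeline sketch: an LLM assigns noisy sentiment labels to
# Reddit posts, then a small supervised model is trained on those labels.
# The stub labeler and scikit-learn choice are illustrative assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def llm_weak_label(post: str) -> str:
    """Stand-in for prompting an LLM to label market sentiment."""
    return "bullish" if "calls" in post.lower() or "moon" in post.lower() else "bearish"

posts = [
    "Loaded up on calls before earnings, this is going to the moon",
    "Selling everything, this market is cooked",
    "Bought more calls on the dip",
    "Puts printing today, rough week ahead",
]
weak_labels = [llm_weak_label(p) for p in posts]

# Train a small supervised classifier on the weakly labeled data.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(posts, weak_labels)
print(clf.predict(["thinking about buying calls tomorrow"]))
```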