Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models
- URL: http://arxiv.org/abs/2304.07619v5
- Date: Wed, 11 Sep 2024 21:23:04 GMT
- Title: Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models
- Authors: Alejandro Lopez-Lira, Yuehua Tang
- Abstract summary: We document the capability of large language models (LLMs) like ChatGPT to predict stock price movements using news headlines.
We develop a theoretical model incorporating information capacity constraints, underreaction, limits-to-arbitrage, and LLMs.
- Score: 51.3422222472898
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We document the capability of large language models (LLMs) like ChatGPT to predict stock price movements using news headlines, even without direct financial training. ChatGPT scores significantly predict out-of-sample daily stock returns, subsuming traditional methods, and predictability is stronger among smaller stocks and following negative news. To explain these findings, we develop a theoretical model incorporating information capacity constraints, underreaction, limits-to-arbitrage, and LLMs. The model generates several key predictions, which we empirically test: (i) it establishes a critical threshold in AI capabilities necessary for profitable predictions, (ii) it demonstrates that only advanced LLMs can effectively interpret complex information, and (iii) it predicts that widespread LLM adoption can enhance market efficiency. Our results suggest that sophisticated return forecasting is an emerging capability of AI systems and that these technologies can alter information diffusion and decision-making processes in financial markets. Finally, we introduce an interpretability framework to evaluate LLMs' reasoning, contributing to AI transparency and economic decision-making.
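For intuition, the core scoring step described in the abstract (prompting an LLM to judge whether a headline is good or bad news for a stock and using the answer as a return signal) can be sketched as follows. This is a minimal illustration assuming the OpenAI Python client; the prompt wording, model name, and +1/0/-1 mapping are assumptions for exposition, not the paper's exact specification.

```python
# Minimal sketch of LLM headline scoring (assumed setup, not the paper's exact
# prompt or model): ask a chat model whether a headline is good or bad news for
# a company's stock price and map the answer to a numeric signal.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def score_headline(headline: str, company: str) -> int:
    """Return +1 for good news, -1 for bad news, 0 if uncertain."""
    prompt = (
        f"You are a financial expert with stock recommendation experience. "
        f"Answer YES if this headline is good news for the stock price of "
        f"{company}, NO if it is bad news, or UNKNOWN if uncertain, then give "
        f"a one-sentence reason.\nHeadline: {headline}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",  # assumed model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answer = resp.choices[0].message.content.strip().upper()
    if answer.startswith("YES"):
        return 1
    if answer.startswith("NO"):
        return -1
    return 0

# Daily scores per stock could then be averaged and used as a cross-sectional
# long-short signal for next-day returns, as the abstract describes.
```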
Related papers
- BreakGPT: Leveraging Large Language Models for Predicting Asset Price Surges [55.2480439325792]
This paper introduces BreakGPT, a novel large language model (LLM) architecture adapted specifically for time series forecasting and the prediction of sharp upward movements in asset prices.
We showcase BreakGPT as a promising solution for financial forecasting with minimal training and as a strong competitor for capturing both local and global temporal dependencies.
arXiv Detail & Related papers (2024-11-09T05:40:32Z)
- Harnessing Earnings Reports for Stock Predictions: A QLoRA-Enhanced LLM Approach [6.112119533910774]
This paper introduces an approach in which Large Language Models (LLMs) are instruction fine-tuned with a novel combination of instruction-based techniques and quantized low-rank adaptation (QLoRA) compression (a hedged configuration sketch appears after this list).
Our methodology integrates 'base factors', such as financial metric growth and earnings transcripts, with 'external factors', including recent market index performance and analyst grades, to create a rich, supervised dataset.
This study not only demonstrates the power of integrating cutting-edge AI with fine-tuned financial data but also paves the way for future research in enhancing AI-driven financial analysis tools.
arXiv Detail & Related papers (2024-08-13T04:53:31Z)
- Monetizing Currency Pair Sentiments through LLM Explainability [2.572906392867547]
Large language models (LLMs) play a vital role in almost every domain in today's organizations.
We contribute a novel technique that leverages LLMs as a post-hoc, model-independent tool for explaining sentiment analysis.
We apply our technique in the financial domain for currency-pair price predictions using open news feed data merged with market prices.
arXiv Detail & Related papers (2024-07-29T11:58:54Z)
- Financial Statement Analysis with Large Language Models [0.0]
We provide standardized and anonymous financial statements to GPT-4 and instruct the model to analyze them.
The model outperforms financial analysts in its ability to predict earnings changes directionally.
Our trading strategies based on GPT's predictions yield higher Sharpe ratios and alphas than strategies based on other models.
arXiv Detail & Related papers (2024-07-25T08:36:58Z)
- AlphaFin: Benchmarking Financial Analysis with Retrieval-Augmented Stock-Chain Framework [48.3060010653088]
We release AlphaFin datasets, combining traditional research datasets, real-time financial data, and handwritten chain-of-thought (CoT) data.
We then use AlphaFin datasets to benchmark a state-of-the-art method, called Stock-Chain, for effectively tackling the financial analysis task.
arXiv Detail & Related papers (2024-03-19T09:45:33Z)
- Ploutos: Towards interpretable stock movement prediction with financial large language model [43.51934592920784]
Ploutos is a novel financial framework that consists of PloutosGen and PloutosGPT.
PloutosGen contains multiple primary experts that can analyze different modalities of data, such as text and numbers, and provide quantitative strategies from different perspectives.
The training strategy of PloutosGPT leverages a rearview-mirror prompting mechanism to guide GPT-4 to generate rationales and a dynamic token-weighting mechanism to fine-tune the LLM.
arXiv Detail & Related papers (2024-02-18T10:28:18Z)
- Learning to Generate Explainable Stock Predictions using Self-Reflective Large Language Models [54.21695754082441]
We propose a framework to teach Large Language Models (LLMs) to generate explainable stock predictions.
A reflective agent learns to explain past stock movements through self-reasoning, while a PPO trainer trains the model to generate the most likely explanations.
Our framework can outperform both traditional deep-learning and LLM methods in prediction accuracy and Matthews correlation coefficient.
arXiv Detail & Related papers (2024-02-06T03:18:58Z)
- Enhancing Financial Sentiment Analysis via Retrieval Augmented Large Language Models [11.154814189699735]
Large Language Models (LLMs) pre-trained on extensive corpora have demonstrated superior performance across various NLP tasks.
We introduce a retrieval-augmented LLM framework for financial sentiment analysis.
Our approach achieves a 15% to 48% gain in accuracy and F1 score.
arXiv Detail & Related papers (2023-10-06T05:40:23Z)
- Diffusion Variational Autoencoder for Tackling Stochasticity in Multi-Step Regression Stock Price Prediction [54.21695754082441]
Multi-step stock price prediction over a long-term horizon is crucial for forecasting its volatility.
Existing solutions for stock price prediction are mostly designed for single-step, classification-based forecasts.
We combine a deep hierarchical variational autoencoder (VAE) with diffusion probabilistic techniques to perform sequence-to-sequence (seq2seq) stock prediction.
Our model is shown to outperform state-of-the-art solutions in terms of its prediction accuracy and variance.
arXiv Detail & Related papers (2023-08-18T16:21:15Z)
- Evaluating and Explaining Large Language Models for Code Using Syntactic Structures [74.93762031957883]
This paper introduces ASTxplainer, an explainability method specific to Large Language Models for code.
At its core, ASTxplainer provides an automated method for aligning token predictions with AST nodes.
We perform an empirical evaluation on 12 popular LLMs for code using a curated dataset of the most popular GitHub projects.
arXiv Detail & Related papers (2023-08-07T18:50:57Z)
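As a companion to the QLoRA-based instruction fine-tuning mentioned in the "Harnessing Earnings Reports for Stock Predictions" entry above, here is a minimal configuration sketch using the Hugging Face transformers, peft, and bitsandbytes libraries. The base model, LoRA rank, and target modules are illustrative assumptions, not the settings reported in that paper.

```python
# Minimal QLoRA fine-tuning setup (sketch): load a causal LM with 4-bit NF4
# quantization and attach low-rank adapters. Model name and hyperparameters
# are illustrative assumptions, not the paper's reported configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # assumed base model for illustration

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit weight quantization (QLoRA)
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                                   # adapter rank (assumed)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attention projections (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Instruction-tuning data would pair prompts built from "base factors"
# (financial metric growth, earnings-call text) and "external factors"
# (market indices, analyst grades) with directional labels, then be passed
# to a standard supervised fine-tuning trainer.
```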
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.