Unveiling the Potential of Sentiment: Can Large Language Models Predict Chinese Stock Price Movements?
- URL: http://arxiv.org/abs/2306.14222v2
- Date: Sat, 4 May 2024 13:02:32 GMT
- Title: Unveiling the Potential of Sentiment: Can Large Language Models Predict Chinese Stock Price Movements?
- Authors: Haohan Zhang, Fengrui Hua, Chengjin Xu, Hao Kong, Ruiting Zuo, Jian Guo
- Abstract summary: This paper introduces a standardized experimental procedure for comprehensive evaluations of Large Language Models.
We detail the methodology using three distinct LLMs, each embodying a unique approach to performance enhancement.
We develop quantitative trading strategies using these sentiment factors and conduct back-tests in realistic scenarios.
- Score: 13.682396634686159
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid advancement of Large Language Models (LLMs) has spurred discussions about their potential to enhance quantitative trading strategies. LLMs excel in analyzing sentiments about listed companies from financial news, providing critical insights for trading decisions. However, the performance of LLMs in this task varies substantially due to their inherent characteristics. This paper introduces a standardized experimental procedure for comprehensive evaluations. We detail the methodology using three distinct LLMs, each embodying a unique approach to performance enhancement, applied specifically to the task of sentiment factor extraction from large volumes of Chinese news summaries. Subsequently, we develop quantitative trading strategies using these sentiment factors and conduct back-tests in realistic scenarios. Our results offer perspective on how well Large Language Models extract sentiment from Chinese news texts.
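To make the pipeline concrete, here is a minimal sketch of the kind of sentiment-factor extraction the abstract describes. The prompt wording, the [-1, 1] score scale, and the `llm` callable are illustrative assumptions, not the authors' published setup.

```python
# A minimal sketch of LLM-based sentiment factor extraction.
# The prompt wording, score scale, and `llm` callable are
# illustrative assumptions, not the paper's actual setup.
from collections import defaultdict
from statistics import mean

PROMPT = (
    "You are a financial analyst. Rate the sentiment of the following "
    "Chinese news summary toward the company it mentions, on a scale "
    "from -1 (very negative) to +1 (very positive). Reply with a "
    "number only.\n\n{news}"
)

def sentiment_score(llm, news: str) -> float:
    """llm: any callable mapping a prompt string to the model's reply."""
    reply = llm(PROMPT.format(news=news))
    return max(-1.0, min(1.0, float(reply)))

def daily_sentiment_factor(llm, items):
    """items: iterable of (ticker, news_summary) pairs for one trading day.
    Returns {ticker: mean sentiment}, usable as a cross-sectional factor."""
    scores = defaultdict(list)
    for ticker, news in items:
        scores[ticker].append(sentiment_score(llm, news))
    return {t: mean(s) for t, s in scores.items()}
```

A strategy can then rank stocks on this factor each day and trade the extremes, which is the general shape of the back-tests the abstract describes; the paper's exact prompts and portfolio construction may differ.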
Related papers
- TradExpert: Revolutionizing Trading with Mixture of Expert LLMs [25.243258134817054]
TradExpert is a novel framework that employs a mixture-of-experts (MoE) approach, using four specialized LLMs.
Our experimental results demonstrate TradExpert's superior performance across all trading scenarios.
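As a rough picture of the mixture-of-experts idea, here is a toy dispatch over specialized models; the four expert roles and the keyword router are guesses for illustration, not TradExpert's published design.

```python
# Illustrative mixture-of-experts dispatch over specialized LLMs.
# The expert roles and the keyword router are assumptions made for
# this sketch, not TradExpert's actual design.
EXPERTS = {
    "news": "llm-tuned-on-news",
    "fundamentals": "llm-tuned-on-filings",
    "technicals": "llm-tuned-on-market-data",
    "general": "llm-that-synthesizes-a-decision",
}

def route(query: str) -> str:
    """Toy gating function: pick an expert model by keyword."""
    q = query.lower()
    if "earnings" in q or "balance sheet" in q:
        return EXPERTS["fundamentals"]
    if "moving average" in q or "volume" in q:
        return EXPERTS["technicals"]
    if "headline" in q or "announced" in q:
        return EXPERTS["news"]
    return EXPERTS["general"]
```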
arXiv Detail & Related papers (2024-10-16T20:24:16Z)
- LLMFactor: Extracting Profitable Factors through Prompts for Explainable Stock Movement Prediction [5.519288891583653]
We introduce a novel framework called LLMFactor to identify factors that influence stock movements.
Unlike previous methods that relied on keyphrases or sentiment analysis, this approach focuses on extracting factors more directly related to stock market dynamics.
Our framework directs the LLMs to create background knowledge through a fill-in-the-blank strategy and then discerns potential factors affecting stock prices from related news.
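A hypothetical two-step prompt chain in this fill-in-the-blank style might look as follows; the templates are invented for illustration, not LLMFactor's published prompts.

```python
# Hypothetical two-step prompt chain in the fill-in-the-blank style;
# the templates are invented, not LLMFactor's actual prompts.
BACKGROUND_PROMPT = (
    "Fill in the blanks with relevant background knowledge:\n"
    "{company} operates in the ____ industry, and its stock price is "
    "mainly driven by ____."
)
FACTOR_PROMPT = (
    "Background: {background}\n"
    "News: {news}\n"
    "List the factors in this news most likely to move {company}'s "
    "stock price, one per line."
)

def extract_factors(llm, company: str, news: str) -> list[str]:
    """llm: any callable mapping a prompt string to the model's reply."""
    background = llm(BACKGROUND_PROMPT.format(company=company))
    reply = llm(FACTOR_PROMPT.format(background=background,
                                     news=news, company=company))
    return [line.strip() for line in reply.splitlines() if line.strip()]
```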
arXiv Detail & Related papers (2024-06-16T06:20:50Z)
- Analyzing and Adapting Large Language Models for Few-Shot Multilingual NLU: Are We There Yet? [82.02076369811402]
Supervised fine-tuning (SFT), supervised instruction tuning (SIT) and in-context learning (ICL) are three alternative, de facto standard approaches to few-shot learning.
We present an extensive and systematic comparison of the three approaches, testing them on 6 high- and low-resource languages, three different NLU tasks, and a myriad of language and domain setups.
Our observations show that supervised instruction tuning has the best trade-off between performance and resource requirements.
arXiv Detail & Related papers (2024-03-04T10:48:13Z)
- Learning to Generate Explainable Stock Predictions using Self-Reflective Large Language Models [54.21695754082441]
We propose a framework to teach Large Language Models (LLMs) to generate explainable stock predictions.
A reflective agent learns to explain past stock movements through self-reasoning, while a Proximal Policy Optimization (PPO) trainer trains the model to generate the most likely explanations.
Our framework can outperform both traditional deep-learning and LLM methods in prediction accuracy and Matthews correlation coefficient.
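A toy version of such a self-reflection loop is sketched below; the prompts are invented, and the PPO training stage is omitted entirely.

```python
# Toy self-reflection loop in the spirit of the summary above; the
# prompts are invented, and the PPO training stage is omitted.
def reflect(llm, price_history: str, rounds: int = 3) -> str:
    """Iteratively refine an explanation of past stock movements."""
    explanation = llm(f"Explain this stock's past movement:\n{price_history}")
    for _ in range(rounds):
        critique = llm("Critique this explanation for gaps or errors:\n"
                       + explanation)
        explanation = llm(
            f"Movement:\n{price_history}\n"
            f"Previous explanation:\n{explanation}\n"
            f"Critique:\n{critique}\n"
            "Write an improved explanation."
        )
    return explanation
```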
arXiv Detail & Related papers (2024-02-06T03:18:58Z)
- Integrating Stock Features and Global Information via Large Language Models for Enhanced Stock Return Prediction [5.762650600435391]
We propose a novel framework consisting of two components to surmount the challenges of integrating Large Language Models with existing quantitative models.
The framework demonstrates superior performance in Rank Information Coefficient and returns in the China A-share market, particularly compared to models relying only on stock features.
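For reference, the Rank Information Coefficient is conventionally the Spearman correlation between a factor's cross-sectional values and the next period's returns; a dependency-free sketch (with ties broken by position rather than averaged, for brevity):

```python
# Rank Information Coefficient: Spearman correlation between a
# factor's cross-sectional values and next-period returns.
# Ties are broken by position rather than averaged, for brevity.
def rank_ic(factor: list[float], returns: list[float]) -> float:
    n = len(factor)
    assert n == len(returns) and n > 1

    def ranks(xs: list[float]) -> list[float]:
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        out = [0.0] * len(xs)
        for rank, i in enumerate(order):
            out[i] = float(rank)
        return out

    rf, rr = ranks(factor), ranks(returns)
    mf, mr = sum(rf) / n, sum(rr) / n
    cov = sum((a - mf) * (b - mr) for a, b in zip(rf, rr))
    var_f = sum((a - mf) ** 2 for a in rf)
    var_r = sum((b - mr) ** 2 for b in rr)
    return cov / (var_f * var_r) ** 0.5

# Ranks agree exactly here, so the Rank IC is 1.0.
print(rank_ic([0.2, -0.1, 0.5, 0.0], [0.01, -0.02, 0.03, 0.00]))
```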
arXiv Detail & Related papers (2023-10-09T11:34:18Z)
- CMMLU: Measuring massive multitask language understanding in Chinese [133.70911295934746]
This paper introduces a comprehensive Chinese benchmark that covers various subjects, including the natural sciences, social sciences, engineering, and humanities.
CMMLU fills the gap in evaluating the knowledge and reasoning capabilities of large language models within the Chinese context.
arXiv Detail & Related papers (2023-06-15T15:49:51Z)
- Sentiment Analysis in the Era of Large Language Models: A Reality Check [69.97942065617664]
This paper investigates the capabilities of large language models (LLMs) in performing various sentiment analysis tasks.
We evaluate performance across 13 tasks on 26 datasets and compare the results against small language models (SLMs) trained on domain-specific datasets.
arXiv Detail & Related papers (2023-05-24T10:45:25Z)
- Introspective Tips: Large Language Model for In-Context Decision Making [48.96711664648164]
We employ "Introspective Tips" to facilitate large language models (LLMs) in self-optimizing their decision-making.
Our method enhances the agent's performance in both few-shot and zero-shot learning situations.
Experiments involving over 100 games in TextWorld illustrate the superior performance of our approach.
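The "tips" idea can be pictured as distilling past trajectories into short hints that are prepended when acting; the details below are assumptions for illustration, not the paper's actual prompts or training procedure.

```python
# Sketch of the "tips" idea: distill past trajectories into short
# hints and prepend them when acting; details are assumptions, not
# the paper's actual prompts or procedure.
def derive_tip(llm, trajectory: str) -> str:
    """Turn one past game trajectory into a single reusable tip."""
    return llm("From this game trajectory, write one short tip for "
               "doing better next time:\n" + trajectory)

def act(llm, tips: list[str], observation: str) -> str:
    """Choose the next action with accumulated tips in the prompt."""
    prompt = ("Tips:\n" + "\n".join(f"- {t}" for t in tips)
              + f"\nObservation: {observation}\nNext action:")
    return llm(prompt)
```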
arXiv Detail & Related papers (2023-05-19T11:20:37Z)
- Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models [51.3422222472898]
We document the capability of large language models (LLMs) like ChatGPT to predict stock price movements using news headlines.
We develop a theoretical model incorporating information capacity constraints, underreaction, limits-to-arbitrage, and LLMs.
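Such predictability is commonly measured with a daily long-short portfolio sorted on the model's scores; a toy version, assuming equal weights within each leg and ignoring transaction costs:

```python
# Toy daily long-short back-test on model scores, assuming equal
# weights within each leg and ignoring transaction costs.
def long_short_return(scores: dict[str, float],
                      next_returns: dict[str, float],
                      quantile: float = 0.2) -> float:
    """Go long the top-scored quantile, short the bottom quantile."""
    ranked = sorted(scores, key=scores.get)  # ascending by score
    k = max(1, int(len(ranked) * quantile))
    shorts, longs = ranked[:k], ranked[-k:]
    long_leg = sum(next_returns[t] for t in longs) / len(longs)
    short_leg = sum(next_returns[t] for t in shorts) / len(shorts)
    return long_leg - short_leg
```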
arXiv Detail & Related papers (2023-04-15T19:22:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.