Integrating Stock Features and Global Information via Large Language
Models for Enhanced Stock Return Prediction
- URL: http://arxiv.org/abs/2310.05627v1
- Date: Mon, 9 Oct 2023 11:34:18 GMT
- Title: Integrating Stock Features and Global Information via Large Language
Models for Enhanced Stock Return Prediction
- Authors: Yujie Ding, Shuai Jia, Tianyi Ma, Bingcheng Mao, Xiuze Zhou, Liuliu Li
and Dongming Han
- Abstract summary: We propose a novel framework consisting of two components to surmount the challenges of integrating Large Language Models with existing quantitative models.
We demonstrate superior performance in Rank Information Coefficient and returns compared to models that rely only on stock features in the China A-share market.
- Score: 5.762650600435391
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The remarkable achievements and rapid advancements of Large Language Models
(LLMs) such as ChatGPT and GPT-4 have showcased their immense potential in
quantitative investment. Traders can effectively leverage these LLMs to analyze
financial news and predict stock returns accurately. However, integrating LLMs
into existing quantitative models presents two primary challenges: the
insufficient utilization of semantic information embedded within LLMs and the
difficulties in aligning the latent information within LLMs with pre-existing
quantitative stock features. We propose a novel framework consisting of two
components to surmount these challenges. The first component, the Local-Global
(LG) model, introduces three distinct strategies for modeling global
information. These approaches are grounded respectively on stock features, the
capabilities of LLMs, and a hybrid method combining the two paradigms. The
second component, Self-Correlated Reinforcement Learning (SCRL), focuses on
aligning the embeddings of financial news generated by LLMs with stock features
within the same semantic space. By implementing our framework, we have
demonstrated superior performance in Rank Information Coefficient and returns
compared to models that rely only on stock features in the China A-share
market.
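For context, the Rank Information Coefficient reported above is conventionally computed as the cross-sectional Spearman rank correlation between a model's predicted scores and realized next-period returns, averaged over trading days. The sketch below illustrates this standard metric only; it is not the authors' code, and the random panel data is a hypothetical stand-in for real A-share features and returns.

```python
# Minimal sketch of the Rank Information Coefficient (Rank IC):
# the per-day cross-sectional Spearman correlation between model
# scores and realized next-period returns, averaged across days.
import numpy as np
from scipy.stats import spearmanr

def rank_ic(predictions: np.ndarray, returns: np.ndarray) -> float:
    """predictions, returns: arrays of shape (num_days, num_stocks).

    Each row holds one trading day's cross-section of model scores
    and the realized next-period returns for the same stocks.
    """
    daily_ics = [
        spearmanr(predictions[t], returns[t]).correlation
        for t in range(predictions.shape[0])
    ]
    return float(np.mean(daily_ics))

# Hypothetical usage: random data standing in for a real stock panel.
rng = np.random.default_rng(0)
preds = rng.normal(size=(250, 300))                  # 250 days x 300 stocks
rets = 0.05 * preds + rng.normal(size=preds.shape)   # weakly predictive scores
print(f"Rank IC: {rank_ic(preds, rets):.4f}")
```

A higher average Rank IC indicates that the model's daily ranking of stocks more closely tracks the ranking of their subsequent returns, which is why it is a common headline metric for cross-sectional return prediction.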
Related papers
- TradExpert: Revolutionizing Trading with Mixture of Expert LLMs [25.243258134817054]
TradExpert is a novel framework that employs a mixture-of-experts (MoE) approach, using four specialized LLMs.
Our experimental results demonstrate TradExpert's superior performance across all trading scenarios.
arXiv Detail & Related papers (2024-10-16T20:24:16Z)
- Automate Strategy Finding with LLM in Quant investment [4.46212317245124]
This paper proposes a novel framework for quantitative stock investment, covering portfolio management and alpha mining, in which large language models (LLMs) mine alpha factors from multimodal financial data.
Experiments on the Chinese stock markets demonstrate that this framework significantly outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2024-09-10T07:42:28Z)
- CatMemo at the FinLLM Challenge Task: Fine-Tuning Large Language Models using Data Fusion in Financial Applications [10.225210627594894]
This paper presents our solution to the IJCAI-2024 FinLLM challenge, investigating the capabilities of LLMs in three critical areas of financial tasks: financial classification, financial text summarization, and single stock trading.
Our approach tackles these tasks in a comprehensive and integrated manner, showcasing LLMs' capacity to address complex financial problems with improved accuracy and decision-making.
arXiv Detail & Related papers (2024-07-02T05:04:13Z)
- FAC$^2$E: Better Understanding Large Language Model Capabilities by Dissociating Language and Cognition [56.76951887823882]
Large language models (LLMs) are primarily evaluated by overall performance on various text understanding and generation tasks.
We present FAC$^2$E, a framework for Fine-grAined and Cognition-grounded LLMs' Capability Evaluation.
arXiv Detail & Related papers (2024-02-29T21:05:37Z)
- Knowledge Fusion of Large Language Models [73.28202188100646]
This paper introduces the notion of knowledge fusion for large language models (LLMs).
We externalize their collective knowledge and unique strengths, thereby elevating the capabilities of the target model beyond those of any individual source LLM.
Our findings confirm that the fusion of LLMs can improve the performance of the target model across a range of capabilities such as reasoning, commonsense, and code generation.
arXiv Detail & Related papers (2024-01-19T05:02:46Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emergent in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs: it 1) generalizes to out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z)
- ChEF: A Comprehensive Evaluation Framework for Standardized Assessment of Multimodal Large Language Models [49.48109472893714]
Multimodal Large Language Models (MLLMs) have shown impressive abilities in interacting with visual content across myriad potential downstream tasks.
We present the first Comprehensive Evaluation Framework (ChEF) that can holistically profile each MLLM and fairly compare different MLLMs.
We will publicly release all the detailed implementations for further analysis, as well as an easy-to-use modular toolkit for the integration of new recipes and models.
arXiv Detail & Related papers (2023-11-05T16:01:40Z)
- Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty [52.72790059506241]
The Open Information Extraction (OIE) task aims to extract structured facts from unstructured text.
Despite the potential of large language models (LLMs) like ChatGPT as a general task solver, they lag behind state-of-the-art (supervised) methods in OIE tasks.
arXiv Detail & Related papers (2023-09-07T01:35:24Z)
- Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models [51.3422222472898]
We document the capability of large language models (LLMs) like ChatGPT to predict stock price movements using news headlines.
We develop a theoretical model incorporating information capacity constraints, underreaction, limits-to-arbitrage, and LLMs.
arXiv Detail & Related papers (2023-04-15T19:22:37Z)