Natural Language Processing and Multimodal Stock Price Prediction
- URL: http://arxiv.org/abs/2401.01487v1
- Date: Wed, 3 Jan 2024 01:21:30 GMT
- Title: Natural Language Processing and Multimodal Stock Price Prediction
- Authors: Kevin Taylor and Jerry Ng
- Abstract summary: This paper utilizes stock percentage change as training data, in contrast to the traditional use of raw currency values.
The choice of percentage change aims to provide models with context regarding the significance of price fluctuations.
The study employs specialized BERT natural language processing models to predict stock price trends.
- Score: 0.8702432681310401
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the realm of financial decision-making, predicting stock prices is
pivotal. Artificial intelligence techniques such as long short-term memory
networks (LSTMs), support-vector machines (SVMs), and natural language
processing (NLP) models are commonly employed to predict said prices. This
paper utilizes stock percentage change as training data, in contrast to the
traditional use of raw currency values, with a focus on analyzing publicly
released news articles. The choice of percentage change aims to provide models
with context regarding the significance of price fluctuations and overall price
change impact on a given stock. The study employs specialized BERT natural
language processing models to predict stock price trends, with a particular
emphasis on various data modalities. The results demonstrate that these
strategies, even with a small natural language processing model, can accurately
predict overall stock trends, and they highlight the effectiveness of certain
data features and sector-specific data.
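As a rough illustration (not the authors' exact pipeline), the sketch below shows one common way to convert raw closing prices into the day-over-day percentage-change values the abstract describes as training data; the column name and sample prices are illustrative assumptions.

```python
import pandas as pd

def to_percent_change(prices: pd.DataFrame, price_col: str = "close") -> pd.DataFrame:
    """Convert raw closing prices into day-over-day percentage changes.

    Percentage change is scale-free: a $1 move on a $10 stock is a very
    different signal from a $1 move on a $1,000 stock, which is the context
    the abstract argues raw currency values fail to provide.
    """
    out = prices.copy()
    out["pct_change"] = out[price_col].pct_change() * 100.0  # percent, not fraction
    return out.dropna(subset=["pct_change"])  # first row has no prior price

# Illustrative daily closes for a single ticker
df = pd.DataFrame({"close": [100.0, 101.5, 100.8, 103.0]})
print(to_percent_change(df)["pct_change"].round(2).tolist())  # [1.5, -0.69, 2.18]
```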
Related papers
- Using Large Language Models for Expert Prior Elicitation in Predictive Modelling [53.54623137152208]
This study proposes using large language models (LLMs) to elicit expert prior distributions for predictive models.
We compare LLM-elicited and uninformative priors, evaluate whether LLMs truthfully generate parameter distributions, and propose a model selection strategy for in-context learning and prior elicitation.
Our findings show that LLM-elicited prior parameter distributions significantly reduce predictive error compared to uninformative priors in low-data settings.
arXiv Detail & Related papers (2024-11-26T10:13:39Z)
- BreakGPT: Leveraging Large Language Models for Predicting Asset Price Surges [55.2480439325792]
This paper introduces BreakGPT, a novel large language model (LLM) architecture adapted specifically for time series forecasting and the prediction of sharp upward movements in asset prices.
We showcase BreakGPT as a promising solution for financial forecasting with minimal training and as a strong competitor for capturing both local and global temporal dependencies.
arXiv Detail & Related papers (2024-11-09T05:40:32Z)
- Context is Key: A Benchmark for Forecasting with Essential Textual Information [87.3175915185287]
"Context is Key" (CiK) is a time series forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context.
We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters.
Our experiments highlight the importance of incorporating contextual information, demonstrate surprising performance when using LLM-based forecasting models, and also reveal some of their critical shortcomings.
arXiv Detail & Related papers (2024-10-24T17:56:08Z)
- News-Driven Stock Price Forecasting in Indian Markets: A Comparative Study of Advanced Deep Learning Models [0.0]
Recent advancements in artificial intelligence (AI) and natural language processing (NLP) have significantly enhanced stock price prediction capabilities.
We leverage 30 years of historical data from national banks in India, sourced from the National Stock Exchange, to forecast stock prices.
Our approach utilizes state-of-the-art deep learning models, including multivariate multi-step Long Short-Term Memory (LSTM), Facebook Prophet with LightGBM optimized through Optuna, and Seasonal Auto-Regressive Integrated Moving Average (SARIMA).
arXiv Detail & Related papers (2024-10-14T15:30:06Z)
- Harnessing Earnings Reports for Stock Predictions: A QLoRA-Enhanced LLM Approach [6.112119533910774]
This paper introduces an advanced approach that fine-tunes Large Language Models (LLMs) with a novel combination of instruction-based techniques and quantized low-rank adaptation (QLoRA) compression.
Our methodology integrates 'base factors', such as financial metric growth and earnings transcripts, with 'external factors', including recent market indices performances and analyst grades, to create a rich, supervised dataset.
This study not only demonstrates the power of integrating cutting-edge AI with fine-tuned financial data but also paves the way for future research in enhancing AI-driven financial analysis tools.
arXiv Detail & Related papers (2024-08-13T04:53:31Z)
- American Option Pricing using Self-Attention GRU and Shapley Value Interpretation [0.0]
We propose a machine learning method for forecasting the prices of SPY (ETF) options based on a gated recurrent unit (GRU) and a self-attention mechanism.
We built four different machine learning models, including multilayer perceptron (MLP), long short-term memory (LSTM), self-attention LSTM, and self-attention GRU.
arXiv Detail & Related papers (2023-10-19T06:05:46Z)
- Predicting Financial Market Trends using Time Series Analysis and Natural Language Processing [0.0]
This study assessed the viability of Twitter sentiment as a tool for predicting the stock prices of major corporations such as Tesla and Apple.
Our findings indicate that positivity, negativity, and subjectivity are the primary determinants of fluctuations in stock prices.
arXiv Detail & Related papers (2023-08-31T21:20:58Z)
- Diffusion Variational Autoencoder for Tackling Stochasticity in Multi-Step Regression Stock Price Prediction [54.21695754082441]
Multi-step stock price prediction over a long-term horizon is crucial for forecasting its volatility.
Current solutions to multi-step stock price prediction are mostly designed for single-step, classification-based predictions.
We combine a deep hierarchical variational-autoencoder (VAE) and diffusion probabilistic techniques to do seq2seq stock prediction.
Our model is shown to outperform state-of-the-art solutions in terms of its prediction accuracy and variance.
arXiv Detail & Related papers (2023-08-18T16:21:15Z)
- Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models [51.3422222472898]
We document the capability of large language models (LLMs) like ChatGPT to predict stock price movements using news headlines.
We develop a theoretical model incorporating information capacity constraints, underreaction, limits-to-arbitrage, and LLMs.
arXiv Detail & Related papers (2023-04-15T19:22:37Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- Forecasting Cryptocurrency Returns from Sentiment Signals: An Analysis of BERT Classifiers and Weak Supervision [6.624726878647541]
We introduce weak supervision, a recently proposed NLP approach that addresses the problem that text data is unlabeled.
We confirm that finetuning using weak labels enhances the predictive value of text-based features and raises forecast accuracy in the context of predicting cryptocurrency returns.
More fundamentally, the modeling paradigm we present, weak-labeling domain-specific text and fine-tuning pretrained NLP models, is universally applicable in (financial) forecasting (a toy sketch of the weak-labeling step follows this list).
arXiv Detail & Related papers (2022-04-06T07:45:05Z)
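For the weak-supervision entry above, here is a toy, hedged sketch of what a weak-labeling step could look like: heuristic keyword rules assign noisy sentiment labels to unlabeled financial headlines, and the resulting pairs would then serve as training targets for fine-tuning a pretrained BERT model. The lexicons, labels, and headlines are illustrative assumptions, not the paper's actual labeling functions.

```python
from dataclasses import dataclass

# Illustrative keyword lexicons; a real weak-supervision pipeline would use
# richer labeling functions (regexes, larger lexicons, off-the-shelf sentiment models).
POSITIVE = {"beats", "surge", "rally", "record", "upgrade", "bullish"}
NEGATIVE = {"miss", "plunge", "selloff", "downgrade", "bearish", "fraud"}

@dataclass
class WeakLabel:
    text: str
    label: int  # +1 positive, -1 negative, 0 abstain

def weak_label(headline: str) -> WeakLabel:
    """Assign a noisy sentiment label from keyword hits; abstain on ties or no hits."""
    tokens = set(headline.lower().split())
    pos, neg = len(tokens & POSITIVE), len(tokens & NEGATIVE)
    if pos > neg:
        return WeakLabel(headline, +1)
    if neg > pos:
        return WeakLabel(headline, -1)
    return WeakLabel(headline, 0)  # abstain; dropped before fine-tuning

# Illustrative unlabeled headlines
headlines = [
    "Exchange volumes surge to a new record",
    "Regulators probe alleged fraud at major exchange",
]
weakly_labeled = [wl for wl in map(weak_label, headlines) if wl.label != 0]
for wl in weakly_labeled:
    print(wl.label, wl.text)  # these (text, label) pairs would feed BERT fine-tuning
```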
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.