Realised Volatility Forecasting: Machine Learning via Financial Word Embedding
- URL: http://arxiv.org/abs/2108.00480v4
- Date: Tue, 19 Nov 2024 17:33:22 GMT
- Title: Realised Volatility Forecasting: Machine Learning via Financial Word Embedding
- Authors: Eghbal Rahimikia, Stefan Zohren, Ser-Huang Poon
- Abstract summary: This study develops a financial word embedding using 15 years of business news.
As an application, we incorporate this word embedding into a simple machine learning model to enhance the HAR model for forecasting realised volatility.
- Score: 4.2193475197905705
- License:
- Abstract: This study develops a financial word embedding using 15 years of business news. Our results show that this specialised language model produces more accurate results than general word embeddings, based on a financial benchmark we established. As an application, we incorporate this word embedding into a simple machine learning model to enhance the HAR model for forecasting realised volatility. This approach statistically and economically outperforms established econometric models. Using an explainable AI method, we also identify key phrases in business news that contribute significantly to volatility, offering insights into language patterns tied to market dynamics.
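For context, the HAR benchmark mentioned above is the standard HAR-RV regression (Corsi, 2009), which forecasts next-day realised volatility RV_{t+1} from the current daily value, its 5-day (weekly) mean, and its 22-day (monthly) mean. The sketch below is a minimal illustration of the kind of pipeline the abstract describes, not the authors' implementation: the word2vec embedding, the daily averaging of news vectors, the ridge regression, and the permutation-importance step are all assumptions made for the example.

```python
# Minimal sketch, assuming a word2vec-style financial embedding (gensim) and a
# ridge regression as the "simple machine learning model"; the paper does not
# commit to these exact choices, so every hyper-parameter here is illustrative.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import Ridge
from sklearn.inspection import permutation_importance

def train_financial_embedding(news_sentences, dim=300):
    """news_sentences: list of token lists drawn from ~15 years of business news."""
    return Word2Vec(sentences=news_sentences, vector_size=dim,
                    window=5, min_count=5, sg=1, workers=4)

def daily_news_vector(model, day_tokens, dim=300):
    """Average the embeddings of all tokens published on one trading day."""
    vecs = [model.wv[t] for t in day_tokens if t in model.wv.key_to_index]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def har_lags(rv):
    """HAR-RV regressors: daily RV, 5-day (weekly) mean, 22-day (monthly) mean."""
    rv = np.asarray(rv, dtype=float)
    rows = [[rv[t], rv[t - 4:t + 1].mean(), rv[t - 21:t + 1].mean()]
            for t in range(22, len(rv) - 1)]
    return np.array(rows)

def fit_har_news(rv, daily_vectors, alpha=1.0):
    """Append the daily news vector to the HAR lags and fit a ridge regression."""
    X_har = har_lags(rv)
    X_news = np.asarray(daily_vectors)[22:len(rv) - 1]   # align with HAR rows
    X = np.hstack([X_har, X_news])
    y = np.asarray(rv, dtype=float)[23:]                 # next-day realised volatility
    model = Ridge(alpha=alpha).fit(X, y)
    # Rough stand-in for the explainability step: permutation importance over the
    # input columns (the paper uses a dedicated explainable-AI method to surface
    # influential news phrases; this is only a coarse analogue at the feature level).
    importance = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    return model, importance.importances_mean
```

Dropping the news columns from `fit_har_news` recovers a plain HAR-style regression, which mirrors the kind of baseline comparison the abstract reports against established econometric models.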
Related papers
- Explainable Artificial Intelligence for identifying profitability predictors in Financial Statements [0.7067443325368975]
We apply Machine Learning techniques to raw financial statements data taken from AIDA, a database of Italian listed companies covering 2013 to 2022.
We present a comparative study of different models and, following the European AI regulations, complement the analysis by applying explainability techniques to the proposed models.
arXiv Detail & Related papers (2025-01-29T14:33:23Z)
- BreakGPT: Leveraging Large Language Models for Predicting Asset Price Surges [55.2480439325792]
This paper introduces BreakGPT, a novel large language model (LLM) architecture adapted specifically for time series forecasting and the prediction of sharp upward movements in asset prices.
We showcase BreakGPT as a promising solution for financial forecasting with minimal training and as a strong competitor for capturing both local and global temporal dependencies.
arXiv Detail & Related papers (2024-11-09T05:40:32Z)
- Harnessing Earnings Reports for Stock Predictions: A QLoRA-Enhanced LLM Approach [6.112119533910774]
This paper introduces an advanced approach that employs Large Language Models (LLMs) fine-tuned with a novel combination of instruction-based techniques and quantized low-rank adaptation (QLoRA) compression.
Our methodology integrates 'base factors', such as financial metric growth and earnings transcripts, with 'external factors', including recent market index performance and analyst grades, to create a rich, supervised dataset.
This study not only demonstrates the power of integrating cutting-edge AI with fine-tuned financial data but also paves the way for future research in enhancing AI-driven financial analysis tools.
arXiv Detail & Related papers (2024-08-13T04:53:31Z)
- Distilled ChatGPT Topic & Sentiment Modeling with Applications in Finance [0.0]
ChatGPT is utilized to create streamlined models that generate easily interpretable features.
We detail a training approach that merges knowledge distillation and transfer learning, resulting in lightweight topic and sentiment classification models.
arXiv Detail & Related papers (2024-03-04T16:27:21Z)
- FinGPT: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets [9.714447724811842]
This paper introduces a distinctive approach anchored in the Instruction Tuning paradigm for open-source large language models.
We capitalize on the interoperability of open-source models, ensuring a seamless and transparent integration.
The paper presents a benchmarking scheme designed for end-to-end training and testing, employing a cost-effective progression.
arXiv Detail & Related papers (2023-10-07T12:52:58Z)
- RAVEN: In-Context Learning with Retrieval-Augmented Encoder-Decoder Language Models [57.12888828853409]
RAVEN is a model that combines retrieval-augmented masked language modeling and prefix language modeling.
Fusion-in-Context Learning enables the model to leverage more in-context examples without requiring additional training.
Our work underscores the potential of retrieval-augmented encoder-decoder language models for in-context learning.
arXiv Detail & Related papers (2023-08-15T17:59:18Z)
- Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models [51.3422222472898]
We document the capability of large language models (LLMs) like ChatGPT to predict stock price movements using news headlines.
We develop a theoretical model incorporating information capacity constraints, underreaction, limits-to-arbitrage, and LLMs.
arXiv Detail & Related papers (2023-04-15T19:22:37Z)
- Scaling Vision-Language Models with Sparse Mixture of Experts [128.0882767889029]
We show that mixture-of-experts (MoE) techniques can achieve state-of-the-art performance on a range of benchmarks over dense models of equivalent computational cost.
Our research offers valuable insights into stabilizing the training of MoE models, understanding the impact of MoE on model interpretability, and balancing the trade-offs between compute and performance when scaling vision-language models.
arXiv Detail & Related papers (2023-03-13T16:00:31Z)
- Better Language Model with Hypernym Class Prediction [101.8517004687825]
Class-based language models (LMs) have long been devised to address context sparsity in $n$-gram LMs.
In this study, we revisit this approach in the context of neural LMs.
arXiv Detail & Related papers (2022-03-21T01:16:44Z)
- Stock Embeddings: Learning Distributed Representations for Financial Assets [11.67728795230542]
We propose a neural model for training stock embeddings, which harnesses the dynamics of historical returns data.
We describe our approach in detail and discuss a number of ways that it can be used in the financial domain.
arXiv Detail & Related papers (2022-02-14T15:39:06Z)
- Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little [74.49773960145681]
A possible explanation for the impressive performance of masked language model (MLM) pre-training is that such models have learned to represent the syntactic structures prevalent in NLP pipelines.
In this paper, we propose a different explanation: pre-trained MLMs succeed on downstream tasks almost entirely due to their ability to model higher-order word co-occurrence statistics.
Our results show that purely distributional information largely explains the success of pre-training, and underscore the importance of curating challenging evaluation datasets that require deeper linguistic knowledge.
arXiv Detail & Related papers (2021-04-14T06:30:36Z)