Multi-Transformer: A New Neural Network-Based Architecture for
Forecasting S&P Volatility
- URL: http://arxiv.org/abs/2109.12621v1
- Date: Sun, 26 Sep 2021 14:47:04 GMT
- Title: Multi-Transformer: A New Neural Network-Based Architecture for
Forecasting S&P Volatility
- Authors: Eduardo Ramos-Pérez, Pablo J. Alonso-González, José Javier Núñez-Velázquez
- Abstract summary: This paper proposes more accurate stock volatility models based on machine and deep learning techniques.
This paper introduces a neural network-based architecture, called Multi-Transformer.
The paper also adapts traditional Transformer layers for use in volatility forecasting models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Events such as the Financial Crisis of 2007-2008 or the COVID-19 pandemic
caused significant losses to banks and insurance entities. They also
demonstrated the importance of using accurate equity risk models and having a
risk management function able to implement effective hedging strategies. Stock
volatility forecasts play a key role in the estimation of equity risk and,
thus, in the management actions carried out by financial institutions.
Therefore, this paper has the aim of proposing more accurate stock volatility
models based on novel machine and deep learning techniques. This paper
introduces a neural network-based architecture, called Multi-Transformer.
Multi-Transformer is a variant of Transformer models, which have already been
successfully applied in the field of natural language processing. Indeed, this
paper also adapts traditional Transformer layers so that they can be used in
volatility forecasting models. The empirical results obtained in this paper
suggest that the hybrid models based on Multi-Transformer and Transformer
layers are more accurate and, hence, they lead to more appropriate risk
measures than other autoregressive algorithms or hybrid models based on
feed-forward layers or long short-term memory cells.
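The abstract above centers on attention-based models for volatility forecasting. As a rough illustration only (a hypothetical sketch, not the authors' Multi-Transformer architecture), the scaled dot-product attention at the heart of any Transformer layer can be applied to a window of lagged-return features like this:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V

rng = np.random.default_rng(0)
T, d = 20, 8                        # hypothetical: 20 time steps, 8 features each
X = rng.normal(size=(T, d))         # stand-in for lagged-return features
Wq, Wk, Wv = (0.1 * rng.normal(size=(d, d)) for _ in range(3))
out = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)  # (20, 8)
```

A real volatility model would stack such layers and train the projection matrices on historical returns; the dimensions and inputs here are placeholders.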
Related papers
- Leveraging Convolutional Neural Network-Transformer Synergy for Predictive Modeling in Risk-Based Applications [5.914777314371152]
This paper proposes a deep learning model based on the combination of convolutional neural networks (CNN) and Transformer for credit user default prediction.
The results show that the CNN+Transformer model outperforms traditional machine learning models, such as random forests and XGBoost.
This study provides a new idea for credit default prediction and provides strong support for risk assessment and intelligent decision-making in the financial field.
arXiv Detail & Related papers (2024-12-24T07:07:14Z)
- STORM: A Spatio-Temporal Factor Model Based on Dual Vector Quantized Variational Autoencoders for Financial Trading [55.02735046724146]
In financial trading, factor models are widely used to price assets and capture excess returns from mispricing.
We propose a Spatio-Temporal factOR Model based on dual vector quantized variational autoencoders, named STORM.
STORM extracts features of stocks from temporal and spatial perspectives, then fuses and aligns these features at the fine-grained and semantic levels, and represents the factors as multi-dimensional embeddings.
arXiv Detail & Related papers (2024-12-12T17:15:49Z)
- Advanced Risk Prediction and Stability Assessment of Banks Using Time Series Transformer Models [10.79035001851989]
This paper proposes a prediction framework based on the Time Series Transformer model.
We compare the model with LSTM, GRU, CNN, TCN and RNN-Transformer models.
The experimental results show that the Time Series Transformer model outperforms other models in both mean square error (MSE) and mean absolute error (MAE) evaluation indicators.
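MSE and MAE, the two metrics used above, are straightforward to compute; a minimal example with made-up numbers:

```python
import numpy as np

y_true = np.array([0.12, 0.15, 0.10, 0.18])   # hypothetical realized values
y_pred = np.array([0.11, 0.16, 0.12, 0.17])   # hypothetical forecasts

mse = np.mean((y_true - y_pred) ** 2)          # mean square error
mae = np.mean(np.abs(y_true - y_pred))         # mean absolute error
print(round(mse, 6), round(mae, 6))            # 0.000175 0.0125
```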
arXiv Detail & Related papers (2024-12-04T08:15:27Z)
- Efficient Adaptation of Pre-trained Vision Transformer via Householder Transformation [53.88562288388169]
A common strategy for Parameter-Efficient Fine-Tuning (PEFT) of pre-trained Vision Transformers (ViTs) involves adapting the model to downstream tasks.
We propose a novel PEFT approach inspired by Singular Value Decomposition (SVD) for representing the adaptation matrix.
SVD decomposes a matrix into the product of a left unitary matrix, a diagonal matrix of scaling values, and a right unitary matrix.
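The SVD factorization described above can be checked directly with NumPy (the matrix here is arbitrary, chosen only to illustrate):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])
# A = U @ diag(S) @ Vt, where U and Vt have orthonormal columns/rows
# and S holds the non-negative scaling (singular) values.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
recon = U @ np.diag(S) @ Vt
print(np.allclose(A, recon))  # True
```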
arXiv Detail & Related papers (2024-10-30T12:08:30Z)
- Enhancing Actuarial Non-Life Pricing Models via Transformers [0.0]
We build on the foundation laid out by the combined actuarial neural network as well as the localGLMnet and enhance those models via the feature tokenizer transformer.
The paper shows that the new methods can achieve better results than the benchmark models while preserving certain generalized linear model advantages.
arXiv Detail & Related papers (2023-11-10T12:06:23Z)
- Differential Evolution Algorithm based Hyper-Parameters Selection of Transformer Neural Network Model for Load Forecasting [0.0]
Transformer models have the potential to improve load forecasting because of their ability to learn long-range dependencies via their attention mechanism.
Our work compares the proposed Transformer-based neural network model, integrated with different metaheuristic algorithms, by its performance in load forecasting on numerical metrics such as Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE).
arXiv Detail & Related papers (2023-07-28T04:29:53Z)
- Emergent Agentic Transformer from Chain of Hindsight Experience [96.56164427726203]
We show that a simple transformer-based model performs competitively with both temporal-difference and imitation-learning-based approaches.
This is the first time a transformer-based model has performed competitively with both classes of approaches.
arXiv Detail & Related papers (2023-05-26T00:43:02Z)
- Full Stack Optimization of Transformer Inference: a Survey [58.55475772110702]
Transformer models achieve superior accuracy across a wide range of applications.
The amount of compute and bandwidth required for inference of recent Transformer models is growing at a significant rate.
There has been an increased focus on making Transformer models more efficient.
arXiv Detail & Related papers (2023-02-27T18:18:13Z)
- Forecasting High-Dimensional Covariance Matrices of Asset Returns with Hybrid GARCH-LSTMs [0.0]
This paper investigates the ability of hybrid models, mixing GARCH processes and neural networks, to forecast covariance matrices of asset returns.
The proposed model is very promising, as it not only outperforms the equally weighted portfolio but also beats its econometric counterpart by a significant margin.
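Hybrid GARCH-neural-network models of this kind typically start from a GARCH-type recursion for the conditional variance. A minimal GARCH(1,1) sketch (parameters and returns are made up; the paper's actual hybrid is more involved):

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    # GARCH(1,1): sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
    sigma2 = np.empty(len(returns))
    sigma2[0] = omega / (1.0 - alpha - beta)    # start at unconditional variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(1)
r = rng.normal(scale=0.01, size=250)            # hypothetical daily returns
sig2 = garch11_variance(r, omega=1e-6, alpha=0.08, beta=0.90)
print(sig2.shape)  # (250,)
```

In a GARCH-LSTM hybrid, series such as `sig2` (or the fitted GARCH parameters and residuals) are typically fed to the network as additional features.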
arXiv Detail & Related papers (2021-08-25T23:41:43Z)
- Decision Transformer: Reinforcement Learning via Sequence Modeling [102.86873656751489]
We present a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem.
We present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling.
Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks.
arXiv Detail & Related papers (2021-06-02T17:53:39Z)
- Bayesian Transformer Language Models for Speech Recognition [59.235405107295655]
State-of-the-art neural language models (LMs) represented by Transformers are highly complex.
This paper proposes a full Bayesian learning framework for Transformer LM estimation.
arXiv Detail & Related papers (2021-02-09T10:55:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.