Bilinear Input Normalization for Neural Networks in Financial
Forecasting
- URL: http://arxiv.org/abs/2109.00983v1
- Date: Wed, 1 Sep 2021 07:52:03 GMT
- Title: Bilinear Input Normalization for Neural Networks in Financial
Forecasting
- Authors: Dat Thanh Tran, Juho Kanniainen, Moncef Gabbouj, Alexandros Iosifidis
- Abstract summary: We propose a novel data-driven normalization method for deep neural networks that handle high-frequency financial time-series.
The proposed normalization scheme takes into account the bimodal characteristic of financial time-series.
Our experiments, conducted with state-of-the-art neural networks and high-frequency data, show significant improvements over other normalization techniques.
- Score: 101.89872650510074
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data normalization is one of the most important preprocessing steps when
building a machine learning model, especially when the model of interest is a
deep neural network. This is because deep neural networks optimized with
stochastic gradient descent are sensitive to the input variable range and prone
to numerical issues. Unlike other types of signals, financial
time-series often exhibit unique characteristics such as high volatility,
non-stationarity and multi-modality that make them challenging to work with,
often requiring expert domain knowledge for devising a suitable processing
pipeline. In this paper, we propose a novel data-driven normalization method
for deep neural networks that handle high-frequency financial time-series. The
proposed normalization scheme, which takes into account the bimodal
characteristic of financial multivariate time-series, requires no expert
knowledge to preprocess a financial time-series since this step is formulated
as part of the end-to-end optimization process. Our experiments, conducted with
state-of-the-art neural networks and high-frequency data from two large-scale
limit order books from the Nordic and US markets, show significant
improvements over other normalization techniques in forecasting future stock
price dynamics.
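The abstract frames normalization as a learnable module trained end-to-end rather than a fixed preprocessing step. A minimal sketch of that general idea is given below; the class name AdaptiveNorm, the purely per-feature shift-and-scale form, and the toy shapes are illustrative assumptions, not the paper's bilinear formulation.

```python
import torch
import torch.nn as nn

class AdaptiveNorm(nn.Module):
    """Illustrative learnable normalization layer (not the paper's exact bilinear scheme).

    Input: a batch of multivariate time-series of shape (batch, features, time).
    The shift and scale start from per-sample z-score statistics but are adjusted
    by learnable weights, so normalization is optimized end-to-end with the model.
    """
    def __init__(self, num_features: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.shift_weight = nn.Parameter(torch.eye(num_features))
        self.scale_weight = nn.Parameter(torch.eye(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mean = x.mean(dim=2)                       # (batch, features)
        std = x.std(dim=2) + self.eps              # (batch, features)
        # Let the network adapt the statistics before applying them.
        shift = (mean @ self.shift_weight).unsqueeze(2)
        scale = (std @ self.scale_weight).unsqueeze(2)
        return (x - shift) / (scale + self.eps)

# The layer sits in front of the forecasting model, so its parameters receive
# gradients from the forecasting loss instead of being hand-tuned.
model = nn.Sequential(AdaptiveNorm(num_features=40), nn.Flatten(), nn.Linear(40 * 100, 3))
logits = model(torch.randn(8, 40, 100))            # 8 samples, 40 features, 100 time steps
```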
Related papers
- Temporal Convolution Derived Multi-Layered Reservoir Computing [5.261277318790788]
We propose a new mapping of input data into the reservoir's state space.
We incorporate this method in two novel network architectures, increasing the parallelizability, depth, and predictive capabilities of the neural network.
For the chaotic time series, we observe an error reduction of up to 85.45% compared to Echo State Networks and 90.72% compared to Gated Recurrent Units.
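The summary does not detail the proposed state-space mapping, so purely as background on reservoir computing, the sketch below shows a generic echo state network; the reservoir size, spectral-radius scaling, and ridge readout are textbook defaults rather than this paper's architectures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic echo state network: a fixed random reservoir plus a trained linear readout.
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # keep the spectral radius below 1

def run_reservoir(u):
    """Map an input sequence u of shape (T, n_in) into reservoir states of shape (T, n_res)."""
    states, x = np.zeros((len(u), n_res)), np.zeros(n_res)
    for t, u_t in enumerate(u):
        x = np.tanh(W_in @ u_t + W @ x)
        states[t] = x
    return states

# Only the readout is trained (ridge regression), here to predict the next value.
u = np.sin(np.linspace(0, 20 * np.pi, 1000))[:, None]
S, y = run_reservoir(u[:-1]), u[1:, 0]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)
prediction = S @ W_out
```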
arXiv Detail & Related papers (2024-07-09T11:40:46Z)
- Extended Deep Adaptive Input Normalization for Preprocessing Time Series Data for Neural Networks [0.0]
We propose the EDAIN layer, a novel adaptive neural layer that learns how to appropriately normalize irregular time series data for a given task in an end-to-end fashion.
Our experiments, conducted using synthetic data, a credit default prediction dataset, and a large-scale limit order book benchmark dataset, demonstrate the superior performance of the EDAIN layer.
arXiv Detail & Related papers (2023-10-23T08:56:01Z)
- Online Evolutionary Neural Architecture Search for Multivariate Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z)
- Augmented Bilinear Network for Incremental Multi-Stock Time-Series Classification [83.23129279407271]
We propose a method to efficiently retain the knowledge available in a neural network pre-trained on a set of securities.
In our method, the prior knowledge encoded in a pre-trained neural network is maintained by keeping existing connections fixed.
This knowledge is adjusted for the new securities by a set of augmented connections, which are optimized using the new data.
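The mechanism described above, frozen pre-trained connections plus augmented connections trained on the new securities, can be sketched as follows; the additive-branch layout and names are assumptions for illustration rather than the paper's exact augmentation.

```python
import torch
import torch.nn as nn

class AugmentedLinear(nn.Module):
    """Keeps a pre-trained layer frozen and learns an additive branch on the new data."""
    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        self.base = pretrained
        for p in self.base.parameters():
            p.requires_grad = False                  # prior knowledge stays fixed
        # Augmented connections, initialized to zero so training starts from the old model.
        self.aug = nn.Linear(pretrained.in_features, pretrained.out_features)
        nn.init.zeros_(self.aug.weight)
        nn.init.zeros_(self.aug.bias)

    def forward(self, x):
        return self.base(x) + self.aug(x)

# Only the augmented connections are handed to the optimizer when
# fine-tuning on data from the newly added securities.
layer = AugmentedLinear(nn.Linear(64, 3))            # nn.Linear(64, 3) stands in for an old layer
optimizer = torch.optim.Adam([p for p in layer.parameters() if p.requires_grad], lr=1e-3)
```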
arXiv Detail & Related papers (2022-07-23T18:54:10Z)
- Bayesian Bilinear Neural Network for Predicting the Mid-price Dynamics in Limit-Order Book Markets [84.90242084523565]
Traditional time-series econometric methods often appear incapable of capturing the true complexity of the multi-level interactions driving the price dynamics.
By adopting a state-of-the-art second-order optimization algorithm, we train a Bayesian bilinear neural network with temporal attention.
Using predictive distributions to analyze the errors and uncertainties associated with the estimated parameters and model forecasts, we thoroughly compare our Bayesian model with traditional ML alternatives.
arXiv Detail & Related papers (2022-03-07T18:59:54Z)
- Multi-head Temporal Attention-Augmented Bilinear Network for Financial time series prediction [77.57991021445959]
We propose a neural layer based on the ideas of temporal attention and multi-head attention to extend the capability of the underlying neural network.
The effectiveness of our approach is validated using large-scale limit-order book market data.
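A single-head sketch of a temporal-attention bilinear layer is shown below to make the construction concrete; the exact parameterization, constraints, and the multi-head combination used in the paper may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAttentionBilinear(nn.Module):
    """Sketch of a temporal-attention bilinear layer for inputs of shape (batch, D, T):
    one projection acts on the feature axis, one on the time axis, and a softmax
    over time re-weights the time steps in between."""
    def __init__(self, d_in, t_in, d_out, t_out):
        super().__init__()
        self.W1 = nn.Parameter(torch.randn(d_out, d_in) * 0.1)   # feature-axis projection
        self.Wa = nn.Parameter(torch.randn(t_in, t_in) * 0.1)    # attention over time steps
        self.W2 = nn.Parameter(torch.randn(t_in, t_out) * 0.1)   # time-axis projection
        self.bias = nn.Parameter(torch.zeros(d_out, t_out))
        self.lam = nn.Parameter(torch.tensor(0.5))                # mixes attended and plain features

    def forward(self, x):                                         # x: (batch, d_in, t_in)
        xb = self.W1 @ x                                          # (batch, d_out, t_in)
        attn = F.softmax(xb @ self.Wa, dim=-1)                    # attention weights over time
        mixed = self.lam * xb * attn + (1.0 - self.lam) * xb
        return torch.relu(mixed @ self.W2 + self.bias)            # (batch, d_out, t_out)

# A multi-head variant would run several attention maps like this in parallel and combine them.
layer = TemporalAttentionBilinear(d_in=40, t_in=10, d_out=60, t_out=1)
out = layer(torch.randn(32, 40, 10))                              # (32, 60, 1)
```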
arXiv Detail & Related papers (2022-01-14T14:02:19Z)
- Multivariate Anomaly Detection based on Prediction Intervals Constructed using Deep Learning [0.0]
We benchmark our approach against well-established statistical models that are often preferred in practice.
We focus on three deep learning architectures, namely, cascaded neural networks, reservoir computing and long short-term memory recurrent neural networks.
arXiv Detail & Related papers (2021-10-07T12:34:31Z)
- Low-Rank Temporal Attention-Augmented Bilinear Network for financial time-series forecasting [93.73198973454944]
Deep learning models have led to significant performance improvements in many problems across different domains, including the prediction of financial time-series data.
The Temporal Attention-Augmented Bilinear network was recently proposed as an efficient and high-performing model for Limit Order Book time-series forecasting.
In this paper, we propose a low-rank tensor approximation of the model to further reduce the number of trainable parameters and increase its speed.
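In its simplest matrix form, the low-rank idea replaces a dense time-axis weight with two thin factors, cutting the trainable parameters from roughly T^2 to 2rT; the rank and shapes below are placeholders, not the paper's tensor decomposition.

```python
import torch
import torch.nn as nn

d_in, t_in, rank = 40, 100, 4

# Dense mapping over the time axis: t_in * t_in trainable parameters.
dense = nn.Parameter(torch.randn(t_in, t_in) * 0.1)

# Low-rank approximation: two thin factors with only 2 * rank * t_in parameters.
U = nn.Parameter(torch.randn(t_in, rank) * 0.1)
V = nn.Parameter(torch.randn(rank, t_in) * 0.1)

x = torch.randn(8, d_in, t_in)
y_dense = x @ dense            # (8, d_in, t_in)
y_lowrank = (x @ U) @ V        # same output shape, far fewer trainable parameters
```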
arXiv Detail & Related papers (2021-07-05T10:15:23Z)
- Improved Predictive Deep Temporal Neural Networks with Trend Filtering [22.352437268596674]
We propose a new prediction framework based on deep neural networks and trend filtering.
We reveal that the predictive performance of deep temporal neural networks improves when the training data is temporally processed by trend filtering.
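As a rough illustration of trend-filtering the training series before it reaches the network, the sketch below applies Hodrick-Prescott-style quadratic smoothing; the paper's actual trend filter and penalty value may differ.

```python
import numpy as np

def hp_trend(y: np.ndarray, lam: float = 1600.0) -> np.ndarray:
    """Extract a smooth trend from series y by solving (I + lam * D'D) x = y,
    where D is the second-difference operator (Hodrick-Prescott-style smoothing)."""
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)

# The smoothed (or trend-adjusted) series is what gets fed to the temporal network.
raw = np.cumsum(np.random.default_rng(0).normal(size=500))
trend = hp_trend(raw, lam=1600.0)
residual = raw - trend          # e.g., train on trend and residual components separately
```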
arXiv Detail & Related papers (2020-10-16T08:29:36Z)