Learning low-frequency temporal patterns for quantitative trading
- URL: http://arxiv.org/abs/2008.09481v1
- Date: Wed, 12 Aug 2020 11:59:15 GMT
- Title: Learning low-frequency temporal patterns for quantitative trading
- Authors: Joel da Costa, Tim Gebbie
- Abstract summary: We consider the viability of a modularised online machine learning framework to learn signals in low-frequency financial time series data.
The framework is demonstrated on daily sampled closing time-series data from JSE equity markets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the viability of a modularised mechanistic online machine
learning framework to learn signals in low-frequency financial time series
data. The framework is demonstrated on daily sampled closing time-series data from
JSE equity markets. The input patterns are vectors of pre-processed sequences
of daily, weekly and monthly or quarterly sampled feature changes. The data
processing is split into a batch processed step where features are learnt using
a stacked autoencoder via unsupervised learning, and then both batch and online
supervised learning are carried out using these learnt features, with the
output being a point prediction of measured time-series feature fluctuations.
Weight initializations are implemented with restricted Boltzmann machine
pre-training and variance-based initializations. Historical simulations are
then run using an online feedforward neural network initialised with the
weights from the batch training and validation step. The validity of the
results is assessed under a rigorous treatment of backtest overfitting, using
both combinatorially symmetric cross-validation and the probabilistic and
deflated Sharpe ratios. The results are used to develop a view on the
phenomenology of financial markets and on the value of complex historical data
analysis for trading under the unstable adaptive dynamics that characterise
financial markets.
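As a concrete illustration of the pipeline in the abstract, the sketch below implements a stripped-down version in PyTorch: a single-layer autoencoder learnt in the batch unsupervised step (the paper stacks several layers and adds restricted Boltzmann machine pre-training), a supervised batch step on the learnt features, and a per-observation online update of the kind used in the historical simulations. Layer sizes, optimisers and learning rates are illustrative assumptions, not the authors' configuration.

```python
# Minimal batch-then-online sketch of the framework described in the
# abstract. Dimensions and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

n_features, n_hidden = 20, 8                    # assumed sizes
encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Tanh())
decoder = nn.Linear(n_hidden, n_features)
head = nn.Linear(n_hidden, 1)                   # point prediction of feature fluctuations
mse = nn.MSELoss()

def pretrain_autoencoder(X, epochs=50, lr=1e-3):
    """Batch unsupervised step: learn features by reconstruction."""
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        mse(decoder(encoder(X)), X).backward()
        opt.step()

def batch_supervised(X, y, epochs=50, lr=1e-3):
    """Batch supervised step: fit the predictor on the learnt features."""
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    Z = encoder(X).detach()
    for _ in range(epochs):
        opt.zero_grad()
        mse(head(Z), y).backward()
        opt.step()

def online_step(x_t, y_t, opt):
    """Online step: predict, then update on the realised target."""
    y_hat = head(encoder(x_t))
    opt.zero_grad()
    mse(y_hat, y_t).backward()
    opt.step()
    return y_hat.detach()
```

In use, the online optimiser would be created over both the encoder and the head, so that all weights initialised from the batch step continue to adapt as new daily observations arrive.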
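The probabilistic and deflated Sharpe ratios used in the overfitting assessment follow Bailey and López de Prado; the sketch below computes both from a series of per-period returns. The trial count and the variance of Sharpe ratios across trials, needed for the deflated version, are inputs the analyst must supply.

```python
# Probabilistic and deflated Sharpe ratios (after Bailey and Lopez de
# Prado). Illustrative implementation, not the authors' code.
import numpy as np
from scipy.stats import norm, skew, kurtosis

def probabilistic_sharpe(returns, sr_benchmark=0.0):
    """P[true SR > sr_benchmark], adjusted for skewness and kurtosis."""
    r = np.asarray(returns, dtype=float)
    n = len(r)
    sr = r.mean() / r.std(ddof=1)
    g3 = skew(r)
    g4 = kurtosis(r, fisher=False)               # non-excess kurtosis
    denom = np.sqrt(1.0 - g3 * sr + (g4 - 1.0) / 4.0 * sr**2)
    return norm.cdf((sr - sr_benchmark) * np.sqrt(n - 1) / denom)

def deflated_sharpe(returns, sr_variance, n_trials):
    """PSR against the expected maximum SR from n_trials independent trials."""
    emc = 0.5772156649015329                     # Euler-Mascheroni constant
    sr_max = np.sqrt(sr_variance) * (
        (1.0 - emc) * norm.ppf(1.0 - 1.0 / n_trials)
        + emc * norm.ppf(1.0 - 1.0 / (n_trials * np.e))
    )
    return probabilistic_sharpe(returns, sr_benchmark=sr_max)
```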
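Combinatorially symmetric cross-validation estimates the probability of backtest overfitting (PBO) by partitioning the performance history into blocks, forming every symmetric train/test split, picking the best configuration in-sample, and recording its out-of-sample relative rank; PBO is the fraction of splits in which the in-sample winner lands in the bottom half out-of-sample. The function below is an illustrative implementation over a T x N matrix of per-period strategy returns.

```python
# Combinatorially symmetric cross-validation (CSCV) estimate of the
# probability of backtest overfitting. Illustrative implementation.
import numpy as np
from itertools import combinations

def pbo(perf, n_blocks=8):
    T, N = perf.shape
    blocks = np.array_split(np.arange(T), n_blocks)     # contiguous time blocks
    logits = []
    for train_ids in combinations(range(n_blocks), n_blocks // 2):
        train = np.concatenate([blocks[i] for i in train_ids])
        test = np.concatenate([blocks[i] for i in range(n_blocks)
                               if i not in train_ids])
        def sharpe(rows):
            return perf[rows].mean(axis=0) / perf[rows].std(axis=0, ddof=1)
        best = int(np.argmax(sharpe(train)))            # in-sample winner
        sr_test = sharpe(test)
        rank = (sr_test <= sr_test[best]).sum()         # its out-of-sample rank
        omega = rank / (N + 1.0)                        # relative rank in (0, 1)
        logits.append(np.log(omega / (1.0 - omega)))
    return float(np.mean(np.array(logits) <= 0.0))      # share of splits with logit <= 0
```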
Related papers
- Scale-Invariant Learning-to-Rank
At Expedia, learning-to-rank models play a key role in sorting and presenting information more relevant to users.
A major challenge in deploying these models is ensuring consistent feature scaling between training and production data.
We introduce a scale-invariant LTR framework which combines a deep and a wide neural network to mathematically guarantee scale-invariance in the model at both training and prediction time.
We evaluate our framework in simulated real-world scenarios with injected feature-scale issues by perturbing the test set at prediction time, and show that, even with inconsistent train-test scaling, using the framework achieves better performance than a model without scale-invariance.
arXiv Detail & Related papers (2024-10-02T19:05:12Z)
- Kalman Filter for Online Classification of Non-Stationary Data
In Online Continual Learning (OCL) a learning system receives a stream of data and sequentially performs prediction and training steps.
We introduce a probabilistic Bayesian online learning model by using a neural representation and a state space model over the linear predictor weights.
In multi-class classification experiments, we demonstrate the model's predictive ability and its flexibility in capturing non-stationarity.
arXiv Detail & Related papers (2023-06-14T11:41:42Z)
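A minimal sketch of the Kalman-filter idea in the entry above, applied to the weights of a linear predictor with an assumed random-walk state-space model (the paper works with a neural representation and classification; plain regression keeps the algebra short):

```python
# Online Bayesian learning of linear predictor weights with a Kalman
# filter; the random-walk drift term models non-stationarity.
# Illustrative sketch, not the paper's full model.
import numpy as np

class KalmanLinearRegressor:
    def __init__(self, dim, obs_noise=1.0, drift=1e-4):
        self.mu = np.zeros(dim)                 # posterior mean of the weights
        self.P = np.eye(dim)                    # posterior covariance
        self.r = obs_noise                      # observation noise variance
        self.q = drift                          # random-walk drift variance

    def predict(self, x):
        return float(self.mu @ x)

    def update(self, x, y):
        self.P += self.q * np.eye(len(x))       # predict step: weights drift
        s = x @ self.P @ x + self.r             # innovation variance
        k = self.P @ x / s                      # Kalman gain
        self.mu = self.mu + k * (y - self.mu @ x)
        self.P = self.P - np.outer(k, x @ self.P)
```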
- Feature Selection with Annealing for Forecasting Financial Time Series
This study provides a comprehensive method for forecasting financial time series based on tactical input-output feature-mapping techniques using machine learning (ML) models.
Experiments indicate that the feature selection with annealing (FSA) algorithm improved the performance of the ML models regardless of problem type.
arXiv Detail & Related papers (2023-03-03T21:33:38Z)
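Feature selection with annealing alternates gradient updates with a pruning schedule that keeps a shrinking set of the largest-magnitude weights until a target count k remains. The sketch below shows the idea on a linear model with squared loss; the schedule shape, learning rate and annealing parameter mu are illustrative assumptions.

```python
# Feature Selection with Annealing on a linear model: gradient steps
# alternate with magnitude-based pruning under an annealing schedule.
import numpy as np

def fsa_linear(X, y, k, epochs=200, lr=1e-2, mu=10.0):
    n, p = X.shape
    w = np.zeros(p)
    keep = np.arange(p)                           # currently retained features
    for e in range(epochs):
        g = X[:, keep].T @ (X[:, keep] @ w[keep] - y) / n
        w[keep] -= lr * g
        # annealing schedule: how many features survive epoch e
        m_e = k + int((p - k) * max(0.0, (epochs - 2 * e) / (2 * e * mu + epochs)))
        if m_e < len(keep):
            top = np.argsort(-np.abs(w[keep]))[:m_e]
            kept = np.sort(keep[top])
            w[np.setdiff1d(keep, kept)] = 0.0     # zero out pruned weights
            keep = kept
    return w
```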
- Stabilizing Machine Learning Prediction of Dynamics: Noise and Noise-inspired Regularization
Recent work has shown that machine learning (ML) models can be trained to accurately forecast the dynamics of chaotic dynamical systems.
In the absence of mitigating techniques, however, these models can exhibit artificially rapid error growth, leading to inaccurate predictions and/or climate instability.
We introduce Linearized Multi-Noise Training (LMNT), a regularization technique that deterministically approximates the effect of many small, independent noise realizations added to the model input during training.
arXiv Detail & Related papers (2022-11-09T23:40:52Z)
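The simplest instance of this noise-equivalence idea can be made concrete: for a linear readout fitted by least squares, averaging over many small independent input-noise realizations is, in expectation, identical to a deterministic Tikhonov (ridge) penalty. LMNT extends this linearization to the trained model's dynamics; the sketch below covers only the linear-readout case, as an illustration rather than the paper's method.

```python
# Noise-based regularization and its deterministic equivalent for a linear
# readout: E[(X+E)^T (X+E)] = X^T X + n * sigma^2 * I, so the noise
# ensemble collapses to a ridge penalty.
import numpy as np

def readout_noisy(X, Y, sigma, n_draws=500, seed=0):
    """Average the least-squares normal equations over explicit noise draws."""
    rng = np.random.default_rng(seed)
    G = np.zeros((X.shape[1], X.shape[1]))
    B = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(n_draws):
        Xn = X + sigma * rng.standard_normal(X.shape)
        G += Xn.T @ Xn
        B += Xn.T @ Y
    return np.linalg.solve(G / n_draws, B / n_draws)

def readout_deterministic(X, Y, sigma):
    """Deterministic equivalent: ridge with penalty n * sigma^2."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X + n * sigma**2 * np.eye(d), X.T @ Y)
```

As n_draws grows, the two readouts coincide, which is the sense in which the noise ensemble can be replaced by a deterministic regularizer.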
- Data Feedback Loops: Model-driven Amplification of Dataset Biases
We formalize a system where interactions with one model are recorded as history and scraped as training data in the future.
We analyze its stability over time by tracking changes to a test-time bias statistic.
We find that the degree of bias amplification is closely linked to whether the model's outputs behave like samples from the training distribution.
arXiv Detail & Related papers (2022-09-08T17:35:51Z)
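A toy simulation makes the loop concrete: a "model" estimates the rate of a binary attribute from its training pool, its sampled outputs are scraped back into the pool, and the test-time bias statistic is tracked across rounds. The sharpening rule that miscalibrates the model is an illustrative assumption; a faithful sampler (p_gen = p_hat) would keep the statistic stable, matching the finding above.

```python
# Toy data feedback loop: model outputs re-enter the training pool and a
# bias statistic is tracked over rounds. Sharpening rule is illustrative.
import numpy as np

rng = np.random.default_rng(0)
pool = list(rng.binomial(1, 0.3, size=1000))     # true attribute rate is 0.3

for t in range(20):
    p_hat = float(np.mean(pool))                 # "model" fit to current history
    # a miscalibrated model over-samples the majority outcome
    p_gen = p_hat**2 / (p_hat**2 + (1.0 - p_hat)**2)
    pool.extend(rng.binomial(1, p_gen, size=200))  # outputs scraped as new data
    print(f"round {t:2d}: bias statistic = {np.mean(pool) - 0.3:+.3f}")
```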
- Augmented Bilinear Network for Incremental Multi-Stock Time-Series Classification
We propose a method to efficiently retain the knowledge available in a neural network pre-trained on a set of securities.
In our method, the prior knowledge encoded in a pre-trained neural network is maintained by keeping existing connections fixed.
This knowledge is adjusted for the new securities by a set of augmented connections, which are optimized using the new data.
arXiv Detail & Related papers (2022-07-23T18:54:10Z)
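A minimal sketch of the incremental scheme described above: the pre-trained connections are frozen and an additive set of augmented connections is optimised on the new securities' data. A plain linear layer stands in for the paper's bilinear blocks.

```python
# Frozen pre-trained connections plus trainable augmented connections.
import torch
import torch.nn as nn

class AugmentedLinear(nn.Module):
    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        self.base = pretrained
        for p in self.base.parameters():
            p.requires_grad = False              # prior knowledge kept fixed
        self.delta = nn.Linear(pretrained.in_features,
                               pretrained.out_features, bias=False)
        nn.init.zeros_(self.delta.weight)        # start exactly at the old model

    def forward(self, x):
        return self.base(x) + self.delta(x)      # augmented connections adjust it
```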
- CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are commonly used to alleviate this data-bias issue.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data.
arXiv Detail & Related papers (2022-02-11T13:49:51Z)
- Learning Non-Stationary Time-Series with Dynamic Pattern Extractions
State-of-the-art algorithms achieve decent performance on stationary temporal data.
However, traditional algorithms that tackle stationary time-series do not apply to non-stationary series such as those found in Forex trading.
This paper investigates applicable models that can improve the accuracy of forecasting future trends of non-stationary time-series sequences.
arXiv Detail & Related papers (2021-11-20T10:52:37Z)
- Bilinear Input Normalization for Neural Networks in Financial Forecasting
We propose a novel data-driven normalization method for deep neural networks that handle high-frequency financial time-series.
The proposed normalization scheme takes into account the bimodal characteristic of financial time-series.
Our experiments, conducted with state-of-the-art neural networks and high-frequency data, show significant improvements over other normalization techniques.
arXiv Detail & Related papers (2021-09-01T07:52:03Z)
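The two-mode character of the normalization can be sketched as standardising each input window along its time axis and then its feature axis, with learnable per-mode scales; the ordering of the passes and the parameter sharing here are assumptions rather than the paper's exact scheme.

```python
# Two-mode (time and feature) window normalization with learnable scales.
import torch
import torch.nn as nn

class TwoModeNorm(nn.Module):
    def __init__(self, window_len, n_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.time_scale = nn.Parameter(torch.ones(window_len, 1))
        self.feat_scale = nn.Parameter(torch.ones(1, n_features))

    def forward(self, x):                        # x: (batch, window_len, n_features)
        x = (x - x.mean(dim=1, keepdim=True)) / (x.std(dim=1, keepdim=True) + self.eps)
        x = (x - x.mean(dim=2, keepdim=True)) / (x.std(dim=2, keepdim=True) + self.eps)
        return x * self.time_scale * self.feat_scale
```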
- Learning summary features of time series for likelihood free inference
We present a data-driven strategy for automatically learning summary features from time series data.
Our results indicate that learning summary features from data can compete with and even outperform likelihood-free inference (LFI) methods based on hand-crafted values.
arXiv Detail & Related papers (2020-12-04T19:21:37Z)
- Conditional Mutual information-based Contrastive Loss for Financial Time Series Forecasting
We present a representation learning framework for financial time series forecasting.
We propose to first learn compact representations from time series data, and then use the learned representations to train a simpler model for predicting time-series movements.
arXiv Detail & Related papers (2020-02-18T15:24:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.