Adaptive Learning on Time Series: Method and Financial Applications
- URL: http://arxiv.org/abs/2110.11156v1
- Date: Thu, 21 Oct 2021 13:59:54 GMT
- Title: Adaptive Learning on Time Series: Method and Financial Applications
- Authors: Parley Ruogu Yang, Ryan Lucas, Camilla Schelpe
- Abstract summary: We use Adaptive Learning to forecast S&P 500 returns across multiple forecast horizons.
We find that Adaptive Learning models are on par with, if not better than, the best of the parametric models a posteriori.
We present a financial application of the learning results and an interpretation of the learning regime during the 2020 market crash.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We formally introduce a time series statistical learning method, called
Adaptive Learning, capable of handling model selection, out-of-sample
forecasting and interpretation in a noisy environment. Through simulation
studies we demonstrate that the method can outperform traditional model
selection techniques such as AIC and BIC in the presence of regime switching,
and can facilitate window-size determination when the Data Generating
Process is time-varying. Empirically, we use the method to forecast S&P 500
returns across multiple forecast horizons, employing information from the VIX
Curve and the Yield Curve. We find that Adaptive Learning models are generally
on par with, if not better than, the best of the parametric models a
posteriori, evaluated in terms of MSE, while also outperforming them under
cross-validation. We present a financial application of the learning results and an
interpretation of the learning regime during the 2020 market crash. These
studies can be extended in both a statistical direction and in terms of
financial applications.
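
The abstract does not spell out the selection mechanism, but the core idea of re-selecting among candidate forecasting models on a rolling out-of-sample basis can be sketched in a few lines. The following is an illustrative reconstruction, not the authors' algorithm: the candidate models, window sizes, loss, and lookback are all placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regime-switching series: an AR(1) whose coefficient flips mid-sample.
n = 600
y = np.zeros(n)
for t in range(1, n):
    phi = 0.8 if t < n // 2 else -0.5
    y[t] = phi * y[t - 1] + rng.normal(scale=0.1)

def ar1_forecast(window):
    """OLS AR(1) fit on a window; one-step-ahead forecast."""
    x, z = window[:-1], window[1:]
    phi_hat = (x @ z) / (x @ x)
    return phi_hat * window[-1]

def mean_forecast(window):
    """Naive benchmark: the window mean."""
    return window.mean()

candidates = {"ar1": ar1_forecast, "mean": mean_forecast}
window_sizes = [30, 60, 120]
lookback = 20  # how much recent forecast error drives the selection

errors = {(m, w): [] for m in candidates for w in window_sizes}
forecasts, chosen = [], []

for t in range(200, n - 1):
    # Score every (model, window) pair on its forecast of the point just seen.
    for (m, w) in errors:
        f = candidates[m](y[t - w:t])
        errors[(m, w)].append((f - y[t]) ** 2)
    # Adaptive step: select the pair with the lowest recent out-of-sample MSE
    # and use it to forecast y[t + 1].  No future information is used.
    recent = {k: np.mean(v[-lookback:]) for k, v in errors.items()}
    best = min(recent, key=recent.get)
    chosen.append(best)
    m, w = best
    forecasts.append(candidates[m](y[t - w + 1:t + 1]))

mse = np.mean((np.array(forecasts) - y[201:]) ** 2)
print("adaptive out-of-sample MSE:", round(float(mse), 5))
print("most frequently selected (model, window):", max(set(chosen), key=chosen.count))
```

A parametric benchmark chosen once by AIC or BIC commits to a single (model, window) pair; the adaptive loop instead re-evaluates the pair as the regime shifts, which is the behavior the paper's simulation studies reward.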
Related papers
- Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment [65.15914284008973]
We propose to leverage an Inverse Reinforcement Learning (IRL) technique to simultaneously build a reward model and a policy model.
We show that the proposed algorithms converge to the stationary solutions of the IRL problem.
Our results indicate that it is beneficial to leverage reward learning throughout the entire alignment process.
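The summary is terse, so a minimal sketch may help. In the contextual-bandit special case, maximum-entropy IRL over demonstrations reduces to fitting a reward whose softmax is the policy, i.e., multinomial logistic regression. This is a generic textbook reduction, not the paper's algorithm; all names and dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic demonstrations: contexts X with actions A drawn from a softmax
# policy under a hidden "true" reward r*(x, a) = x @ W_true[:, a].
n, d, k = 2000, 5, 3
W_true = rng.normal(size=(d, k))
X = rng.normal(size=(n, d))
logits = X @ W_true
P = np.exp(logits - logits.max(axis=1, keepdims=True))
P /= P.sum(axis=1, keepdims=True)
A = np.array([rng.choice(k, p=p) for p in P])

def policy(W):
    """Softmax policy induced by the linear reward model r(x, a) = x @ W[:, a]."""
    z = X @ W
    q = np.exp(z - z.max(axis=1, keepdims=True))
    return q / q.sum(axis=1, keepdims=True)

# Fit the reward by gradient ascent on the demonstrations' log-likelihood;
# the policy updates implicitly because it is the softmax of the reward.
W = np.zeros((d, k))
for _ in range(300):
    Q = policy(W)
    G = Q.copy()
    G[np.arange(n), A] -= 1.0        # gradient of the NLL w.r.t. the logits
    W -= 0.1 * X.T @ G / n

print("mean demo log-likelihood:",
      round(float(np.log(policy(W)[np.arange(n), A]).mean()), 3))
```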
arXiv Detail & Related papers (2024-05-28T07:11:05Z)
- An Emulator for Fine-Tuning Large Language Models using Small Language Models [91.02498576056057]
We introduce emulated fine-tuning (EFT), a principled and practical method for sampling from a distribution that approximates the result of pre-training and fine-tuning at different scales.
We show that EFT enables test-time adjustment of competing behavioral traits like helpfulness and harmlessness without additional training.
Finally, a special case of emulated fine-tuning, which we call LM up-scaling, avoids resource-intensive fine-tuning of large pre-trained models by ensembling them with small fine-tuned models.
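The up-scaling special case is plain arithmetic on log-probabilities: add the small model's fine-tuning delta to the large base model and renormalize. A minimal sketch over toy next-token distributions (all three distributions here are fabricated for illustration):

```python
import numpy as np

def log_softmax(z):
    z = z - z.max()
    return z - np.log(np.exp(z).sum())

rng = np.random.default_rng(2)
vocab = 8

# Toy next-token log-probabilities for three models over a shared vocabulary.
logp_base_large = log_softmax(rng.normal(size=vocab))   # large pre-trained
logp_base_small = log_softmax(rng.normal(size=vocab))   # small pre-trained
logp_ft_small = log_softmax(rng.normal(size=vocab))     # small fine-tuned

# Up-scaling: emulate "large pre-training + large fine-tuning" by adding the
# small model's fine-tuning delta to the large base model, then renormalizing.
delta = logp_ft_small - logp_base_small
logp_upscaled = log_softmax(logp_base_large + delta)

print("up-scaled next-token probabilities:", np.exp(logp_upscaled).round(3))
```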
arXiv Detail & Related papers (2023-10-19T17:57:16Z)
- Towards a Prediction of Machine Learning Training Time to Support Continuous Learning Systems Development [5.207307163958806]
We present an empirical study of the Full Parameter Time Complexity (FPTC) approach by Zheng et al.
We study the formulations proposed for the Logistic Regression and Random Forest classifiers.
From the conducted study, we observe that the prediction of training time is closely tied to the context.
arXiv Detail & Related papers (2023-09-20T11:35:03Z)
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- Stabilizing Machine Learning Prediction of Dynamics: Noise and Noise-inspired Regularization [58.720142291102135]
Recent work has shown that machine learning (ML) models can be trained to accurately forecast the dynamics of chaotic dynamical systems.
In the absence of mitigating techniques, however, this approach can result in artificially rapid error growth, leading to inaccurate predictions and/or climate instability.
We introduce Linearized Multi-Noise Training (LMNT), a regularization technique that deterministically approximates the effect of many small, independent noise realizations added to the model input during training.
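The paper's exact construction is not reproduced here, but the classical fact it leans on, that for a linear readout, training on many small input-noise realizations is equivalent in expectation to a deterministic ridge-type penalty, can be checked numerically. A sketch under that assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
d, T, sigma = 6, 400, 0.1

X = rng.normal(size=(d, T))                    # inputs, columns are samples
W_true = rng.normal(size=(1, d))
Y = W_true @ X + 0.05 * rng.normal(size=(1, T))

# (a) Least squares on many independently noise-corrupted copies of the inputs.
reps = 500
Xn = np.hstack([X + sigma * rng.normal(size=X.shape) for _ in range(reps)])
Yn = np.hstack([Y] * reps)
W_noise = Yn @ Xn.T @ np.linalg.inv(Xn @ Xn.T)

# (b) Deterministic equivalent: ridge regression with penalty sigma^2 * T,
#     the expected-loss minimizer of the noisy problem above.
W_ridge = Y @ X.T @ np.linalg.inv(X @ X.T + sigma**2 * T * np.eye(d))

print("max |W_noise - W_ridge|:", float(np.abs(W_noise - W_ridge).max()))
```

As `reps` grows the two solutions coincide; LMNT, as summarized above, builds a deterministic approximation of this kind of noise-averaged objective rather than sampling the noise.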
arXiv Detail & Related papers (2022-11-09T23:40:52Z)
- fETSmcs: Feature-based ETS model component selection [8.99236558175168]
We propose an efficient approach for ETS model selection by training classifiers on simulated data to predict appropriate model component forms for a given time series.
We evaluate our approach on the widely used forecasting competition data set M4 in terms of both point forecasts and prediction intervals.
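A stripped-down version of that pipeline: simulate series with known component forms, compute a few time-series features, and train a classifier to predict the component. The two features and the binary trend label below stand in for the paper's richer feature set and full ETS component space.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)

def simulate(trend):
    """Length-100 series: random-walk noise, with or without an additive trend."""
    t = np.arange(100)
    slope = rng.uniform(0.05, 0.2) if trend else 0.0
    return slope * t + np.cumsum(rng.normal(scale=0.5, size=100))

def features(y):
    """Tiny feature vector: lag-1 autocorrelation and linear-trend R^2."""
    yc = y - y.mean()
    acf1 = (yc[:-1] @ yc[1:]) / (yc @ yc)
    r = np.corrcoef(np.arange(len(y)), y)[0, 1]
    return [acf1, r * r]

labels = rng.integers(0, 2, size=500)           # 1 = trended, 0 = not
feats = np.array([features(simulate(bool(l))) for l in labels])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(feats, labels)

# Given a new series, the classifier proposes the component form to use.
y_new = simulate(trend=True)
print("predicted component (1 = include trend):", clf.predict([features(y_new)])[0])
```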
arXiv Detail & Related papers (2022-06-26T13:52:43Z)
- Adaptive learning for financial markets mixing model-based and model-free RL for volatility targeting [0.0]
Model-Free Reinforcement Learning has achieved meaningful results in stable environments but, to this day, it remains problematic in regime-changing environments like financial markets.
We propose to combine the best of both techniques by using Model-Free Deep Reinforcement Learning to select among various model-based approaches.
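One way to picture the combination is a model-free layer choosing among model-based volatility-targeting rules. Below, an epsilon-greedy bandit picks between two placeholder EWMA volatility estimators each period; the estimators, target, and reward are illustrative, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(5)
T, target_vol = 2000, 0.10 / np.sqrt(252)       # ~10% annualized target

# Returns with a regime switch in volatility halfway through the sample.
vol = np.where(np.arange(T) < T // 2, 0.01, 0.03)
ret = rng.normal(scale=vol)

def ewma_vol(r, lam):
    """EWMA volatility estimate (a stand-in for a model-based forecaster)."""
    v = r[0] ** 2
    for x in r[1:]:
        v = lam * v + (1 - lam) * x ** 2
    return np.sqrt(v)

models = {"fast": lambda r: ewma_vol(r, 0.90), "slow": lambda r: ewma_vol(r, 0.99)}
names = list(models)

# Model-free layer: epsilon-greedy over the estimators, rewarded by how close
# the realized position volatility lands to the target.
value = {m: 0.0 for m in names}
counts = {m: 0 for m in names}
eps = 0.1
for t in range(100, T):
    m = names[rng.integers(len(names))] if rng.random() < eps else max(value, key=value.get)
    sigma_hat = models[m](ret[t - 100:t])
    position = target_vol / sigma_hat            # volatility-targeting sizing
    reward = -abs(abs(position * ret[t]) - target_vol)
    counts[m] += 1
    value[m] += (reward - value[m]) / counts[m]  # incremental mean update

print({m: round(value[m], 5) for m in names})
```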
arXiv Detail & Related papers (2021-04-19T19:20:22Z)
- Interpretable ML-driven Strategy for Automated Trading Pattern Extraction [2.7910505923792646]
We propose a volume-based data pre-processing method for financial time series analysis.
We use a statistical approach for assessing the performance of the method.
Our analysis shows that the proposed method enables successful classification of financial time-series patterns.
arXiv Detail & Related papers (2021-03-23T09:55:46Z)
- On the Impact of Applying Machine Learning in the Decision-Making of Self-Adaptive Systems [17.93069260609691]
We use computational learning theory to determine a theoretical bound on the impact of the machine learning method on the predictions made by the verifier.
To conclude, we look at opportunities for future research in this area.
arXiv Detail & Related papers (2021-03-18T11:59:50Z)
- Learning summary features of time series for likelihood free inference [93.08098361687722]
We present a data-driven strategy for automatically learning summary features from time series data.
Our results indicate that learning summary features from data can compete with and even outperform LFI methods based on hand-crafted values.
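The recipe behind such methods is straightforward to demonstrate end-to-end: simulate (parameter, series) pairs, regress the parameter on the raw series, and use the regressor's output as the summary statistic inside rejection ABC. A minimal sketch, substituting a random forest for the paper's learned network:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)

def simulate(phi, n=100):
    """AR(1) simulator; phi is the parameter we want to infer."""
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal()
    return y

# 1) Learn the summary: regress phi on the raw series over simulated pairs.
phis = rng.uniform(-0.9, 0.9, size=2000)
sims = np.array([simulate(p) for p in phis])
summary = RandomForestRegressor(n_estimators=100, random_state=0).fit(sims, phis)

# 2) Rejection ABC, with the learned regressor output as the summary statistic.
y_obs = simulate(0.6)
s_obs = summary.predict([y_obs])[0]
draws = rng.uniform(-0.9, 0.9, size=5000)
s_sim = summary.predict(np.array([simulate(p) for p in draws]))
accepted = draws[np.abs(s_sim - s_obs) < 0.05]

print("accepted draws:", accepted.size,
      "| posterior mean estimate of phi:", round(float(accepted.mean()), 3))
```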
arXiv Detail & Related papers (2020-12-04T19:21:37Z)
- Model-Augmented Actor-Critic: Backpropagating through Paths [81.86992776864729]
Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator.
We show how to make more effective use of the model by exploiting its differentiability.
arXiv Detail & Related papers (2020-05-16T19:18:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.