Applying Dynamic Training-Subset Selection Methods Using Genetic
Programming for Forecasting Implied Volatility
- URL: http://arxiv.org/abs/2007.07207v1
- Date: Mon, 29 Jun 2020 21:28:30 GMT
- Authors: Sana Ben Hamida and Wafa Abdelmalek and Fathi Abid
- Abstract summary: This paper aims to improve the accuracy of forecasting implied volatility using an extension of genetic programming (GP).
Four dynamic training-subset selection methods are proposed based on random, sequential or adaptive subset selection.
Results show that the dynamic approach improves the forecasting performance of the generated GP models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Volatility is a key variable in option pricing, trading and hedging
strategies. The purpose of this paper is to improve the accuracy of forecasting
implied volatility using an extension of genetic programming (GP) by means of
dynamic training-subset selection methods. These methods manipulate the
training data in order to improve out-of-sample fitting. When applied with a
static subset selection method using a single training sample, GP can generate
forecasting models that are not adapted to some out-of-sample fitness cases. To
improve the predictive accuracy of the generated GP patterns, dynamic subset
selection methods are introduced into the GP algorithm, allowing the training
sample to change regularly during evolution.
Four dynamic training-subset selection methods are proposed, based on random,
sequential or adaptive subset selection. The last approach uses an adaptive
subset weight that measures sample difficulty according to the fitness-case
errors. Using real data from SP500 index options, these techniques are compared
to the static subset selection method. Based on total MSE and the percentage of
non-fitted observations, results show that the dynamic approach improves the
forecasting performance of the generated GP models, especially those obtained
from the adaptive random training-subset selection method applied to the whole
set of training samples.
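The adaptive selection idea described in the abstract (weight each fitness case by its current error, then resample the training subset each generation so that harder cases appear more often) can be sketched as follows. This is a minimal illustration under assumed details: the function names and the proportional-to-error weighting rule are hypothetical, not the paper's exact formulation.

```python
import random

def adaptive_subset_weights(errors, eps=1e-9):
    # Hypothetical weighting rule: selection probability proportional to
    # the current fitness-case error, so poorly fitted cases are favored.
    total = sum(errors) + eps * len(errors)
    return [(e + eps) / total for e in errors]

def select_training_subset(cases, errors, k, rng=None):
    # Draw a training subset of size k (with replacement) for the next
    # GP generation, biased toward the cases with the largest errors.
    rng = rng or random.Random()
    return rng.choices(cases, weights=adaptive_subset_weights(errors), k=k)

# Toy usage: 10 fitness cases, case 7 currently has the largest error,
# so it is the most likely to enter the next training subset.
cases = list(range(10))
errors = [0.1] * 10
errors[7] = 5.0
subset = select_training_subset(cases, errors, k=5, rng=random.Random(0))
```

In a full GP loop, `errors` would be recomputed from the best individual's residuals each generation before reselecting the subset, which is what lets the training sample track the currently difficult regions of the data.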
Related papers
- Multi-model Ensemble Conformal Prediction in Dynamic Environments [14.188004615463742]
We introduce a novel adaptive conformal prediction framework, where the model used for creating prediction sets is selected on the fly from multiple candidate models.
The proposed algorithm is proven to achieve strongly adaptive regret over all intervals while maintaining valid coverage.
arXiv Detail & Related papers (2024-11-06T05:57:28Z)
- Test-Time Model Adaptation with Only Forward Passes [68.11784295706995]
Test-time adaptation has proven effective in adapting a given trained model to unseen test samples with potential distribution shifts.
We propose a test-time Forward-Optimization Adaptation (FOA) method.
FOA runs on quantized 8-bit ViT, outperforms gradient-based TENT on full-precision 32-bit ViT, and achieves an up to 24-fold memory reduction on ImageNet-C.
arXiv Detail & Related papers (2024-04-02T05:34:33Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed test sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Thompson sampling for improved exploration in GFlowNets [75.89693358516944]
Generative flow networks (GFlowNets) are amortized variational inference algorithms that treat sampling from a distribution over compositional objects as a sequential decision-making problem with a learnable action policy.
We show in two domains that TS-GFN yields improved exploration and thus faster convergence to the target distribution than the off-policy exploration strategies used in past work.
arXiv Detail & Related papers (2023-06-30T14:19:44Z)
- A Robust Classifier Under Missing-Not-At-Random Sample Selection Bias [15.628927478079913]
In statistics, Greene's method formulates this type of sample selection with logistic regression as the prediction model.
We propose BiasCorr, an algorithm that improves on Greene's method by modifying the original training set.
We provide theoretical guarantee for the improvement of BiasCorr over Greene's method by analyzing its bias.
arXiv Detail & Related papers (2023-05-25T01:39:51Z)
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- Adaptive Selection of the Optimal Strategy to Improve Precision and Power in Randomized Trials [2.048226951354646]
We show how to select the adjustment approach -- which variables and in which form -- to maximize precision.
Our approach maintains Type-I error control (under the null) and offers substantial gains in precision.
When applied to real data, we also see meaningful efficiency improvements overall and within subgroups.
arXiv Detail & Related papers (2022-10-31T16:25:38Z)
- Predictive machine learning for prescriptive applications: a coupled training-validating approach [77.34726150561087]
We propose a new method for training predictive machine learning models for prescriptive applications.
This approach is based on tweaking the validation step in the standard training-validating-testing scheme.
Several experiments with synthetic data demonstrate promising results in reducing the prescription costs in both deterministic and real models.
arXiv Detail & Related papers (2021-10-22T15:03:20Z)
- A Scalable MIP-based Method for Learning Optimal Multivariate Decision Trees [17.152864798265455]
We propose a novel MIP formulation, based on a 1-norm support vector machine model, to train a multivariate ODT for classification problems.
We provide cutting plane techniques that tighten the linear relaxation of the MIP formulation, in order to improve run times to reach optimality.
We demonstrate that our formulation outperforms its counterparts in the literature by an average of about 10% in terms of mean out-of-sample testing accuracy.
arXiv Detail & Related papers (2020-11-06T14:17:41Z)
- Evolutionary Selective Imitation: Interpretable Agents by Imitation Learning Without a Demonstrator [1.370633147306388]
We propose a new method for training an agent via an evolutionary strategy (ES).
In every iteration we replace a subset of the samples with samples from the best trajectories discovered so far.
The evaluation procedure for this set is to train, via supervised learning, a randomly initialised neural network (NN) to imitate the set.
arXiv Detail & Related papers (2020-09-17T16:25:31Z)
- Dynamic Scale Training for Object Detection [111.33112051962514]
We propose a Dynamic Scale Training paradigm (abbreviated as DST) to mitigate scale variation challenge in object detection.
Experimental results demonstrate the efficacy of our proposed DST towards scale variation handling.
It does not introduce inference overhead and could serve as a free lunch for general detection configurations.
arXiv Detail & Related papers (2020-04-26T16:48:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.