Pareto Driven Surrogate (ParDen-Sur) Assisted Optimisation of
Multi-period Portfolio Backtest Simulations
- URL: http://arxiv.org/abs/2209.13528v1
- Date: Tue, 13 Sep 2022 07:29:20 GMT
- Title: Pareto Driven Surrogate (ParDen-Sur) Assisted Optimisation of
Multi-period Portfolio Backtest Simulations
- Authors: Terence L. van Zyl and Matthew Woolway and Andrew Paskaramoorthy
- Abstract summary: This study presents the ParDen-Sur modelling framework to efficiently perform the required hyper-parameter search.
ParDen-Sur extends previous surrogate frameworks by including a reservoir sampling-based look-ahead mechanism for offspring generation in evolutionary algorithms (EAs) alongside the traditional acceptance sampling scheme.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Portfolio management is a multi-period multi-objective optimisation problem
subject to a wide range of constraints. However, in practice, portfolio
management is treated as a single-period problem partly due to the
computationally burdensome hyper-parameter search procedure needed to construct
a multi-period Pareto frontier. This study presents the ParDen-Sur
modelling framework to efficiently perform the required hyper-parameter search.
ParDen-Sur extends previous surrogate frameworks by including a reservoir
sampling-based look-ahead mechanism for offspring generation in evolutionary
algorithms (EAs) alongside the traditional acceptance sampling scheme. We evaluate this
framework against, and in conjunction with, several seminal multi-objective (MO) EAs
on two datasets for both the single- and multi-period use cases. Our results
show that ParDen-Sur can speed up the exploration for optimal
hyper-parameters by almost $2\times$, with a statistically significant
improvement of the Pareto frontiers, across multiple EAs, for both
datasets and use cases.
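To make the abstract's core mechanism concrete, the sketch below illustrates surrogate-assisted offspring generation with a reservoir-sampling look-ahead, in the spirit of ParDen-Sur. It is a minimal, hypothetical Python sketch and not the authors' implementation: the helper names (`mutate`, `lookahead_offspring`, `toy_backtest`), the single-objective scalar surrogate, and the use of a scikit-learn GradientBoostingRegressor are illustrative assumptions, whereas the paper itself targets multi-objective Pareto frontiers over multi-period backtests.

```python
import random
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def mutate(x, sigma=0.1):
    """Hypothetical mutation operator: Gaussian perturbation of a hyper-parameter vector."""
    return x + np.random.normal(0.0, sigma, size=x.shape)

def reservoir_sample(stream, k):
    """Algorithm R: draw a uniform sample of k items from a stream of unknown length."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = random.randint(0, i)  # inclusive, so replacement happens with prob k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

def lookahead_offspring(parents, surrogate, n_offspring, lookahead=5):
    """Look-ahead step: over-generate candidates, reservoir-sample a pool, and keep only
    the candidates the surrogate predicts to score best, sparing the rest a backtest."""
    stream = (mutate(random.choice(parents)) for _ in range(lookahead * n_offspring))
    pool = reservoir_sample(stream, k=2 * n_offspring)
    preds = surrogate.predict(np.vstack(pool))   # cheap surrogate evaluations
    best = np.argsort(-preds)[:n_offspring]      # higher predicted score is better here
    return [pool[i] for i in best]

# Toy usage: a stand-in objective replaces the expensive multi-period backtest simulation.
def toy_backtest(x):
    return -float(np.sum((x - 0.5) ** 2))

parents = [np.random.uniform(0.0, 1.0, size=4) for _ in range(10)]
scores = [toy_backtest(p) for p in parents]
surrogate = GradientBoostingRegressor().fit(np.vstack(parents), scores)
offspring = lookahead_offspring(parents, surrogate, n_offspring=5)
# Traditional acceptance-sampling step: only offspring whose true score clears a
# threshold (here, the parent median) are admitted to the next generation.
accepted = [c for c in offspring if toy_backtest(c) >= float(np.median(scores))]
```

In the actual framework, the surrogate and the acceptance test operate over multi-objective backtest results and non-dominated sorting rather than a single scalar score; the sketch only conveys where the surrogate and the reservoir-based look-ahead sit inside the EA loop.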
Related papers
- Efficient Pareto Manifold Learning with Low-Rank Structure [31.082432589391953]
Multi-task learning is inherently a multi-objective optimization problem.
We propose a novel approach that integrates a main network with several low-rank matrices.
It significantly reduces the number of parameters and facilitates the extraction of shared features.
arXiv Detail & Related papers (2024-07-30T11:09:27Z) - Pareto Low-Rank Adapters: Efficient Multi-Task Learning with Preferences [49.14535254003683]
PaLoRA is a novel parameter-efficient method that augments the original model with task-specific low-rank adapters.
Our experimental results show that PaLoRA outperforms MTL and PFL baselines across various datasets.
arXiv Detail & Related papers (2024-07-10T21:25:51Z) - Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [53.33473557562837]
Solving multi-objective optimization problems for large deep neural networks is a challenging task due to the complexity of the loss landscape and the expensive computational cost.
We propose a practical and scalable approach to solve this problem via mixture of experts (MoE) based model fusion.
By ensembling the weights of specialized single-task models, the MoE module can effectively capture the trade-offs between multiple objectives.
arXiv Detail & Related papers (2024-06-14T07:16:18Z) - MultiZenoTravel: a Tunable Benchmark for Multi-Objective Planning with
Known Pareto Front [71.19090689055054]
Multi-objective AI planning suffers from a lack of benchmarks exhibiting known Pareto Fronts.
We propose a tunable benchmark generator, together with a dedicated solver that provably computes the true Pareto front of the resulting instances.
We show how to characterize the optimal plans for a constrained version of the problem, and then show how to reduce the general problem to the constrained one.
arXiv Detail & Related papers (2023-04-28T07:09:23Z) - A Pareto-optimal compositional energy-based model for sampling and
optimization of protein sequences [55.25331349436895]
Deep generative models have emerged as a popular machine learning-based approach for inverse problems in the life sciences.
These problems often require sampling new designs that satisfy multiple properties of interest in addition to learning the data distribution.
arXiv Detail & Related papers (2022-10-19T19:04:45Z) - Pareto Manifold Learning: Tackling multiple tasks via ensembles of
single-task models [50.33956216274694]
In Multi-Task Learning (MTL), tasks may compete and limit the performance achieved on each other, rather than guiding the optimization to a solution.
We propose Pareto Manifold Learning, an ensembling method in weight space.
arXiv Detail & Related papers (2022-10-18T11:20:54Z) - Efficiently Controlling Multiple Risks with Pareto Testing [34.83506056862348]
We propose a two-stage process which combines multi-objective optimization with multiple hypothesis testing.
We demonstrate the effectiveness of our approach to reliably accelerate the execution of large-scale Transformer models in natural language processing (NLP) applications.
arXiv Detail & Related papers (2022-10-14T15:54:39Z) - Consolidated learning -- a domain-specific model-free optimization
strategy with examples for XGBoost and MIMIC-IV [4.370097023410272]
This paper proposes a new formulation of the tuning problem, called consolidated learning.
In such settings, we are interested in the total optimization time rather than tuning for a single task.
We demonstrate the effectiveness of this approach through an empirical study for XGBoost algorithm and the collection of predictive tasks extracted from the MIMIC-IV medical database.
arXiv Detail & Related papers (2022-01-27T21:38:53Z) - Pareto Navigation Gradient Descent: a First-Order Algorithm for
Optimization in Pareto Set [17.617944390196286]
Modern machine learning applications, such as multi-task learning, require finding optimal model parameters to trade-off multiple objective functions.
We propose a first-order algorithm that approximately solves OPT-in-Pareto using only gradient information.
arXiv Detail & Related papers (2021-10-17T04:07:04Z) - Batched Data-Driven Evolutionary Multi-Objective Optimization Based on
Manifold Interpolation [6.560512252982714]
We propose a framework for implementing batched data-driven evolutionary multi-objective optimization.
It is so general that any off-the-shelf evolutionary multi-objective optimization algorithms can be applied in a plug-in manner.
Our proposed framework is featured with a faster convergence and a stronger resilience to various PF shapes.
arXiv Detail & Related papers (2021-09-12T23:54:26Z) - An Empirical Study of Assumptions in Bayesian Optimisation [61.19427472792523]
In this work we rigorously analyse conventional and non-conventional assumptions inherent to Bayesian optimisation.
We conclude that the majority of hyper-parameter tuning tasks exhibit heteroscedasticity and non-stationarity.
We hope these findings may serve as guiding principles, both for practitioners and for further research in the field.
arXiv Detail & Related papers (2020-12-07T16:21:12Z)