Expert Aggregation for Financial Forecasting
- URL: http://arxiv.org/abs/2111.15365v4
- Date: Thu, 6 Jul 2023 09:38:40 GMT
- Title: Expert Aggregation for Financial Forecasting
- Authors: Carl Remlinger, Marie Brière, Clémence Alasseur, Joseph Mikael
- Abstract summary: Online aggregation of experts combines the forecasts of a finite set of models into a single approach.
Online mixture of experts leads to attractive portfolio performances even in environments characterised by non-stationarity.
Extensions to expert and aggregation specialisations are also proposed to improve the overall mixture on a family of portfolio evaluation metrics.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning algorithms dedicated to financial time series forecasting
have gained a lot of interest. But choosing between several algorithms can be
challenging, as their estimation accuracy may be unstable over time. Online
aggregation of experts combines the forecasts of a finite set of models into a
single approach without making any assumptions about the models. In this paper,
a Bernstein Online Aggregation (BOA) procedure is applied to the construction
of long-short strategies built from individual stock return forecasts coming
from different machine learning models. The online mixture of experts leads to
attractive portfolio performances even in environments characterised by
non-stationarity. The aggregation outperforms the individual algorithms, offering a
higher portfolio Sharpe ratio and lower shortfall with similar turnover.
Extensions to expert and aggregation specialisations are also proposed to
improve the overall mixture on a family of portfolio evaluation metrics.
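The core mechanism described in the abstract, maintaining weights over expert forecasters and updating them online from realized losses, can be sketched as follows. This is a simplified first-order exponentially weighted scheme, not the paper's exact Bernstein Online Aggregation procedure (BOA additionally uses a second-order refinement of the learning rate); the function name, the squared-loss choice, and the fixed learning rate `eta` are illustrative assumptions.

```python
import numpy as np

def aggregate_forecasts(expert_preds, targets, eta=1.0):
    """Online exponentially weighted aggregation of expert forecasts.

    Simplified first-order variant of expert aggregation (the paper's
    BOA procedure adds a second-order term to the weight update).

    expert_preds: (T, K) array, forecast of each of K experts per round.
    targets:      (T,)   array of realized values.
    Returns the (T,) aggregated forecasts and the final weight vector.
    """
    T, K = expert_preds.shape
    log_w = np.zeros(K)                  # start from uniform weights
    agg = np.empty(T)
    for t in range(T):
        w = np.exp(log_w - log_w.max())  # normalize in log-space for stability
        w /= w.sum()
        agg[t] = w @ expert_preds[t]     # convex mixture of expert forecasts
        losses = (expert_preds[t] - targets[t]) ** 2  # squared loss per expert
        log_w -= eta * losses            # downweight poorly performing experts
    return agg, w
```

With two experts, one accurate and one biased, the weights concentrate on the accurate expert after a few rounds, so the mixture tracks the better model without ever choosing it in advance, which is the property the abstract highlights for non-stationary environments.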
Related papers
- Mixture of Efficient Diffusion Experts Through Automatic Interval and Sub-Network Selection [63.96018203905272]
We propose to reduce the sampling cost by pruning a pretrained diffusion model into a mixture of efficient experts.
We demonstrate the effectiveness of our method, DiffPruning, across several datasets.
arXiv Detail & Related papers (2024-09-23T21:27:26Z) - MGCP: A Multi-Grained Correlation based Prediction Network for Multivariate Time Series [54.91026286579748]
We propose a Multi-Grained Correlations-based Prediction Network.
It simultaneously considers correlations at three levels to enhance prediction performance.
It employs adversarial training with an attention mechanism-based predictor and a conditional discriminator to optimize prediction results at the coarse-grained level.
arXiv Detail & Related papers (2024-05-30T03:32:44Z) - On Least Square Estimation in Softmax Gating Mixture of Experts [78.3687645289918]
We investigate the performance of the least squares estimators (LSE) under a deterministic MoE model.
We establish a condition called strong identifiability to characterize the convergence behavior of various types of expert functions.
Our findings have important practical implications for expert selection.
arXiv Detail & Related papers (2024-02-05T12:31:18Z) - Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z) - Online Ensemble of Models for Optimal Predictive Performance with
Applications to Sector Rotation Strategy [0.0]
Asset-specific factors are commonly used to forecast financial returns and quantify asset-specific risk premia.
We develop an online ensemble algorithm that learns to optimize predictive performance.
By utilizing monthly predictions from our ensemble, we develop a sector rotation strategy that significantly outperforms the market.
arXiv Detail & Related papers (2023-03-30T02:25:54Z) - MECATS: Mixture-of-Experts for Quantile Forecasts of Aggregated Time
Series [11.826510794042548]
We introduce a mixture of heterogeneous experts framework called MECATS.
It simultaneously forecasts the values of a set of time series that are related through an aggregation hierarchy.
Different types of forecasting models can be employed as individual experts so that the form of each model can be tailored to the nature of the corresponding time series.
arXiv Detail & Related papers (2021-12-22T05:05:30Z) - Sparse MoEs meet Efficient Ensembles [49.313497379189315]
We study the interplay of two popular classes of such models: ensembles of neural networks and sparse mixtures of experts (sparse MoEs).
We present Efficient Ensemble of Experts (E^3), a scalable and simple ensemble of sparse MoEs that takes the best of both classes of models, while using up to 45% fewer FLOPs than a deep ensemble.
arXiv Detail & Related papers (2021-10-07T11:58:35Z) - Ensembles of Randomized NNs for Pattern-based Time Series Forecasting [0.0]
We propose an ensemble forecasting approach based on randomized neural networks.
A pattern-based representation of time series makes the proposed approach suitable for forecasting time series with multiple seasonality.
Case studies conducted on four real-world forecasting problems verified the effectiveness and superior performance of the proposed ensemble forecasting approach.
arXiv Detail & Related papers (2021-07-08T20:13:50Z) - MegazordNet: combining statistical and machine learning standpoints for
time series forecasting [0.4061135251278187]
MegazordNet is a framework that explores statistical features within a financial series combined with a structured deep learning model for time series forecasting.
We evaluate our approach predicting the closing price of stocks in the S&P 500 using different metrics, and we were able to beat single statistical and machine learning methods.
arXiv Detail & Related papers (2021-06-23T15:06:54Z) - Test-time Collective Prediction [73.74982509510961]
Multiple parties in machine learning want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z) - Gaussian Experts Selection using Graphical Models [7.530615321587948]
Local approximations reduce time complexity by dividing the original dataset into subsets and training a local expert on each subset.
We leverage techniques from the literature on undirected graphical models, using sparse precision matrices that encode conditional dependencies between experts to select the most important experts.
arXiv Detail & Related papers (2021-02-02T14:12:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.