LoCoV: low dimension covariance voting algorithm for portfolio
optimization
- URL: http://arxiv.org/abs/2204.00204v1
- Date: Fri, 1 Apr 2022 04:42:56 GMT
- Title: LoCoV: low dimension covariance voting algorithm for portfolio
optimization
- Authors: JunTao Duan, Ionel Popescu
- Abstract summary: We analyze the random matrix aspects of portfolio optimization and identify the order of errors in sample optimal portfolio weight.
We also provide LoCoV (low dimension covariance voting) algorithm to reduce error inherited from random samples.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Minimum-variance portfolio optimization relies on an accurate
covariance estimator to obtain optimal portfolios. However, the sample
covariance matrix usually suffers from large estimation error when the sample
size $n$ is not significantly larger than the number of assets $p$. We analyze
the random matrix aspects of portfolio optimization, identify the order of the
errors in the sample optimal portfolio weights, and show that portfolio risk is
underestimated when using samples. We also provide the LoCoV (low dimension
covariance voting) algorithm to reduce the error inherited from random samples.
In various experiments, LoCoV is shown to outperform the classical method by a
large margin.
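To make the problem concrete, here is a minimal numpy sketch of the classical plug-in approach the abstract critiques: the global minimum-variance weights $w = \Sigma^{-1}\mathbf{1} / (\mathbf{1}^T \Sigma^{-1} \mathbf{1})$ computed from a sample covariance matrix. The diagonal `true_cov`, the sizes `p` and `n`, and the comparison at the end are illustrative assumptions, not the paper's setup; the LoCoV algorithm itself is not reproduced here.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance weights: w = cov^{-1} 1 / (1^T cov^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

rng = np.random.default_rng(0)
p, n = 50, 100                      # assets p, samples n: n is not >> p
true_cov = np.diag(rng.uniform(0.5, 2.0, p))

# Plug-in estimate from n i.i.d. return samples
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
sample_cov = np.cov(X, rowvar=False)

w_hat = min_variance_weights(sample_cov)

# The in-sample risk of the plug-in portfolio understates its true risk,
# matching the underestimation phenomenon described in the abstract.
risk_in = w_hat @ sample_cov @ w_hat
risk_true = w_hat @ true_cov @ w_hat
print(risk_in < risk_true)
```

Because `w_hat` is optimized against the noisy `sample_cov`, its in-sample variance is biased downward relative to the variance it actually realizes under `true_cov`, and the gap grows as $p/n$ approaches 1.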
Related papers
- Neural Nonlinear Shrinkage of Covariance Matrices for Minimum Variance Portfolio Optimization [1.2001699611848735]
It is a hybrid approach that integrates statistical estimation with machine learning. Empirical results on stock daily returns from the Standard & Poor's 500 Index (S&P 500) demonstrate that the proposed method consistently achieves lower out-of-sample realized risk.
arXiv Detail & Related papers (2026-01-22T02:44:33Z) - Variable selection for minimum-variance portfolios [0.0]
We parameterize minimum-variance portfolio weights as a function of a large pool of firm-level characteristics. We find that the gains from employing ML to select relevant predictors are substantial. Some of the selected predictors that help decrease portfolio risk also increase returns.
arXiv Detail & Related papers (2025-08-20T18:14:39Z) - End-to-End Large Portfolio Optimization for Variance Minimization with Neural Networks through Covariance Cleaning [0.0]
We develop a rotation-invariant neural network that provides the global minimum-variance portfolio. This explicit mathematical mapping offers clear interpretability of each module's role. A single model can be calibrated on panels of a few hundred stocks and applied, without retraining, to one thousand US equities.
arXiv Detail & Related papers (2025-07-02T17:27:29Z) - Contextual Learning for Stochastic Optimization [1.0819408603463425]
Motivated by optimization, we introduce the problem of learning from samples of contextual value distributions. A contextual value distribution can be understood as a family of real-valued distributions, where each sample consists of a context $x$ and a random variable drawn from the corresponding real-valued distribution $D_x$.
arXiv Detail & Related papers (2025-05-22T16:01:49Z) - Nearly Optimal Sample Complexity for Learning with Label Proportions [54.67830198790247]
We investigate Learning from Label Proportions (LLP), a partial information setting where examples in a training set are grouped into bags. Despite the partial observability, the goal is still to achieve small regret at the level of individual examples. We give results on the sample complexity of LLP under square loss, showing that our sample complexity is essentially optimal.
arXiv Detail & Related papers (2025-05-08T15:45:23Z) - Leveraging Sparsity for Sample-Efficient Preference Learning: A Theoretical Perspective [16.610925506252716]
This paper considers the sample-efficiency of preference learning, which models and predicts human choices based on comparative judgments.
Under the sparse random utility model, where the parameter of the reward function is $k$-sparse, the minimax optimal rate can be reduced to $\Theta(k/n \log(d/k))$.
arXiv Detail & Related papers (2025-01-30T11:41:13Z) - Precise Asymptotics of Bagging Regularized M-estimators [5.165142221427928]
We characterize the squared prediction risk of ensemble estimators obtained through subagging (subsample bootstrap aggregating) regularized M-estimators.
Key to our analysis is a new result on the joint behavior of correlations between the estimator and residual errors on overlapping subsamples.
Joint optimization of subsample size, ensemble size, and regularization can significantly outperform regularizer optimization alone on the full data.
arXiv Detail & Related papers (2024-09-23T17:48:28Z) - Portfolio Optimization with Robust Covariance and Conditional Value-at-Risk Constraints [0.0]
We evaluated the performance of a large-cap portfolio using various forms of Ledoit shrinkage covariance and the robust Gerber covariance matrix.
Robust estimators can outperform the market capitalization-weighted benchmark portfolio, particularly during bull markets.
We incorporated the unsupervised K-means clustering algorithm into the optimization algorithm.
arXiv Detail & Related papers (2024-06-02T03:50:20Z) - Forecasting Large Realized Covariance Matrices: The Benefits of Factor
Models and Shrinkage [1.0323063834827415]
We decompose the return covariance matrix using standard firm-level factors and use sectoral restrictions in the residual covariance matrix.
Our methodology improves forecasting precision relative to standard benchmarks and leads to better estimates of minimum variance portfolios.
arXiv Detail & Related papers (2023-03-22T16:38:22Z) - Optimal Algorithms for Mean Estimation under Local Differential Privacy [55.32262879188817]
We show that PrivUnit achieves the optimal variance among a large family of locally private randomizers.
We also develop a new variant of PrivUnit based on the Gaussian distribution which is more amenable to mathematical analysis and enjoys the same optimality guarantees.
arXiv Detail & Related papers (2022-05-05T06:43:46Z) - Vector Optimization with Stochastic Bandit Feedback [10.66048003460524]
We introduce vector optimization problems with geometric bandit feedback.
We consider $K$ designs, with multi-dimensional mean reward vectors, which are partially ordered according to a polyhedral ordering cone $C$.
arXiv Detail & Related papers (2021-10-23T22:38:54Z) - Optimal Off-Policy Evaluation from Multiple Logging Policies [77.62012545592233]
We study off-policy evaluation from multiple logging policies, each generating a dataset of fixed size, i.e., stratified sampling.
We find the OPE estimator for multiple loggers with minimum variance for any instance, i.e., the efficient one.
arXiv Detail & Related papers (2020-10-21T13:43:48Z) - Machine Learning's Dropout Training is Distributionally Robust Optimal [10.937094979510212]
This paper shows that dropout training in Generalized Linear Models provides out-of-sample expected loss guarantees.
It also provides a novel, parallelizable, Unbiased Multi-Level Monte Carlo algorithm to speed-up the implementation of dropout training.
arXiv Detail & Related papers (2020-09-13T23:13:28Z) - FANOK: Knockoffs in Linear Time [73.5154025911318]
We describe a series of algorithms that efficiently implement Gaussian model-X knockoffs to control the false discovery rate on large scale feature selection problems.
We test our methods on problems with $p$ as large as $500,000$.
arXiv Detail & Related papers (2020-06-15T21:55:34Z) - Bandit Samplers for Training Graph Neural Networks [63.17765191700203]
Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolution Networks (GCNs).
These sampling algorithms are not applicable to more general graph neural networks (GNNs) where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GATs).
arXiv Detail & Related papers (2020-06-10T12:48:37Z) - Breaking the Sample Size Barrier in Model-Based Reinforcement Learning
with a Generative Model [50.38446482252857]
This paper is concerned with the sample efficiency of reinforcement learning, assuming access to a generative model (or simulator).
We first consider $\gamma$-discounted infinite-horizon Markov decision processes (MDPs) with state space $\mathcal{S}$ and action space $\mathcal{A}$.
We prove that a plain model-based planning algorithm suffices to achieve minimax-optimal sample complexity given any target accuracy level.
arXiv Detail & Related papers (2020-05-26T17:53:18Z) - Revisiting SGD with Increasingly Weighted Averaging: Optimization and
Generalization Perspectives [50.12802772165797]
The averaging technique combines all iterative solutions into a single solution.
Experiments have demonstrated the trade-off involved and the effectiveness of increasingly weighted averaging compared with other averaging schemes.
arXiv Detail & Related papers (2020-03-09T18:14:00Z)
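Several of the related papers above address sample-covariance noise through shrinkage. As a minimal illustration of the idea, the sketch below applies linear shrinkage toward a scaled identity target (in the spirit of Ledoit-Wolf; the fixed intensity `alpha=0.3` and the random data are illustrative assumptions, not any paper's calibrated estimator).

```python
import numpy as np

def shrink_to_identity(sample_cov, alpha):
    """Linear shrinkage toward a scaled identity target.

    alpha in [0, 1] is the shrinkage intensity: alpha=0 keeps the sample
    covariance, alpha=1 replaces it entirely with the identity target.
    The target scale mu is the average sample variance, so the trace
    (total variance) is preserved for any alpha.
    """
    p = sample_cov.shape[0]
    mu = np.trace(sample_cov) / p
    return (1 - alpha) * sample_cov + alpha * mu * np.eye(p)

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 50))     # n = 100 samples, p = 50 assets
S = np.cov(X, rowvar=False)
S_shrunk = shrink_to_identity(S, alpha=0.3)

# Shrinkage pulls extreme eigenvalues toward the mean, improving the
# conditioning of the matrix inverted in minimum-variance optimization.
print(np.linalg.cond(S_shrunk) < np.linalg.cond(S))
```

A better-conditioned estimator stabilizes the $\Sigma^{-1}\mathbf{1}$ solve in minimum-variance weights, which is one standard remedy for the $n \approx p$ error regime that LoCoV also targets.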
This list is automatically generated from the titles and abstracts of the papers in this site.