Kernel Minimum Divergence Portfolios
- URL: http://arxiv.org/abs/2110.09516v1
- Date: Fri, 15 Oct 2021 21:29:27 GMT
- Title: Kernel Minimum Divergence Portfolios
- Authors: Linda Chamakh and Zoltán Szabó
- Abstract summary: We propose to use kernel and optimal transport (KOT) based divergences to tackle the task.
We show that such analytic knowledge can lead to faster convergence of MMD estimators.
Numerical experiments demonstrate the improved performance of our KOT estimators.
- Score: 1.776746672434207
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Portfolio optimization is a key challenge in finance with the aim of creating
portfolios matching the investors' preferences. The target distribution approach
relying on the Kullback-Leibler or the $f$-divergence represents one of the
most effective forms of achieving this goal. In this paper, we propose to use
kernel and optimal transport (KOT) based divergences to tackle the task, which
relax the assumptions and the optimization constraints of the previous
approaches. In case of the kernel-based maximum mean discrepancy (MMD) we (i)
prove the analytic computability of the underlying mean embedding for various
target distribution-kernel pairs, (ii) show that such analytic knowledge can
lead to faster convergence of MMD estimators, and (iii) extend the results to
the unbounded exponential kernel with minimax lower bounds. Numerical
experiments demonstrate the improved performance of our KOT estimators both on
synthetic and real-world examples.
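As a rough illustration of the kernel side of the abstract (not the authors' implementation), the sketch below computes an unbiased squared-MMD estimate between samples of a portfolio's return and samples drawn from a target distribution, using a Gaussian kernel. The bandwidth, the toy return data, and the weight vector are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mmd2_unbiased(x, y, bandwidth=1.0):
    """Unbiased estimate of squared MMD between samples x, y (shape (n, d))
    under a Gaussian kernel with the given bandwidth."""
    def k(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2 * bandwidth**2))
    kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
    n, m = len(x), len(y)
    # Remove diagonal terms to make the within-sample averages unbiased.
    term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_x + term_y - 2 * kxy.mean()

# Toy setup: returns of 3 assets, fixed weights w, Gaussian target returns.
returns = rng.normal(0.001, 0.02, size=(500, 3))
w = np.array([0.5, 0.3, 0.2])
portfolio = (returns @ w)[:, None]                # portfolio return samples
target = rng.normal(0.002, 0.01, size=(500, 1))  # desired target distribution
print(mmd2_unbiased(portfolio, target))
```

A minimum-divergence portfolio would then minimize this estimate over the weights w; the paper's contribution is, in part, replacing the target sample with an analytically computed mean embedding for suitable target-kernel pairs.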
Related papers
- Distributed Statistical Min-Max Learning in the Presence of Byzantine
Agents [34.46660729815201]
We consider a multi-agent min-max learning problem, and focus on the emerging challenge of contending with Byzantine adversarial agents.
Our main contribution is to provide a crisp analysis of the proposed robust extra-gradient algorithm for smooth convex-concave and smooth strongly convex-strongly concave functions.
Our rates are near-optimal, and reveal both the effect of adversarial corruption and the benefit of collaboration among the non-faulty agents.
arXiv Detail & Related papers (2022-04-07T03:36:28Z) - Pessimistic Minimax Value Iteration: Provably Efficient Equilibrium
Learning from Offline Datasets [101.5329678997916]
We study episodic two-player zero-sum Markov games (MGs) in the offline setting.
The goal is to find an approximate Nash equilibrium (NE) policy pair based on a dataset collected a priori.
arXiv Detail & Related papers (2022-02-15T15:39:30Z) - Distribution Regression with Sliced Wasserstein Kernels [45.916342378789174]
We propose the first OT-based estimator for distribution regression.
We study the theoretical properties of a kernel ridge regression estimator based on such representation.
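For intuition on the representation behind this paper, here is a hedged Monte-Carlo sketch (not the authors' estimator) of the sliced Wasserstein distance between two equal-size empirical samples: project both samples onto random directions and average the 1-D squared Wasserstein distances, each computed in closed form from order statistics. The number of projections and the toy data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sliced_w2(x, y, n_proj=100):
    """Monte-Carlo sliced 2-Wasserstein distance between equal-size
    empirical samples x, y of shape (n, d)."""
    d = x.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)          # uniform direction on the sphere
        px, py = np.sort(x @ theta), np.sort(y @ theta)
        total += np.mean((px - py) ** 2)        # 1-D W2^2 via sorted samples
    return np.sqrt(total / n_proj)

x = rng.normal(0, 1, size=(300, 5))
y = rng.normal(2, 1, size=(300, 5))
print(sliced_w2(x, y))
```

A sliced Wasserstein kernel, as used in distribution regression, would then plug this distance into a kernel such as exp(-sliced_w2(x, y)**2 / s).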
arXiv Detail & Related papers (2022-02-08T15:21:56Z) - Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between high-dimensional robust statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to stationarity yields an efficient algorithm for these tasks.
arXiv Detail & Related papers (2021-09-23T17:38:24Z) - Tight Mutual Information Estimation With Contrastive Fenchel-Legendre
Optimization [69.07420650261649]
We introduce a novel, simple, and powerful contrastive MI estimator named FLO.
Empirically, our FLO estimator overcomes the limitations of its predecessors and learns more efficiently.
The utility of FLO is verified using an extensive set of benchmarks, which also reveals the trade-offs in practical MI estimation.
arXiv Detail & Related papers (2021-07-02T15:20:41Z) - On Centralized and Distributed Mirror Descent: Exponential Convergence
Analysis Using Quadratic Constraints [8.336315962271396]
Mirror descent (MD) is a powerful first-order optimization technique that subsumes several algorithms including gradient descent (GD)
We study the exact convergence rate of MD in both centralized and distributed cases for strongly convex and smooth problems.
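To make the "MD subsumes GD" remark concrete, the following minimal sketch (not taken from the paper) shows one mirror-descent step under two mirror maps: negative entropy, which yields the exponentiated-gradient update on the probability simplex, and the squared Euclidean norm, which recovers plain gradient descent. The step size and toy linear objective are assumptions.

```python
import numpy as np

def md_entropy_step(x, grad, eta):
    """Mirror-descent step with the negative-entropy mirror map
    (exponentiated gradient); iterates stay on the probability simplex."""
    y = x * np.exp(-eta * grad)
    return y / y.sum()

def md_euclidean_step(x, grad, eta):
    """With the squared-Euclidean mirror map, MD reduces to plain GD."""
    return x - eta * grad

# Minimize f(x) = <c, x> over the simplex; the optimum puts all mass
# on the coordinate with the smallest cost.
c = np.array([3.0, 1.0, 2.0])
x = np.ones(3) / 3
for _ in range(200):
    x = md_entropy_step(x, c, eta=0.5)
print(x)  # mass concentrates on index 1 (the smallest entry of c)
```

The entropic geometry is what makes MD attractive over projected GD on the simplex: the update is multiplicative and never leaves the feasible set.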
arXiv Detail & Related papers (2021-05-29T23:05:56Z) - Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z) - Finite Sample Analysis of Minimax Offline Reinforcement Learning:
Completeness, Fast Rates and First-Order Efficiency [83.02999769628593]
We offer a theoretical characterization of off-policy evaluation (OPE) in reinforcement learning.
We show that the minimax approach enables us to achieve a fast rate of convergence for weights and quality functions.
We present the first finite-sample result with first-order efficiency in non-tabular environments.
arXiv Detail & Related papers (2021-02-05T03:20:39Z) - Information-Theoretic Multi-Objective Bayesian Optimization with
Continuous Approximations [44.25245545568633]
We propose Information-Theoretic Multi-Objective Bayesian Optimization with Continuous Approximations (iMOCA) to solve this problem.
Our experiments on diverse synthetic and real-world benchmarks show that iMOCA significantly improves over existing single-fidelity methods.
arXiv Detail & Related papers (2020-09-12T01:46:03Z) - Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave
Min-Max Problems with PL Condition [52.08417569774822]
This paper focuses on methods for solving smooth non-convex strongly-concave min-max problems, which have received increasing attention due to deep learning applications (e.g., deep AUC maximization).
arXiv Detail & Related papers (2020-06-12T00:32:21Z) - Adopting Robustness and Optimality in Fitting and Learning [4.511425319032815]
We generalize a modified exponentialized estimator by pushing the robust-optimal (RO) index $\lambda$ to $-\infty$.
The robustness is realized and controlled adaptively by RONIST.
arXiv Detail & Related papers (2015-10-13T19:14:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.