Best Principal Submatrix Selection for the Maximum Entropy Sampling
Problem: Scalable Algorithms and Performance Guarantees
- URL: http://arxiv.org/abs/2001.08537v3
- Date: Mon, 1 May 2023 13:21:13 GMT
- Title: Best Principal Submatrix Selection for the Maximum Entropy Sampling
Problem: Scalable Algorithms and Performance Guarantees
- Authors: Yongchun Li, Weijun Xie
- Abstract summary: MESP has been widely applied to many areas, including healthcare, power systems, manufacturing, and data science.
We derive a novel convex integer program for MESP and show that its continuous relaxation yields a near-optimal solution.
Our numerical experiments demonstrate that these approximation algorithms can efficiently solve medium-sized and large-scale instances to near-optimality.
- Score: 1.7640556247739623
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies a classic maximum entropy sampling problem (MESP), which
aims to select the most informative principal submatrix of a prespecified size
from a covariance matrix. MESP has been widely applied to many areas, including
healthcare, power systems, manufacturing, and data science. By investigating its
Lagrangian dual and primal characterization, we derive a novel convex integer
program for MESP and show that its continuous relaxation yields a near-optimal
solution. The results motivate us to study an efficient sampling algorithm and
develop its approximation bound for MESP, which improves the best-known bound
in the literature. We then provide an efficient deterministic implementation of the
sampling algorithm with the same approximation bound. By developing new
mathematical tools for singular matrices and analyzing the Lagrangian dual
of the proposed convex integer program, we investigate the widely-used local
search algorithm and prove its first-known approximation bound for MESP. The
proof techniques further inspire an efficient implementation of the
local search algorithm. Our numerical experiments demonstrate that these
approximation algorithms can efficiently solve medium-sized and large-scale
instances to near-optimality. Our proposed algorithms are coded and released as
open-source software. Finally, we extend the analyses to the A-Optimal MESP
(A-MESP), where the objective is to minimize the trace of the inverse of the
selected principal submatrix.
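
To make the objective concrete: MESP seeks an index set S of a prespecified size s that maximizes log det C[S, S], the entropy of the corresponding Gaussian marginal. The snippet below is a minimal, hypothetical sketch of a swap-based local search of the kind the paper analyzes, written in Python/NumPy; the function names, random initialization, and tolerance are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def log_det(C, S):
    """Entropy objective: log-determinant of the principal submatrix C[S, S]."""
    sign, val = np.linalg.slogdet(C[np.ix_(S, S)])
    return val if sign > 0 else -np.inf

def local_search_mesp(C, s, seed=0):
    """Swap-based local search for MESP (illustrative sketch only).

    Starts from a random size-s subset and repeatedly swaps one chosen index
    with one unchosen index whenever the swap increases log det C[S, S].
    """
    n = C.shape[0]
    rng = np.random.default_rng(seed)
    S = list(rng.choice(n, size=s, replace=False))
    improved = True
    while improved:
        improved = False
        best_val = log_det(C, S)
        for i in list(S):
            for j in range(n):
                if j in S:
                    continue
                cand = [j if k == i else k for k in S]
                val = log_det(C, cand)
                if val > best_val + 1e-12:
                    S, best_val, improved = cand, val, True
    return sorted(S), best_val

# Example on a random positive-definite covariance matrix
A = np.random.default_rng(1).standard_normal((30, 8))
C = A @ A.T + 0.1 * np.eye(30)
subset, entropy = local_search_mesp(C, s=5)
print(subset, entropy)
```

For the A-MESP variant mentioned at the end of the abstract, the same search skeleton applies with the objective replaced by minimizing np.trace(np.linalg.inv(C[np.ix_(S, S)])) over candidate subsets.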
Related papers
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC, and MAML.
This paper proposes algorithms for conditional stochastic optimization in the federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z)
- Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and
Optimal Algorithms [64.10576998630981]
We show the first tight characterization of the optimal Hessian-dependent sample complexity.
A Hessian-independent algorithm universally achieves the optimal sample complexities for all Hessian instances.
The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions.
arXiv Detail & Related papers (2023-06-21T17:03:22Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient
and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Optimizing Objective Functions from Trained ReLU Neural Networks via
Sampling [0.0]
This paper introduces scalable, sampling-based algorithms that optimize trained neural networks with ReLU activations.
We first propose an iterative algorithm that takes advantage of the piecewise linear structure of ReLU neural networks.
We then extend this approach by searching around the neighborhood of the LP solution computed at each iteration.
arXiv Detail & Related papers (2022-05-27T18:35:48Z)
- Variational Quantum Algorithms for Semidefinite Programming [3.481985817302898]
A semidefinite program (SDP) is a convex optimization problem with applications in operations research, optimization, quantum information science, and beyond.
We propose variational quantum algorithms for approximately solving SDPs.
arXiv Detail & Related papers (2021-12-16T13:10:48Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision
Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Multi-View Spectral Clustering with High-Order Optimal Neighborhood
Laplacian Matrix [57.11971786407279]
Multi-view spectral clustering can effectively reveal the intrinsic cluster structure among data.
This paper proposes a multi-view spectral clustering algorithm that learns a high-order optimal neighborhood Laplacian matrix.
Our proposed algorithm generates the optimal Laplacian matrix by searching the neighborhood of the linear combination of both the first-order and high-order base Laplacian matrices.
arXiv Detail & Related papers (2020-08-31T12:28:40Z)
- Exact and Approximation Algorithms for Sparse PCA [1.7640556247739623]
This paper proposes two exact mixed-integer SDPs (MISDPs).
We then analyze the theoretical optimality gaps of their continuous relaxation values and prove that they are stronger than that of the state-of-the-art one.
Since off-the-shelf solvers, in general, have difficulty in solving MISDPs, we approximate SPCA with arbitrary accuracy by a mixed-integer linear program (MILP) of a similar size as MISDPs.
arXiv Detail & Related papers (2020-08-28T02:07:08Z)
- A Riemannian Primal-dual Algorithm Based on Proximal Operator and its
Application in Metric Learning [3.511851311025242]
We propose a primal-dual algorithm to optimize the primal and dual variables iteratively.
We prove convergence of the proposed algorithm and show its non-asymptotic convergence rate.
Preliminary experimental results on an optimal fund selection problem in fund of funds management showed its efficacy.
arXiv Detail & Related papers (2020-05-19T03:31:01Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best known complexity for solving least-squares problems with no condition-number dependence (a generic illustration of the underlying random-embedding idea follows this list).
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
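
The last entry above credits randomized embeddings (Gaussian projections and subsampled Hadamard transforms) as the engine behind fast least-squares solvers. Below is a minimal, generic sketch-and-solve illustration with a Gaussian projection in Python/NumPy; it shows the idea of compressing a tall system before solving it and is an assumed textbook construction, not the algorithm proposed in that paper.

```python
import numpy as np

def sketch_and_solve_lstsq(A, b, sketch_rows, seed=0):
    """Approximate least squares via a Gaussian sketch (illustrative only).

    Projects the tall system (A, b) down to `sketch_rows` rows with a random
    Gaussian matrix, then solves the much smaller least-squares problem.
    """
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    S = rng.standard_normal((sketch_rows, m)) / np.sqrt(sketch_rows)
    x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x_sketch

# Example: tall, skinny least-squares problem
rng = np.random.default_rng(42)
A = rng.standard_normal((20000, 50))
b = rng.standard_normal(20000)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
x_approx = sketch_and_solve_lstsq(A, b, sketch_rows=500)
# Ratio of achieved residual to the optimal residual (close to 1 is good)
print(np.linalg.norm(A @ x_approx - b) / np.linalg.norm(A @ x_exact - b))
```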