Accelerating Certifiable Estimation with Preconditioned Eigensolvers
- URL: http://arxiv.org/abs/2207.05257v1
- Date: Tue, 12 Jul 2022 02:00:08 GMT
- Title: Accelerating Certifiable Estimation with Preconditioned Eigensolvers
- Authors: David M. Rosen
- Abstract summary: A dominant cost in many state-of-the-art (Burer-Monteiro factorization-based) estimation methods is solution verification.
We show how to significantly accelerate this verification step, and thereby the overall speed of certifiable estimation methods.
We show how to address this challenge using preconditioned eigensolvers.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Convex (specifically semidefinite) relaxation provides a powerful approach to
constructing robust machine perception systems, enabling the recovery of
certifiably globally optimal solutions of challenging estimation problems in
many practical settings. However, solving the large-scale semidefinite
relaxations underpinning this approach remains a formidable computational
challenge. A dominant cost in many state-of-the-art (Burer-Monteiro
factorization-based) certifiable estimation methods is solution verification
(testing the global optimality of a given candidate solution), which entails
computing a minimum eigenpair of a certain symmetric certificate matrix. In
this paper, we show how to significantly accelerate this verification step, and
thereby the overall speed of certifiable estimation methods. First, we show
that the certificate matrices arising in the Burer-Monteiro approach
generically possess spectra that make the verification problem expensive to
solve using standard iterative eigenvalue methods. We then show how to address
this challenge using preconditioned eigensolvers; specifically, we design a
specialized solution verification algorithm based upon the locally optimal
block preconditioned conjugate gradient (LOBPCG) method together with a simple
yet highly effective algebraic preconditioner. Experimental evaluation on a
variety of simulated and real-world examples shows that our proposed
verification scheme is very effective in practice, accelerating solution
verification by up to 280x, and the overall Burer-Monteiro method by up to 16x,
versus the standard Lanczos method when applied to relaxations derived from
large-scale SLAM benchmarks.
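As a rough illustration (not the paper's implementation), the verification step described above — estimating a minimum eigenpair of a symmetric certificate matrix S and certifying the candidate when the minimum eigenvalue is nonnegative up to tolerance — can be sketched with SciPy's LOBPCG. The shifted sparse-LU preconditioner below is a generic stand-in for the paper's specialized algebraic preconditioner, and the function and parameter names are hypothetical:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg, splu, LinearOperator

def verify_candidate(S, tol=1e-8, seed=0):
    """Estimate the minimum eigenpair of the symmetric certificate
    matrix S (sparse). The candidate solution is certified as globally
    optimal when the returned eigenvalue satisfies lam_min >= -tol."""
    n = S.shape[0]
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, 1))  # random initial block, size 1

    # Hypothetical preconditioner: sparse LU of a slightly shifted S,
    # applied as an approximate inverse (stand-in for the paper's
    # specialized algebraic preconditioner).
    shift = abs(S.diagonal()).max() * 1e-3 + 1e-12
    lu = splu((S + shift * sp.eye(n)).tocsc())
    M = LinearOperator((n, n), matvec=lu.solve)

    # largest=False steers LOBPCG toward the minimum eigenpair.
    lam, V = lobpcg(S, X, M=M, largest=False, tol=tol, maxiter=500)
    return lam[0], V[:, 0]

# Toy check: a positive-semidefinite diagonal "certificate matrix"
# (minimum eigenvalue 0), so the candidate should be certified.
S = sp.diags(np.linspace(0.0, 5.0, 50)).tocsr()
lam_min, v = verify_candidate(S)
certified = lam_min >= -1e-8
```

The preconditioner matters here because, as the abstract notes, Burer-Monteiro certificate matrices generically have spectra that slow unpreconditioned Krylov methods such as Lanczos; applying an approximate inverse of (a shifted) S compresses that spectrum so LOBPCG converges in far fewer iterations.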
Related papers
- PROMISE: Preconditioned Stochastic Optimization Methods by Incorporating Scalable Curvature Estimates [17.777466668123886]
We introduce PROMISE (Preconditioned Optimization Methods by Incorporating Scalable Curvature Estimates), a suite of sketching-based preconditioned gradient algorithms.
PROMISE includes preconditioned versions of SVRG, SAGA, and Katyusha.
arXiv Detail & Related papers (2023-09-05T07:49:10Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- No-Regret Constrained Bayesian Optimization of Noisy and Expensive Hybrid Models using Differentiable Quantile Function Approximations [0.0]
Constrained Upper Quantile Bound (CUQB) is a conceptually simple, deterministic approach that avoids constraint approximations.
We show that CUQB significantly outperforms traditional Bayesian optimization in both constrained and unconstrained cases.
arXiv Detail & Related papers (2023-05-05T19:57:36Z)
- Scalable Bayesian Meta-Learning through Generalized Implicit Gradients [64.21628447579772]
Implicit Bayesian meta-learning (iBaML) method broadens the scope of learnable priors, but also quantifies the associated uncertainty.
Analytical error bounds are established to demonstrate the precision and efficiency of the generalized implicit gradient over the explicit one.
arXiv Detail & Related papers (2023-03-31T02:10:30Z)
- A Simple and Scalable Tensor Completion Algorithm via Latent Invariant Constraint for Recommendation System [6.125831939376033]
We study a novel method to efficiently and accurately learn parameters of a model for the unobservable personal preferences that underly user ratings.
By regularizing the tensor decomposition with a single latent invariant, we achieve three properties for a reliable recommender system.
arXiv Detail & Related papers (2022-06-27T15:03:18Z)
- Certified Symmetry and Dominance Breaking for Combinatorial Optimisation [13.333957453318744]
We develop a certification method for optimisation problems in which symmetry and dominance breaking are easily expressible.
We apply our method to maximum clique solving and constraint programming as a proof of concept that the approach applies to a wider range of problems.
arXiv Detail & Related papers (2022-03-23T08:45:35Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of automatic primary response (APR) within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
- Density Fixing: Simple yet Effective Regularization Method based on the Class Prior [2.3859169601259347]
We propose a framework of regularization methods, called density-fixing, that can be used commonly for supervised and semi-supervised learning.
Our proposed regularization method improves the generalization performance by forcing the model to approximate the class's prior distribution or the frequency of occurrence.
arXiv Detail & Related papers (2020-07-08T04:58:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.