PEPit: computer-assisted worst-case analyses of first-order optimization methods in Python
- URL: http://arxiv.org/abs/2201.04040v2
- Date: Mon, 17 Jun 2024 16:40:20 GMT
- Title: PEPit: computer-assisted worst-case analyses of first-order optimization methods in Python
- Authors: Baptiste Goujaud, Céline Moucer, François Glineur, Julien Hendrickx, Adrien Taylor, Aymeric Dieuleveut
- Abstract summary: PEPit is a Python package enabling computer-assisted worst-case analyses of first-order optimization methods.
The package takes care of the SDP modeling parts, and the worst-case analysis is performed numerically via a standard solver.
- Score: 8.603842959054939
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: PEPit is a Python package aiming at simplifying the access to worst-case analyses of a large family of first-order optimization methods possibly involving gradient, projection, proximal, or linear optimization oracles, along with their approximate, or Bregman variants. In short, PEPit is a package enabling computer-assisted worst-case analyses of first-order optimization methods. The key underlying idea is to cast the problem of performing a worst-case analysis, often referred to as a performance estimation problem (PEP), as a semidefinite program (SDP) which can be solved numerically. To do that, the package users are only required to write first-order methods nearly as they would have implemented them. The package then takes care of the SDP modeling parts, and the worst-case analysis is performed numerically via a standard solver.
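The workflow described in the abstract is easiest to see in code. Below is a minimal sketch of a PEPit analysis of gradient descent on a smooth strongly convex function, closely following the introductory example from the PEPit documentation; the exact parameter-passing convention (e.g., the `mu=`/`L=` keywords of `declare_function`) may differ slightly between PEPit versions.
```python
from PEPit import PEP
from PEPit.functions import SmoothStronglyConvexFunction

# Instantiate a performance estimation problem (PEP).
problem = PEP()

# Declare an L-smooth, mu-strongly convex objective.
func = problem.declare_function(SmoothStronglyConvexFunction, mu=0.1, L=1.0)

# xs is an optimal point (a stationary point of func).
xs = func.stationary_point()

# x0 is the starting point, constrained to lie within distance 1 of xs.
x0 = problem.set_initial_point()
problem.set_initial_condition((x0 - xs) ** 2 <= 1)

# Write the method nearly as one would implement it: gradient descent.
gamma = 1.0  # step size 1/L
x = x0
for _ in range(5):
    x = x - gamma * func.gradient(x)

# Worst-case metric: squared distance to the optimum after 5 iterations.
problem.set_performance_metric((x - xs) ** 2)

# PEPit casts the PEP as an SDP and calls a standard solver;
# tau is the resulting numerical worst-case guarantee.
tau = problem.solve()
print(tau)
```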
Related papers
- A practical, fast method for solving sum-of-squares problems for very large polynomials [10.645318208507213]
Sum of squares (SOS) optimization is a powerful technique for solving problems where the positivity of polynomials must be enforced.
Our goal is to devise an approach that can handle larger, more complex problems than is currently possible.
arXiv Detail & Related papers (2024-10-21T12:47:42Z) - Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z) - Pessimistic Off-Policy Multi-Objective Optimization [22.525654101072252]
We study offline optimization of multi-objective policies from data collected by an existing policy.
We propose a pessimistic estimator for the multi-objective policy values that can be easily plugged into existing formulas for hypervolume computation and optimized.
arXiv Detail & Related papers (2023-10-28T06:50:15Z) - Plug-and-Play split Gibbs sampler: embedding deep generative priors in Bayesian inference [12.91637880428221]
This paper introduces a plug-and-play sampling algorithm that leverages variable splitting to efficiently sample from a posterior distribution.
It divides the challenging task of posterior sampling into two simpler sampling problems.
Its performance is compared to recent state-of-the-art optimization and sampling methods.
arXiv Detail & Related papers (2023-04-21T17:17:51Z) - PyEPO: A PyTorch-based End-to-End Predict-then-Optimize Library for Linear and Integer Programming [9.764407462807588]
We present the PyEPO package, a PyTorch-based end-to-end predict-then-optimize library in Python.
PyEPO is the first such generic tool for linear and integer programming with predicted objective function coefficients.
arXiv Detail & Related papers (2022-06-28T18:33:55Z) - Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to stationarity yields an efficient algorithm for these tasks.
arXiv Detail & Related papers (2021-09-23T17:38:24Z) - COPS: Controlled Pruning Before Training Starts [68.8204255655161]
State-of-the-art deep neural network (DNN) pruning techniques, applied one-shot before training starts, evaluate sparse architectures with the help of a single criterion -- called pruning score.
In this work we do not concentrate on a single pruning criterion, but provide a framework for combining arbitrary GSSs to create more powerful pruning strategies.
arXiv Detail & Related papers (2021-07-27T08:48:01Z) - Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate the empirical advantages of these algorithms over standard gradient methods on two popular deep learning tasks.
arXiv Detail & Related papers (2020-06-24T06:01:24Z) - Approximation Algorithms for Sparse Principal Component Analysis [57.5357874512594]
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and statistics.
Various approaches to obtain sparse principal direction loadings have been proposed; these are collectively termed Sparse Principal Component Analysis (SPCA).
We present thresholding as a provably accurate, polynomial-time approximation algorithm for the SPCA problem.
arXiv Detail & Related papers (2020-06-23T04:25:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.