Implicit differentiation for fast hyperparameter selection in non-smooth
convex learning
- URL: http://arxiv.org/abs/2105.01637v1
- Date: Tue, 4 May 2021 17:31:28 GMT
- Title: Implicit differentiation for fast hyperparameter selection in non-smooth
convex learning
- Authors: Quentin Bertrand, Quentin Klopfenstein, Mathurin Massias, Mathieu
Blondel, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon
- Abstract summary: We study first-order methods when the inner optimization problem is convex but non-smooth.
We show that the forward-mode differentiation of proximal gradient descent and proximal coordinate descent yield sequences of Jacobians converging toward the exact Jacobian.
- Score: 87.60600646105696
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Finding the optimal hyperparameters of a model can be cast as a bilevel
optimization problem, typically solved using zero-order techniques. In this
work we study first-order methods when the inner optimization problem is convex
but non-smooth. We show that the forward-mode differentiation of proximal
gradient descent and proximal coordinate descent yield sequences of Jacobians
converging toward the exact Jacobian. Using implicit differentiation, we show
it is possible to leverage the non-smoothness of the inner problem to speed up
the computation. Finally, we provide a bound on the error made on the
hypergradient when the inner optimization problem is solved approximately.
Results on regression and classification problems reveal computational benefits
for hyperparameter optimization, especially when multiple hyperparameters are
required.
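To make the forward-mode idea concrete, the sketch below differentiates ISTA (proximal gradient descent) iterations on a Lasso inner problem with respect to the regularization parameter, propagating the Jacobian d beta / d lambda alongside the coefficients, then contracts it with the gradient of a held-out least-squares criterion to obtain a hypergradient. This is a minimal illustration under those assumptions (synthetic data, a single hyperparameter, illustrative function names), not the authors' implementation.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista_with_forward_jacobian(X, y, lam, n_iter=1000):
    """Proximal gradient descent (ISTA) on the Lasso
    min_b ||X b - y||^2 / (2 n) + lam * ||b||_1,
    propagating the Jacobian d(beta)/d(lam) in forward mode."""
    n, p = X.shape
    step = n / np.linalg.norm(X, ord=2) ** 2  # 1 / Lipschitz constant of the smooth part
    beta, jac = np.zeros(p), np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta_new = soft_threshold(beta - step * grad, step * lam)
        # Soft-thresholding is locally affine on the support: d z / d lam passes
        # through, and -step * sign(beta) captures the explicit dependence on
        # lam; off the support the derivative is zero.
        dz = jac - step * (X.T @ (X @ jac)) / n
        jac = np.where(beta_new != 0, dz - step * np.sign(beta_new), 0.0)
        beta = beta_new
    return beta, jac

# Hypergradient of a held-out criterion C(lam) = ||X_val beta(lam) - y_val||^2 / (2 n_val)
rng = np.random.default_rng(0)
X, X_val = rng.standard_normal((50, 30)), rng.standard_normal((20, 30))
coef = np.zeros(30)
coef[:3] = 1.0
y = X @ coef + 0.1 * rng.standard_normal(50)
y_val = X_val @ coef
beta, jac = ista_with_forward_jacobian(X, y, lam=0.1)
hypergrad = (X_val.T @ (X_val @ beta - y_val) / X_val.shape[0]) @ jac
print(hypergrad)
```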
Related papers
- Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity [50.25258834153574]
We focus on the class of (strongly) convex $(L_0,L_1)$-smooth functions and derive new convergence guarantees for several existing methods.
In particular, we derive improved convergence rates for Gradient Descent with Smoothed Gradient Clipping and for Gradient Descent with Polyak Stepsizes.
arXiv Detail & Related papers (2024-09-23T13:11:37Z) - Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm only has access to noisy evaluations of the objective function it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z) - A simple uniformly optimal method without line search for convex optimization [9.280355951055865]
We show that line search is superfluous in attaining the optimal rate of convergence for solving a convex optimization problem whose parameters are not given a priori.
We present a novel accelerated gradient descent type algorithm called AC-FGM that can achieve an optimal $\mathcal{O}(1/k^2)$ rate of convergence for smooth convex optimization.
arXiv Detail & Related papers (2023-10-16T05:26:03Z) - Analyzing Inexact Hypergradients for Bilevel Learning [0.09669369645900441]
We introduce a unified framework for computing hypergradients that generalizes existing methods based on the implicit function theorem and automatic differentiation/backpropagation.
Our numerical results show that, surprisingly, for efficient bilevel optimization, the choice of hypergradient algorithm is at least as important as the choice of lower-level solver.
arXiv Detail & Related papers (2023-01-11T23:54:27Z) - Alternating Differentiation for Optimization Layers [133.2668019610731]
We develop a new framework, named Alternating Differentiation (Alt-Diff), that differentiates optimization problems.
We show that Alt-Diff substantially decreases the dimensions of the Jacobian matrix especially for optimization with large-scale constraints.
We also propose to truncate Alt-Diff to further accelerate the computational speed.
arXiv Detail & Related papers (2022-10-03T11:32:13Z) - The Curse of Unrolling: Rate of Differentiating Through Optimization [35.327233435055305]
Unrolled differentiation approximates the solution using an iterative solver and differentiates it through the computational path.
We show that we can either 1) choose a large learning rate leading to fast convergence but accept that the algorithm may have an arbitrarily long burn-in phase, or 2) choose a smaller learning rate leading to immediate but slower convergence.
arXiv Detail & Related papers (2022-09-27T09:27:29Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Adaptive First-and Zeroth-order Methods for Weakly Convex Stochastic
Optimization Problems [12.010310883787911]
We analyze a new family of adaptive subgradient methods for solving an important class of weakly convex (possibly nonsmooth) optimization problems.
Experimental results indicate how the proposed algorithms empirically outperform zeroth-order gradient descent and its design variant.
arXiv Detail & Related papers (2020-05-19T07:44:52Z) - Implicit differentiation of Lasso-type models for hyperparameter
optimization [82.73138686390514]
We introduce an efficient implicit differentiation algorithm, without matrix inversion, tailored for Lasso-type problems.
Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.
arXiv Detail & Related papers (2020-02-20T18:43:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.