CPMLHO: Hyperparameter Tuning via Cutting Plane and Mixed-Level Optimization
- URL: http://arxiv.org/abs/2212.06150v1
- Date: Sun, 11 Dec 2022 07:46:19 GMT
- Title: CPMLHO: Hyperparameter Tuning via Cutting Plane and Mixed-Level Optimization
- Authors: Shuo Yang, Yang Jiao, Shaoyu Dou, Mana Zheng, Chen Zhu
- Abstract summary: CPMLHO is a new hyperparameter optimization method using the cutting plane method and a mixed-level objective function.
Compared to existing methods, our method can automatically update the hyperparameters in the training process.
- Score: 24.39326333982495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The hyperparameter optimization of a neural network can be expressed as a
bilevel optimization problem. The bilevel optimization is used to automatically
update the hyperparameters, and the hypergradient is an approximate gradient
computed through the best-response function. Finding the best-response function
is very time consuming. In this paper we propose CPMLHO, a new hyperparameter
optimization method that uses the cutting plane method and a mixed-level
objective function. Cutting planes are added to the inner level to constrain
the space of the response function. To obtain a more accurate hypergradient,
the mixed-level objective can flexibly adjust the loss function by combining
the losses on the training set and the validation set. The experimental results
show that, compared to existing methods, our method can automatically update
the hyperparameters during training and finds superior hyperparameters with
higher accuracy and faster convergence.
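For context, hyperparameter optimization is commonly written as the bilevel
problem below, with λ denoting the hyperparameters and w the network weights.
The hypergradient expression is the standard chain rule through the
best-response map; the mixed-level inner objective shown last is only an
illustrative reading of the abstract (a convex combination of training and
validation losses with an assumed mixing weight α), not necessarily the exact
construction used in CPMLHO.

  \min_{\lambda}\; \mathcal{L}_{\mathrm{val}}\bigl(w^{*}(\lambda), \lambda\bigr)
  \quad \text{s.t.} \quad
  w^{*}(\lambda) \in \arg\min_{w}\; \mathcal{L}_{\mathrm{train}}(w, \lambda)

  \frac{\mathrm{d}\mathcal{L}_{\mathrm{val}}}{\mathrm{d}\lambda}
    = \nabla_{\lambda}\mathcal{L}_{\mathrm{val}}
    + \Bigl(\tfrac{\partial w^{*}(\lambda)}{\partial \lambda}\Bigr)^{\top}
      \nabla_{w}\mathcal{L}_{\mathrm{val}}

  \mathcal{L}_{\mathrm{mix}}(w, \lambda)
    = (1-\alpha)\,\mathcal{L}_{\mathrm{train}}(w, \lambda)
    + \alpha\,\mathcal{L}_{\mathrm{val}}(w, \lambda),
    \qquad \alpha \in [0, 1]

In practice the best-response Jacobian ∂w*/∂λ is never formed exactly; it is
approximated, for example via the Implicit Function Theorem or a truncated run
of the inner optimizer, which is why the hypergradient is only approximate.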
Related papers
- Enhancing Hypergradients Estimation: A Study of Preconditioning and Reparameterization [49.73341101297818]
Bilevel optimization aims to optimize an outer objective function that depends on the solution to an inner optimization problem.
The conventional method to compute the so-called hypergradient of the outer problem is to use the Implicit Function Theorem (IFT).
We study the error of the IFT method and analyze two strategies to reduce this error. An illustrative sketch of the IFT-based hypergradient computation is given after this list.
arXiv Detail & Related papers (2024-02-26T17:09:18Z)
- Optimization using Parallel Gradient Evaluations on Multiple Parameters [51.64614793990665]
We propose a first-order method for convex optimization, where gradients from multiple parameters can be used during each step of gradient descent.
Our method uses gradients from multiple parameters in synergy to update these parameters together towards the optima.
arXiv Detail & Related papers (2023-02-06T23:39:13Z)
- A Globally Convergent Gradient-based Bilevel Hyperparameter Optimization Method [0.0]
We propose a gradient-based bilevel method for solving the hyperparameter optimization problem.
We show that the proposed method converges with lower computation and leads to models that generalize better on the testing set.
arXiv Detail & Related papers (2022-08-25T14:25:16Z)
- AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient Hyper-parameter Tuning [72.54359545547904]
We propose a gradient-based subset selection framework for hyper-parameter tuning.
We show that using gradient-based data subsets for hyper-parameter tuning achieves significantly faster turnaround times and speedups of 3×-30×.
arXiv Detail & Related papers (2022-03-15T19:25:01Z)
- Online Hyperparameter Meta-Learning with Hypergradient Distillation [59.973770725729636]
Gradient-based meta-learning methods assume a set of parameters that do not participate in the inner optimization.
We propose a novel HO method that can overcome these limitations, by approximating the second-order term with knowledge distillation.
arXiv Detail & Related papers (2021-10-06T05:14:53Z)
- Implicit differentiation for fast hyperparameter selection in non-smooth convex learning [87.60600646105696]
We study first-order methods when the inner optimization problem is convex but non-smooth.
We show that the forward-mode differentiation of proximal gradient descent and proximal coordinate descent yield sequences of Jacobians converging toward the exact Jacobian.
arXiv Detail & Related papers (2021-05-04T17:31:28Z)
- Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem.
Then, we use the average zeroth-order hyper-gradients to update the hyperparameters.
arXiv Detail & Related papers (2021-02-17T21:03:05Z)
- Efficient hyperparameter optimization by way of PAC-Bayes bound minimization [4.191847852775072]
We present an alternative objective that is equivalent to a Probably Approximately Correct-Bayes (PAC-Bayes) bound on the expected out-of-sample error.
We then devise an efficient gradient-based algorithm to minimize this objective.
arXiv Detail & Related papers (2020-08-14T15:54:51Z)
- A Gradient-based Bilevel Optimization Approach for Tuning Hyperparameters in Machine Learning [0.0]
We propose a bilevel solution method for solving the hyperparameter optimization problem.
The proposed method is general and can be easily applied to any class of machine learning algorithms.
We discuss the theory behind the proposed algorithm and perform extensive computational study on two datasets.
arXiv Detail & Related papers (2020-07-21T18:15:08Z)
- Automatic Setting of DNN Hyper-Parameters by Mixing Bayesian Optimization and Tuning Rules [0.6875312133832078]
We build a new algorithm for evaluating and analyzing the results of the network on the training and validation sets.
We use a set of tuning rules to add new hyper-parameters and/or to reduce the hyper-parameter search space to select a better combination.
arXiv Detail & Related papers (2020-06-03T08:53:48Z)
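As a companion to the IFT entry above, here is a minimal sketch of IFT-based
hypergradient descent on a single regularization hyperparameter, using a toy
ridge-regression problem whose inner solution is available in closed form. The
data, variable names, and step sizes are assumptions made purely for
illustration; this is not the CPMLHO algorithm or the code of any paper listed
here.

# Minimal IFT hypergradient sketch (illustrative only, not CPMLHO).
import numpy as np

rng = np.random.default_rng(0)
w_true = rng.normal(size=10)
X_tr = rng.normal(size=(80, 10))
y_tr = X_tr @ w_true + 0.5 * rng.normal(size=80)
X_val = rng.normal(size=(40, 10))
y_val = X_val @ w_true + 0.5 * rng.normal(size=40)

def inner_solution(lam):
    """Best response w*(lam) = argmin_w ||X_tr w - y_tr||^2 + lam ||w||^2."""
    A = X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1])
    return np.linalg.solve(A, X_tr.T @ y_tr), A

def ift_hypergradient(lam):
    """dL_val/dlam via the Implicit Function Theorem.

    At the inner optimum grad_w L_tr(w*, lam) = 0, so
    dw*/dlam = -H^{-1} d(grad_w L_tr)/dlam with H = 2(X_tr'X_tr + lam I)
    and d(grad_w L_tr)/dlam = 2 w*; the factors of 2 cancel.
    """
    w_star, A = inner_solution(lam)
    grad_w_val = 2.0 * X_val.T @ (X_val @ w_star - y_val)  # outer gradient in w
    dw_dlam = -np.linalg.solve(A, w_star)                  # implicit derivative
    return grad_w_val @ dw_dlam

# Hyperparameter descent in log-space so that lam stays positive.
log_lam = np.log(1.0)
for _ in range(100):
    lam = float(np.exp(log_lam))
    log_lam -= 0.1 * ift_hypergradient(lam) * lam  # chain rule for log(lam)

w_final, _ = inner_solution(float(np.exp(log_lam)))
print("tuned lambda:", float(np.exp(log_lam)))
print("validation loss:", float(np.sum((X_val @ w_final - y_val) ** 2)))

For neural networks the exact linear solve above is infeasible, and methods such
as those summarized in this list replace it with Neumann-series, conjugate-gradient,
truncated-unrolling, or zeroth-order approximations.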