Adaptive multi-gradient methods for quasiconvex vector optimization and
applications to multi-task learning
- URL: http://arxiv.org/abs/2402.06224v1
- Date: Fri, 9 Feb 2024 07:20:14 GMT
- Title: Adaptive multi-gradient methods for quasiconvex vector optimization and
applications to multi-task learning
- Authors: Nguyen Anh Minh and Le Dung Muu and Tran Ngoc Thang
- Abstract summary: We present an adaptive step-size method, which does not include line-search techniques, for solving a wide class of nonconvex multiobjective programming problems.
We prove convergence of a general approach on an unbounded constraint set under modest assumptions.
We apply the proposed technique to some multi-task learning experiments to show its efficacy for large-scale challenges.
- Score: 1.03590082373586
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an adaptive step-size method, free of line-search
techniques, for solving a wide class of nonconvex multiobjective programming
problems on an unbounded constraint set. We also prove convergence of a general
approach under modest assumptions; in particular, the objective function need
not satisfy any convexity criterion. Unlike descent line-search algorithms, the
method does not require an initial step size chosen from a previously known
Lipschitz constant. Its primary characteristic is a gradual step-size reduction
until a predetermined condition is met. As a concrete instance, it yields an
innovative multi-gradient projection method for optimization problems with
unbounded constraint sets. Preliminary findings from a few computational
examples confirm the accuracy of the strategy. We apply the proposed technique
to some multi-task learning experiments to show its efficacy for large-scale
challenges.
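To make the description above concrete, here is a minimal sketch (in Python/NumPy) of one possible multi-gradient projection iteration with a line-search-free, self-adaptive step size. The nonnegative-orthant constraint set, the two-objective min-norm common descent direction, and the max-objective test that triggers the step-size reduction are assumptions made for this sketch; the paper's exact update rule and its "predetermined condition" may differ.

# Illustrative sketch only: the constraint set, descent direction, and
# step-size reduction test are assumptions, not the authors' exact algorithm.
import numpy as np

def min_norm_direction(g1, g2):
    # Minimum-norm element of conv{g1, g2} (closed form for two objectives).
    diff = g1 - g2
    denom = float(diff @ diff)
    lam = 0.5 if denom == 0.0 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return lam * g1 + (1.0 - lam) * g2

def project_nonneg(x):
    # Euclidean projection onto the (unbounded) nonnegative orthant.
    return np.maximum(x, 0.0)

def adaptive_multi_gradient(fs, grads, x0, step=1.0, shrink=0.5,
                            tol=1e-8, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = min_norm_direction(grads[0](x), grads[1](x))
        if d @ d <= tol:          # approximate Pareto stationarity
            break
        x_next = project_nonneg(x - step * d)
        # Line-search-free adaptation (assumed condition): keep the step while
        # the worst objective decreases; otherwise shrink it for the next step.
        if max(f(x_next) for f in fs) >= max(f(x) for f in fs):
            step *= shrink
        x = x_next
    return x

# Toy bi-objective problem: f1(x) = ||x - (1, 0)||^2, f2(x) = ||x - (0, 1)||^2,
# minimized over the nonnegative orthant.
f1 = lambda x: float((x - np.array([1.0, 0.0])) @ (x - np.array([1.0, 0.0])))
f2 = lambda x: float((x - np.array([0.0, 1.0])) @ (x - np.array([0.0, 1.0])))
g1 = lambda x: 2.0 * (x - np.array([1.0, 0.0]))
g2 = lambda x: 2.0 * (x - np.array([0.0, 1.0]))
print(adaptive_multi_gradient([f1, f2], [g1, g2], [3.0, 3.0]))  # a Pareto point

Note that the iterate always moves and the objectives are never re-evaluated at a fixed point to tune the step, so no line search is performed; the step size only shrinks gradually whenever the assumed condition fails, mirroring the behavior described in the abstract.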
Related papers
- Scalable Bayesian Meta-Learning through Generalized Implicit Gradients [64.21628447579772]
The implicit Bayesian meta-learning (iBaML) method not only broadens the scope of learnable priors but also quantifies the associated uncertainty.
Analytical error bounds are established to demonstrate the precision and efficiency of the generalized implicit gradient over the explicit one.
arXiv Detail & Related papers (2023-03-31T02:10:30Z) - Infeasible Deterministic, Stochastic, and Variance-Reduction Algorithms for Optimization under Orthogonality Constraints [9.301728976515255]
This article provides new practical and theoretical developments for the landing algorithm.
First, the method is extended to the Stiefel manifold.
We also consider variance reduction algorithms when the cost function is an average of many functions.
arXiv Detail & Related papers (2023-03-29T07:36:54Z) - Self-adaptive algorithms for quasiconvex programming and applications to
machine learning [0.0]
We provide a self-adaptive step-size strategy that does not include line-search techniques, together with a generic convergence result under mild assumptions.
The proposed method is verified by preliminary results from some computational examples.
To demonstrate the effectiveness of the proposed technique for large-scale problems, we apply it to some experiments on machine learning.
arXiv Detail & Related papers (2022-12-13T05:30:29Z) - An Adaptive Incremental Gradient Method With Support for Non-Euclidean
Norms [19.41328109094503]
We propose and analyze several novel adaptive variants of the popular SAGA algorithm.
We establish their convergence guarantees under general settings.
We improve the analysis of SAGA to support non-Euclidean norms.
arXiv Detail & Related papers (2022-04-28T09:43:07Z) - Faster One-Sample Stochastic Conditional Gradient Method for Composite
Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - On Constraints in First-Order Optimization: A View from Non-Smooth
Dynamical Systems [99.59934203759754]
We introduce a class of first-order methods for smooth constrained optimization.
A distinctive feature of our approach is that projections or optimizations over the entire feasible set are avoided.
The resulting algorithmic procedure is simple to implement even when constraints are nonlinear.
arXiv Detail & Related papers (2021-07-17T11:45:13Z) - Unified Convergence Analysis for Adaptive Optimization with Moving Average Estimator [75.05106948314956]
We show that an increasingly large momentum parameter for the first-order moment is sufficient for adaptive scaling.
We also give insights into increasing the momentum in a stagewise manner, in accordance with a stagewise decreasing step size.
arXiv Detail & Related papers (2021-04-30T08:50:24Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of any of the information provided and is not responsible for any consequences of its use.