Meta-learning based Alternating Minimization Algorithm for Non-convex Optimization
- URL: http://arxiv.org/abs/2009.04899v7
- Date: Sun, 26 Jun 2022 13:30:39 GMT
- Title: Meta-learning based Alternating Minimization Algorithm for Non-convex Optimization
- Authors: Jingyuan Xia, Shengxi Li, Jun-Jie Huang, Imad Jaimoukha and Deniz Gunduz
- Abstract summary: We propose a novel solution for challenging non-convex problems of multiple variables.
Our proposed approach achieves effective optimization in challenging cases where other methods typically fail.
- Score: 9.774392581946108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel solution for non-convex problems of
multiple variables, especially for those typically solved by an alternating
minimization (AM) strategy that splits the original optimization problem into a
set of sub-problems corresponding to each variable and then iteratively
optimizes each sub-problem using a fixed updating rule. However, due to the
intrinsic non-convexity of the original optimization problem, the optimization
can usually be trapped in a spurious local minimum even when each sub-problem
can be optimally solved at each iteration. Meanwhile, learning-based
approaches, such as deep unfolding algorithms, are highly limited by the lack
of labelled data and restricted explainability. To tackle these issues, we
propose a meta-learning based alternating minimization (MLAM) method, which
aims to minimize a part of the global losses over iterations instead of
carrying out minimization on each sub-problem, and which learns an adaptive
strategy to replace the handcrafted counterpart, resulting in superior
performance. Meanwhile, the proposed MLAM still maintains the original
algorithmic principle, which contributes to better interpretability. We
evaluate the proposed method on two representative problems, namely a
bi-linear inverse problem (matrix completion) and a non-linear problem
(Gaussian mixture models). The experimental results validate that our proposed
approach outperforms AM-based methods in standard settings and is able to
achieve effective optimization in challenging cases where other competing
methods typically fail.
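
To make the AM baseline concrete, the following is a minimal sketch of classical alternating minimization for matrix completion (alternating least squares on a low-rank factorization), i.e., the fixed-update-rule strategy the abstract contrasts with. The function name, rank, ridge term, and iteration count are illustrative assumptions, not the authors' implementation.

import numpy as np

def am_matrix_completion(M, mask, rank=5, n_iters=50, ridge=1e-6, seed=0):
    """Classical alternating minimization (alternating least squares) for
    matrix completion: factorize M ~= U @ V.T on the observed entries
    (mask == True) by alternately solving the two convex least-squares
    sub-problems, each with a fixed, handcrafted update rule."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    reg = ridge * np.eye(rank)  # small ridge term keeps each solve well-posed
    for _ in range(n_iters):
        # Sub-problem 1: with V fixed, each row of U has a closed-form solution.
        for i in range(m):
            Vo = V[mask[i]]
            U[i] = np.linalg.solve(Vo.T @ Vo + reg, Vo.T @ M[i, mask[i]])
        # Sub-problem 2: with U fixed, each row of V is solved symmetrically.
        for j in range(n):
            Uo = U[mask[:, j]]
            V[j] = np.linalg.solve(Uo.T @ Uo + reg, Uo.T @ M[mask[:, j], j])
    return U, V

# Usage sketch: recover a random rank-3 matrix from ~50% observed entries.
rng = np.random.default_rng(1)
M_true = rng.standard_normal((40, 3)) @ rng.standard_normal((30, 3)).T
mask = rng.random(M_true.shape) < 0.5
U, V = am_matrix_completion(M_true, mask, rank=3, n_iters=100)
rel_err = np.linalg.norm(U @ V.T - M_true) / np.linalg.norm(M_true)

Each inner solve above is the fixed updating rule that can stall in a spurious local minimum even though every sub-problem is solved optimally; per the abstract, MLAM replaces this handcrafted rule with a meta-learned adaptive strategy driven by the global loss across iterations.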
Related papers
- From Inverse Optimization to Feasibility to ERM [11.731853838892487]
We study the contextual inverse optimization setting, which utilizes additional contextual information to better predict the unknown problem parameters.
We experimentally validate our approach on synthetic and real-world problems and demonstrate improved performance compared to existing methods.
arXiv Detail & Related papers (2024-02-27T21:06:42Z)
- Dynamic Incremental Optimization for Best Subset Selection [15.8362578568708]
Best subset selection is considered the gold standard for many learning problems.
An efficient subset-dual algorithm is developed based on the primal and dual problem structures.
arXiv Detail & Related papers (2024-02-04T02:26:40Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable components separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- A Constrained Optimization Approach to Bilevel Optimization with Multiple Inner Minima [49.320758794766185]
We propose a new approach that converts the bilevel problem into an equivalent constrained optimization problem, which can then be solved with a primal-dual algorithm.
This approach enjoys several advantages: (a) it addresses the multiple-inner-minima challenge; (b) it is fully first-order and efficient, requiring no Jacobian computations.
arXiv Detail & Related papers (2022-03-01T18:20:01Z)
- A novel multiobjective evolutionary algorithm based on decomposition and multi-reference points strategy [14.102326122777475]
Multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been regarded as a significantly promising approach for solving multiobjective optimization problems (MOPs).
We propose an improved MOEA/D algorithm by virtue of the well-known Pascoletti-Serafini scalarization method and a new strategy of multi-reference points.
arXiv Detail & Related papers (2021-10-27T02:07:08Z)
- On Constraints in First-Order Optimization: A View from Non-Smooth Dynamical Systems [99.59934203759754]
We introduce a class of first-order methods for smooth constrained optimization.
A distinctive feature of our approach is that projections or optimizations over the entire feasible set are avoided.
The resulting algorithmic procedure is simple to implement even when constraints are nonlinear.
arXiv Detail & Related papers (2021-07-17T11:45:13Z)
- Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by the recent increased interest in the analysis of optimization algorithms for non-convex optimization in deep networks and other data problems, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z)
- Follow the bisector: a simple method for multi-objective optimization [65.83318707752385]
We consider optimization problems in which multiple differentiable losses have to be minimized.
The presented method computes a descent direction in every iteration that guarantees an equal relative decrease of the objective functions.
arXiv Detail & Related papers (2020-07-14T09:50:33Z)
- Adaptive First- and Zeroth-order Methods for Weakly Convex Stochastic Optimization Problems [12.010310883787911]
We analyze a new family of adaptive subgradient methods for solving an important class of weakly convex (possibly nonsmooth) optimization problems.
Experimental results indicate that the proposed algorithms empirically outperform zeroth-order gradient descent and its design variants.
arXiv Detail & Related papers (2020-05-19T07:44:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.