Learning Proximal Operators to Discover Multiple Optima
- URL: http://arxiv.org/abs/2201.11945v1
- Date: Fri, 28 Jan 2022 05:53:28 GMT
- Title: Learning Proximal Operators to Discover Multiple Optima
- Authors: Lingxiao Li, Noam Aigerman, Vladimir G. Kim, Jiajin Li, Kristjan
Greenewald, Mikhail Yurochkin, Justin Solomon
- Abstract summary: We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
- Score: 66.98045013486794
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Finding multiple solutions of non-convex optimization problems is a
ubiquitous yet challenging task. Typical existing solutions either apply
single-solution optimization methods from multiple random initial guesses or
search in the vicinity of found solutions using ad hoc heuristics. We present
an end-to-end method to learn the proximal operator across a family of
non-convex problems, which can then be used to recover multiple solutions for
unseen problems at test time. Our method only requires access to the objectives
without needing the supervision of ground truth solutions. Notably, the added
proximal regularization term elevates the convexity of our formulation: by
applying recent theoretical results, we show that for weakly convex objectives
and under mild regularity conditions, training of the proximal operator
converges globally in the over-parameterized setting. We further present a
benchmark for multi-solution optimization including a wide range of
applications and evaluate our method to demonstrate its effectiveness.
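For concreteness, the proximal operator of an objective $f$ with parameter $\lambda > 0$ is $\mathrm{prox}_{\lambda f}(x) = \arg\min_y f(y) + \tfrac{1}{2\lambda}\|y - x\|^2$. The sketch below illustrates the unsupervised training idea described in the abstract; the toy objective family, the network architecture, and the sampling scheme are our illustrative assumptions, not the authors' exact setup.

    import torch

    # Toy family of non-convex objectives f(y, c), indexed by problem
    # parameters c (an illustrative stand-in for a problem family).
    def f(y, c):
        return ((y - c) ** 2).sum(-1) + torch.sin(3.0 * y).sum(-1)

    lam = 0.5  # proximal parameter lambda
    # Network mapping a query point x and problem parameters c to an
    # approximation of prox_{lam * f(., c)}(x).
    net = torch.nn.Sequential(
        torch.nn.Linear(4, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2)
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(2000):
        x = torch.randn(128, 2)  # random query points
        c = torch.randn(128, 2)  # random problem instances
        y = net(torch.cat([x, c], dim=-1))
        # The proximal objective itself is the loss: only evaluations of f
        # are needed, no ground-truth solutions, matching the abstract.
        loss = (f(y, c) + ((y - x) ** 2).sum(-1) / (2 * lam)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

At test time, fixed points of the learned operator can be sought by iterating it, x_{t+1} = net(x_t, c), from many random initializations; distinct fixed points then correspond to distinct local solutions of the unseen instance.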
Related papers
- Learning Multiple Initial Solutions to Optimization Problems [52.9380464408756]
Sequentially solving similar optimization problems under strict runtime constraints is essential for many applications.
We propose learning to predict multiple diverse initial solutions given parameters that define the problem instance.
We find significant and consistent improvement with our method across all evaluation settings and demonstrate that it efficiently scales with the number of initial solutions required.
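A minimal sketch of one way such a predictor could be set up, assuming a winner-takes-all loss over $k$ predicted initializations and access to a reference solution per instance (e.g., from an offline solver); the loss and architecture are our guesses, not the paper's exact design.

    import torch

    k, param_dim, sol_dim = 8, 6, 3  # hypothetical sizes
    # Map problem-instance parameters to k candidate initial solutions.
    predictor = torch.nn.Sequential(
        torch.nn.Linear(param_dim, 128), torch.nn.ReLU(),
        torch.nn.Linear(128, k * sol_dim)
    )

    def winner_takes_all_loss(params, reference_solutions):
        # Penalize only the candidate closest to the reference, which
        # lets the remaining candidates spread out and stay diverse.
        cands = predictor(params).view(-1, k, sol_dim)
        dists = ((cands - reference_solutions.unsqueeze(1)) ** 2).sum(-1)
        return dists.min(dim=1).values.mean()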
arXiv Detail & Related papers (2024-11-04T15:17:19Z) - Self-Supervised Learning of Iterative Solvers for Constrained Optimization [0.0]
We propose a learning-based iterative solver for constrained optimization.
It can obtain very fast and accurate solutions by customizing the solver to a specific parametric optimization problem.
A novel loss function based on the Karush-Kuhn-Tucker (KKT) optimality conditions is introduced, enabling fully self-supervised training of the neural networks.
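As a sketch of how a KKT-based self-supervised loss can look, consider $\min_x f(x)$ subject to $g(x) \le 0$: a network predicts a primal-dual pair $(x, \lambda)$ and is penalized for violating stationarity, primal feasibility, and complementary slackness. This instantiation is our own illustration of the idea, not the paper's exact loss.

    import torch

    def kkt_loss(x, lam, f, g):
        """Penalize KKT violations for min f(x) s.t. g(x) <= 0, given a
        predicted primal x and duals lam (kept >= 0 by the caller)."""
        gx = g(x)
        # Stationarity: grad_x [ f(x) + lam^T g(x) ] = 0.
        lagrangian = f(x) + (lam * gx).sum()
        grad = torch.autograd.grad(lagrangian, x, create_graph=True)[0]
        stationarity = (grad ** 2).sum()
        feasibility = (gx.clamp(min=0) ** 2).sum()   # primal feasibility
        complementarity = ((lam * gx) ** 2).sum()    # complementary slackness
        return stationarity + feasibility + complementarity

    # Example: f(x) = ||x||^2 subject to x >= 1 elementwise.
    x = torch.nn.Parameter(torch.randn(3))
    lam = torch.nn.functional.softplus(torch.randn(3))  # dual feasibility
    loss = kkt_loss(x, lam, lambda v: (v ** 2).sum(), lambda v: 1.0 - v)
    loss.backward()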
arXiv Detail & Related papers (2024-09-12T14:17:23Z) - Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
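A minimal sketch of the joint idea as we read it (the architecture and the soft-constraint penalty are our assumptions): the predictive model maps observable features straight to a candidate solution, and the training loss is the optimization objective itself evaluated at that prediction, so no solver sits in the training loop.

    import torch

    feat_dim, sol_dim = 10, 4  # hypothetical sizes
    model = torch.nn.Sequential(
        torch.nn.Linear(feat_dim, 64), torch.nn.ReLU(),
        torch.nn.Linear(64, sol_dim)
    )

    def joint_loss(features, true_params, objective, constraint):
        # Train directly on solution quality; unlike two-stage
        # predict-then-optimize, there is no argmin to backpropagate through.
        x = model(features)
        penalty = constraint(x).clamp(min=0).pow(2).sum(-1)  # soft feasibility
        return (objective(x, true_params) + 100.0 * penalty).mean()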
arXiv Detail & Related papers (2024-09-07T19:52:14Z) - Smooth Tchebycheff Scalarization for Multi-Objective Optimization [15.047246588148495]
Multi-objective optimization problems arise in many real-world applications, where the objectives often conflict with each other and cannot be optimized by a single solution.
We propose a lightweight and efficient smooth Tchebycheff scalarization approach for gradient-based multi-objective optimization.
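A worked form of the idea as we understand it (notation may differ from the paper): the classical Tchebycheff scalarization $\max_i \lambda_i (f_i(x) - z_i^*)$ is non-smooth, and replacing the max with a log-sum-exp gives a smooth surrogate.

    % Smooth Tchebycheff scalarization: log-sum-exp smoothing (parameter mu > 0)
    % of the weighted max over m objectives f_i, weights lambda_i, ideal point z^*.
    g^{\mu}_{\mathrm{STCH}}(x)
      = \mu \log \sum_{i=1}^{m}
        \exp\!\Bigl( \frac{\lambda_i \bigl( f_i(x) - z_i^{*} \bigr)}{\mu} \Bigr)
      \;\longrightarrow\;
      \max_{1 \le i \le m} \lambda_i \bigl( f_i(x) - z_i^{*} \bigr)
      \quad \text{as } \mu \to 0.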
arXiv Detail & Related papers (2024-02-29T12:03:05Z) - Continuous Tensor Relaxation for Finding Diverse Solutions in Combinatorial Optimization Problems [0.6906005491572401]
This study introduces Continuous Tensor Relaxation Annealing (CTRA) for unsupervised-learning (UL)-based combinatorial optimization (CO) solvers.
CTRA is a computationally efficient framework for finding diverse solutions in a single training run.
Numerical experiments show that CTRA enables UL-based solvers to find these diverse solutions much faster than repeatedly running existing solvers.
arXiv Detail & Related papers (2024-02-03T15:31:05Z) - Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and
Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape
of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-CSP, Maximum-Weight Bipartite Matching, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla gradient descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Efficient Methods for Structured Nonconvex-Nonconcave Min-Max
Optimization [98.0595480384208]
We propose a generalization of the extragradient method that converges to a stationary point.
The algorithm applies not only to Euclidean spaces but also to general $\ell_p$-normed vector spaces.
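For background, this is the classical Euclidean extragradient update that such methods generalize, for $\min_x \max_y f(x, y)$ with $z = (x, y)$, operator $F(z) = (\nabla_x f, -\nabla_y f)$, and step size $\eta$ (stated as standard background, not the paper's exact scheme):

    z_{k+1/2} = z_k - \eta F(z_k)         % extrapolation ("look-ahead") step
    z_{k+1}   = z_k - \eta F(z_{k+1/2})   % update using the look-ahead gradient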
arXiv Detail & Related papers (2020-10-31T21:35:42Z) - Extracting Optimal Solution Manifolds using Constrained Neural
Optimization [6.800113407368289]
Constrained-optimization solution algorithms are typically restricted to point-based solutions.
We present an approach for extracting optimal solution sets as approximate manifolds, where unmodified, non-convex objectives and constraints are defined as modular networks.
arXiv Detail & Related papers (2020-09-13T15:37:44Z) - Learning the Solution Manifold in Optimization and Its Application in
Motion Planning [4.177892889752434]
We learn a model conditioned on a latent variable such that the model represents an infinite set of solutions.
In our framework, we reduce the problem to density estimation by using importance sampling.
We apply the method to motion-planning problems, which involve the optimization of high-dimensional parameters.
arXiv Detail & Related papers (2020-07-24T08:05:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.