Extracting Optimal Solution Manifolds using Constrained Neural
Optimization
- URL: http://arxiv.org/abs/2009.06024v4
- Date: Sun, 3 Jan 2021 18:46:54 GMT
- Title: Extracting Optimal Solution Manifolds using Constrained Neural
Optimization
- Authors: Gurpreet Singh, Soumyajit Gupta, Matthew Lease
- Abstract summary: Constrained optimization solution algorithms are restricted to point-based solutions.
We present an approach for extracting optimal sets as approximate manifolds, where unmodified, non-convex objectives and constraints are defined as a domain-informed $L_2$ loss function.
- Score: 6.800113407368289
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Constrained Optimization solution algorithms are restricted to
point-based solutions. In practice, single or multiple objectives must be satisfied,
wherein both the objective function and constraints can be non-convex resulting
in multiple optimal solutions. Real world scenarios include intersecting
surfaces as Implicit Functions, Hyperspectral Unmixing and Pareto Optimal
fronts. Local or global convexification is a common workaround when faced with
non-convex forms. However, such an approach is often restricted to a strict
class of functions, deviation from which results in sub-optimal solution to the
original problem. We present neural solutions for extracting optimal sets as
approximate manifolds, where unmodified, non-convex objectives and constraints
are defined as a modeler-guided, domain-informed $L_2$ loss function. This
promotes interpretability since modelers can confirm the results against known
analytical forms in their specific domains. We present synthetic and realistic
cases to validate our approach and compare against known solvers for
benchmarking in terms of accuracy and computational efficiency.
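The core idea can be illustrated with a minimal sketch (not the authors' architecture; the constraint, step size, and particle-cloud parameterization are illustrative assumptions): encode a non-convex constraint as an $L_2$ penalty loss and descend from many random starts, so that the surviving points approximate the optimal set rather than a single point solution.

```python
import numpy as np

# Minimal sketch, not the paper's neural model: the non-convex constraint
# g(x, y) = x^2 + y^2 - 1 = 0 is encoded as an L2 (squared) penalty and
# minimized by gradient descent from many random starts. The surviving
# cloud of points approximates the optimal *set* (here, the unit circle).
def penalty_grad(p):
    x, y = p[:, 0], p[:, 1]
    g = x**2 + y**2 - 1.0                    # constraint residual g(x, y)
    # gradient of g^2: (4*g*x, 4*g*y)
    return np.stack([4.0 * g * x, 4.0 * g * y], axis=1)

rng = np.random.default_rng(0)
pts = rng.uniform(-2.0, 2.0, size=(256, 2))  # random initializations
for _ in range(500):
    pts -= 0.05 * penalty_grad(pts)          # plain gradient descent

residual = np.abs(pts[:, 0]**2 + pts[:, 1]**2 - 1.0)
```

In the paper a neural network parameterizes the manifold; here the particle cloud stands in for that parameterization so the loss construction itself is visible.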
Related papers
- Projection-Free Variance Reduction Methods for Stochastic Constrained Multi-Level Compositional Optimization [34.628967272528044]
This paper investigates projection-free algorithms for constrained multi-level compositional optimization problems.
By using a stage-wise adaptation, we obtain complexity bounds for convex and strongly convex functions.
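The projection-free template this line of work builds on can be sketched with the classical Frank-Wolfe method (a hedged illustration, not the paper's multi-level stochastic algorithm; the toy objective and step size are assumptions): constraints are handled by a linear minimization oracle instead of projections.

```python
import numpy as np

# Hedged sketch of the projection-free (Frank-Wolfe) template, not the
# paper's multi-level stochastic algorithm. Constraints are handled by a
# linear minimization oracle (LMO) instead of projections.
# Toy problem: minimize f(x) = ||x - t||^2 over the probability simplex.
t = np.array([0.2, 0.5, 0.3])            # target (lies in the simplex)
x = np.array([1.0, 0.0, 0.0])            # feasible start: a vertex
for k in range(1, 2001):
    grad = 2.0 * (x - t)                 # gradient of f at x
    s = np.zeros_like(x)
    s[np.argmin(grad)] = 1.0             # LMO over the simplex: best vertex
    x += (2.0 / (k + 2)) * (s - x)       # classic step size 2/(k+2)
```

Iterates stay feasible by construction, since each update is a convex combination of simplex vertices; no projection is ever computed.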
arXiv Detail & Related papers (2024-06-06T06:56:56Z) - Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
It enables an end-to-end training scheme in which the dual objective serves as a loss function, driving solution estimates toward primal feasibility and emulating a Dual Ascent method.
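For reference, the classical Dual Ascent loop the learned model emulates can be sketched on a toy equality-constrained quadratic program (the QP data and step size below are illustrative assumptions, not from the paper):

```python
import numpy as np

# Hedged sketch of classical Dual Ascent on a toy equality-constrained QP:
#   min 0.5 x'Qx + c'x   s.t.   Ax = b
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

lam = np.zeros(1)                        # dual variable (Lagrange multiplier)
for _ in range(200):
    # primal step: x minimizes the Lagrangian for the current lam
    x = np.linalg.solve(Q, -(c + A.T @ lam))
    # dual ascent step: move lam along the constraint violation Ax - b
    lam += 0.5 * (A @ x - b)
```

The paper's ML model predicts the dual estimates directly instead of iterating this loop to convergence.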
arXiv Detail & Related papers (2024-03-06T04:43:22Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape
of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-CSP, Maximum-Weight-Bipartite-Matching, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
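As a sanity check for the object being learned (standard textbook material, not the paper's learned model): the proximal operator of the $L_1$ norm has the closed-form soft-thresholding solution, which is handy for validating any learned surrogate.

```python
import numpy as np

# prox_{lam * ||.||_1}(v) = argmin_x ||x||_1 + (1 / (2 * lam)) ||x - v||^2
# has the closed-form "soft-thresholding" solution below -- a classical
# reference point, not the paper's learned operator.
def prox_l1(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

v = np.array([3.0, -0.2, 0.5])
p = prox_l1(v, 0.5)   # large entry shrunk by lam; small entries zeroed
```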
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Faster Algorithm and Sharper Analysis for Constrained Markov Decision
Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new primal-dual approach is proposed with a novel integration of three ingredients: a regularized policy optimizer, a dual variable regularizer, and Nesterov's accelerated gradient descent dual optimizer.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z) - Goal Seeking Quadratic Unconstrained Binary Optimization [0.5439020425819]
We present two variants of goal-seeking QUBO that minimize the deviation from the goal through a tabu-search-based greedy one-flip heuristic.
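A hedged sketch of such a heuristic (the QUBO instance, tabu tenure, and iteration budget are illustrative assumptions, not the paper's variants): repeatedly flip the single bit that most reduces the deviation $|x^\top Q x - \text{goal}|$, forbidding recently flipped bits for a few iterations.

```python
import numpy as np

def qubo_value(Q, x):
    # QUBO objective x' Q x for a 0/1 vector x
    return float(x @ Q @ x)

def goal_seeking_one_flip(Q, goal, iters=200, tabu_tenure=2, seed=0):
    # Greedy one-flip tabu search on |x'Qx - goal| (illustrative sketch).
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x = rng.integers(0, 2, n).astype(float)
    tabu = np.zeros(n, dtype=int)          # iteration until which bit i is tabu
    best_x = x.copy()
    best_dev = abs(qubo_value(Q, x) - goal)
    for t in range(iters):
        devs = np.full(n, np.inf)
        for i in range(n):
            if tabu[i] > t:                # recently flipped: skip
                continue
            x[i] = 1.0 - x[i]              # tentative one-flip
            devs[i] = abs(qubo_value(Q, x) - goal)
            x[i] = 1.0 - x[i]              # undo
        i = int(np.argmin(devs))           # greedy: best admissible flip
        x[i] = 1.0 - x[i]
        tabu[i] = t + tabu_tenure
        dev = abs(qubo_value(Q, x) - goal)
        if dev < best_dev:
            best_x, best_dev = x.copy(), dev
    return best_x, best_dev
```

Unlike standard QUBO minimization, the search target here is a prescribed goal value rather than the minimum of $x^\top Q x$.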
arXiv Detail & Related papers (2021-03-24T03:03:13Z) - A Hybrid 2-stage Neural Optimization for Pareto Front Extraction [3.918940900258555]
A major obstacle to extracting optimal trade-off solutions is that point-based solvers do not always converge to the full set of such solutions.
We propose a two-stage approach that is accurate and cost-effective.
arXiv Detail & Related papers (2021-01-27T20:56:19Z) - Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by recent increased interest in the analysis of optimization algorithms for non-convex optimization in deep networks and other problems in data science, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z) - Efficient Methods for Structured Nonconvex-Nonconcave Min-Max
Optimization [98.0595480384208]
We propose a generalization of the extragradient method which provably converges to a stationary point.
The algorithm applies not only to Euclidean spaces, but also to general $\ell_p$-normed finite-dimensional vector spaces.
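The classical extragradient scheme being generalized can be sketched on a bilinear toy problem (the objective, step size, and iteration count are illustrative assumptions): gradients are evaluated at an extrapolated point, which stabilizes saddle-point dynamics where plain simultaneous descent-ascent spirals outward.

```python
# Hedged sketch of the classical extragradient step, on the bilinear saddle
# problem min_x max_y f(x, y) = x * y, whose unique saddle point is (0, 0).
# Plain simultaneous gradient descent-ascent spirals away from it.
x, y, eta = 1.0, 1.0, 0.1
for _ in range(5000):
    # extrapolation step: look ahead along the current gradients
    xh = x - eta * y                 # df/dx = y
    yh = y + eta * x                 # df/dy = x
    # update step: use gradients evaluated at the extrapolated point
    x, y = x - eta * yh, y + eta * xh
```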
arXiv Detail & Related papers (2020-10-31T21:35:42Z) - Learning the Solution Manifold in Optimization and Its Application in
Motion Planning [4.177892889752434]
We learn a model of the solution manifold over the optimization variables such that the model represents an infinite set of solutions.
In our framework, the problem is reduced to density estimation using importance sampling.
We apply to motion-planning problems, which involve the optimization of high-dimensional parameters.
arXiv Detail & Related papers (2020-07-24T08:05:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.