HoP: Homeomorphic Polar Learning for Hard Constrained Optimization
- URL: http://arxiv.org/abs/2502.00304v1
- Date: Sat, 01 Feb 2025 03:59:15 GMT
- Title: HoP: Homeomorphic Polar Learning for Hard Constrained Optimization
- Authors: Ke Deng, Hanwen Zhang, Jin Lu, Haijian Sun
- Abstract summary: Constrained optimization demands highly efficient solvers, which promotes the development of learn-to-optimize (L2O) approaches.
As a data-driven learning method, L2O leverages neural networks to efficiently produce approximate solutions.
In all cases, HoP achieves solutions closer to the optimum than existing L2O methods while strictly maintaining feasibility.
- Score: 3.8166443770130822
- License:
- Abstract: Constrained optimization demands highly efficient solvers, which promotes the development of learn-to-optimize (L2O) approaches. As a data-driven method, L2O leverages neural networks to efficiently produce approximate solutions. However, a significant challenge remains in ensuring both optimality and feasibility of neural networks' output. To tackle this issue, we introduce Homeomorphic Polar Learning (HoP) to solve star-convex hard-constrained optimization by embedding a homeomorphic mapping in neural networks. The bijective structure enables end-to-end training without extra penalty or correction. We evaluate HoP's performance across a variety of synthetic optimization tasks and real-world applications in wireless communications. In all cases, HoP achieves solutions closer to the optimum than existing L2O methods while strictly maintaining feasibility.
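To make the idea concrete, below is a minimal, hypothetical sketch of a polar-style feasibility mapping in PyTorch. It assumes an ellipsoidal (hence star-convex) feasible set whose boundary radius has a closed form; the names `PolarFeasibleNet` and `boundary_radius` are illustrative, and this is not the authors' implementation, which targets general star-convex constraints.

```python
# Hypothetical sketch of a polar-style feasibility mapping (not the authors' code).
# Assumption: the feasible set is the ellipsoid {x : (x - c)^T A (x - c) <= 1},
# which is star-convex w.r.t. its center c and has a closed-form boundary radius.
import torch
import torch.nn as nn

def boundary_radius(u, A):
    """Boundary radius of {y : y^T A y <= 1} along batched unit directions u."""
    return 1.0 / torch.sqrt(torch.einsum('bi,ij,bj->b', u, A, u))

class PolarFeasibleNet(nn.Module):
    """Maps problem parameters to a point that is feasible by construction."""
    def __init__(self, in_dim, x_dim):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                      nn.Linear(64, x_dim + 1))

    def forward(self, params, A, center):
        out = self.backbone(params)
        u = out[:, :-1]
        u = u / u.norm(dim=-1, keepdim=True).clamp_min(1e-8)   # unit direction
        t = torch.sigmoid(out[:, -1:])                         # radial fraction in (0, 1)
        r = boundary_radius(u, A).unsqueeze(-1)                # distance to the boundary
        return center + t * r * u                              # feasible by construction

# Training then minimizes the task objective directly, e.g.
#   loss = objective(net(params, A, center), params).mean()
# with no feasibility penalty or post-hoc projection step.
```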
Related papers
- Self-Supervised Learning of Iterative Solvers for Constrained Optimization [0.0]
We propose a learning-based iterative solver for constrained optimization.
It can obtain very fast and accurate solutions by customizing the solver to a specific parametric optimization problem.
A novel loss function based on the Karush-Kuhn-Tucker conditions of optimality is introduced, enabling fully self-supervised training of both neural networks.
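As a rough illustration of a KKT-based self-supervised loss, the sketch below penalizes the squared KKT residuals of a toy parametric QP; the problem, the shapes, and the `kkt_loss` name are assumptions rather than the paper's actual formulation.

```python
# Hedged sketch of a KKT-residual training loss on a toy problem (not the paper's).
# Toy parametric QP:  min_x 0.5*||x||^2 - p.x   s.t.  a.x - b <= 0,
# with batched tensors x, p, a of shape (B, d) and lam, b of shape (B, 1).
import torch

def kkt_loss(x, lam, p, a, b):
    """Sum of squared KKT residuals; zero iff (x, lam) is a KKT point."""
    g = (x * a).sum(-1, keepdim=True) - b              # constraint value g(x) <= 0
    stationarity = x - p + lam * a                     # grad f + lam * grad g
    return (stationarity.pow(2).sum(-1)
            + (lam * g).pow(2).sum(-1)                 # complementary slackness
            + torch.relu(g).pow(2).sum(-1)             # primal feasibility
            + torch.relu(-lam).pow(2).sum(-1)).mean()  # dual feasibility

# Networks predicting x and lam from (p, a, b) can both be trained on this loss
# without any labeled optimal solutions, i.e. fully self-supervised.
```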
arXiv Detail & Related papers (2024-09-12T14:17:23Z) - Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
It enables an end-to-end training scheme in which the dual objective serves as the loss function and solution estimates are driven toward primal feasibility, emulating a Dual Ascent method.
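A minimal sketch of the dual-prediction idea, assuming an equality-constrained QP whose inner Lagrangian minimization has a closed form; `dual_loss` and the plain (non-augmented) dual are illustrative simplifications of the paper's augmented Lagrangian scheme.

```python
# Hedged sketch of "predict duals, maximize the dual objective" on an
# equality-constrained QP:  min_x 0.5 x^T Q x + c^T x   s.t.  A x = b.
import torch

def dual_loss(lam, Q, c, A, b):
    """Negative dual objective; minimizing it performs dual ascent on q(lam)."""
    # Inner minimizer of the Lagrangian, available in closed form for a QP.
    x_star = torch.linalg.solve(Q, -(c + A.T @ lam))
    q = 0.5 * x_star @ Q @ x_star + c @ x_star + lam @ (A @ x_star - b)
    return -q  # gradient w.r.t. lam is -(A x_star - b): the primal residual

# A network mapping (Q, c, A, b) -> lam and trained on dual_loss emulates Dual
# Ascent: x_star(lam) is pushed toward primal feasibility as the dual value grows.
```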
arXiv Detail & Related papers (2024-03-06T04:43:22Z) - Analyzing and Enhancing the Backward-Pass Convergence of Unrolled
Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
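The sketch below illustrates that linear-system view on a toy fixed-point iteration: instead of backpropagating through the unrolled loop, the adjoint is obtained by solving (I - dT/dx)^T lam = dL/dx at the fixed point. The function names and the toy solver are assumptions, not the Folded Optimization implementation.

```python
# Hedged sketch: at a fixed point x* = T(x*, theta), the backward pass reduces
# to solving one linear system rather than differentiating through the loop.
import torch
from torch.autograd.functional import jacobian

def T(x, theta):
    """One step of a toy fixed-point solver (gradient step on 0.5*||x - theta||^2)."""
    return x - 0.1 * (x - theta)

def implicit_grad(x_star, theta, dloss_dx):
    """Returns d loss / d theta via the adjoint system (I - dT/dx)^T lam = dL/dx."""
    Jx = jacobian(lambda x: T(x, theta), x_star)        # dT/dx at the fixed point
    Jtheta = jacobian(lambda t: T(x_star, t), theta)    # dT/dtheta at the fixed point
    n = x_star.numel()
    lam = torch.linalg.solve((torch.eye(n) - Jx).T, dloss_dx)
    return Jtheta.T @ lam

# Example: for loss = sum(x*), dloss/dx = ones, and the result matches what an
# infinitely deep unrolled loop would give (here, simply a vector of ones).
theta = torch.randn(3)
x_star = theta.clone()                                  # fixed point of this toy T
g = implicit_grad(x_star, theta, torch.ones(3))
```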
arXiv Detail & Related papers (2023-12-28T23:15:18Z) - Reducing the Need for Backpropagation and Discovering Better Optima With
Explicit Optimizations of Neural Networks [4.807347156077897]
We propose a computationally efficient alternative for optimizing neural networks.
We derive an explicit solution to a simple feed-forward language model.
We show that the explicit solutions perform near-optimally in experiments.
arXiv Detail & Related papers (2023-11-13T17:38:07Z) - Toward Rapid, Optimal, and Feasible Power Dispatch through Generalized
Neural Mapping [0.0]
We propose LOOP-LC 2.0 as a learning-based approach for solving the power dispatch problem.
A notable advantage of the LOOP-LC 2.0 framework is its ability to ensure near-optimality and strict feasibility of solutions.
We demonstrate the effectiveness of the LOOP-LC 2.0 methodology in terms of training speed, computational time, optimality, and solution feasibility.
arXiv Detail & Related papers (2023-11-08T17:02:53Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - Approaching Globally Optimal Energy Efficiency in Interference Networks
via Machine Learning [22.926877147296594]
This work presents a machine learning approach to optimize the energy efficiency (EE) in a multi-cell wireless network.
Results show that the method achieves an EE close to the optimum obtained by branch-and-bound, at a fraction of the computation.
arXiv Detail & Related papers (2022-11-25T08:36:34Z) - Learning for Robust Combinatorial Optimization: Algorithm and
Application [26.990988571097827]
Learning to optimize (L2O) has emerged as a promising approach to solving optimization problems by exploiting the strong prediction power of neural networks.
In this paper, we propose a novel learning-based optimization framework, called LRCO, which quickly outputs a robust solution in the presence of uncertain context.
Our results highlight that LRCO can greatly reduce the worst-case cost and runtime, while having a very low complexity.
arXiv Detail & Related papers (2021-12-20T07:58:50Z) - The Hidden Convex Optimization Landscape of Two-Layer ReLU Neural
Networks: an Exact Characterization of the Optimal Solutions [51.60996023961886]
We prove that finding all globally optimal two-layer ReLU neural networks can be performed by solving a convex optimization program with cone constraints.
Our analysis is novel, characterizes all optimal solutions, and does not leverage duality-based analysis which was recently used to lift neural network training into convex spaces.
arXiv Detail & Related papers (2020-06-10T15:38:30Z) - Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly applying conventional optimization methods, and outperformed all state-of-the-art algorithms tested in our experiments.
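As a hedged sketch of the surrogate-in-the-loop idea (not the paper's algorithm), the code below alternates between refitting a DNN on evaluated designs, descending the surrogate to propose a new design, and verifying it with the expensive solver, which is replaced here by a cheap stand-in for FEM.

```python
# Hedged sketch of a surrogate-in-the-loop optimizer; the FEM solver is replaced
# by a cheap stand-in objective, and all names and hyper-parameters are illustrative.
import torch
import torch.nn as nn

def expensive_objective(x):                  # stand-in for an FEM evaluation
    return ((x - 0.3) ** 2).sum(-1, keepdim=True)

surrogate = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

X = torch.rand(16, 4)                        # initial random designs
Y = expensive_objective(X)

for it in range(20):                         # self-directed online loop
    for _ in range(200):                     # refit the surrogate on all data so far
        opt.zero_grad()
        ((surrogate(X) - Y) ** 2).mean().backward()
        opt.step()
    cand = X[Y.argmin()].clone().requires_grad_(True)   # start from the best design
    for _ in range(50):                      # descend the (cheap) surrogate
        g, = torch.autograd.grad(surrogate(cand).sum(), cand)
        cand = (cand - 0.05 * g).detach().requires_grad_(True)
    cand = cand.detach().clamp(0.0, 1.0)
    X = torch.cat([X, cand.unsqueeze(0)])    # verify with the expensive solver
    Y = torch.cat([Y, expensive_objective(cand.unsqueeze(0))])
```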
arXiv Detail & Related papers (2020-02-04T20:00:28Z) - Optimizing Wireless Systems Using Unsupervised and
Reinforced-Unsupervised Deep Learning [96.01176486957226]
Resource allocation and transceivers in wireless networks are usually designed by solving optimization problems.
In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems.
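A minimal sketch of the unsupervised framework on a toy power-control problem: the negative sum rate serves directly as the training loss, so no labeled optimal allocations are needed. The channel model, power normalization, and all names are illustrative assumptions, not the article's exact setup.

```python
# Hedged sketch: train a power-allocation network by maximizing the sum rate
# averaged over sampled channels, i.e. the (negative) utility is the loss itself.
import torch
import torch.nn as nn

K = 4                                         # number of transceiver pairs
net = nn.Sequential(nn.Linear(K * K, 64), nn.ReLU(), nn.Linear(64, K), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1000):
    H = torch.rand(128, K, K)                 # batch of channel gain matrices
    p = net(H.flatten(1))                     # per-link transmit powers in (0, 1)
    signal = torch.diagonal(H, dim1=1, dim2=2) * p
    interference = (H * p.unsqueeze(1)).sum(-1) - signal + 1.0   # plus noise power
    sum_rate = torch.log2(1.0 + signal / interference).sum(-1)
    loss = -sum_rate.mean()                   # maximize utility; no labels required
    opt.zero_grad()
    loss.backward()
    opt.step()
```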
arXiv Detail & Related papers (2020-01-03T11:01:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.