Differentiable Projection-based Learn to Optimize in Wireless Network-Part I: Convex Constrained (Non-)Convex Programming
- URL: http://arxiv.org/abs/2502.00053v1
- Date: Wed, 29 Jan 2025 11:52:27 GMT
- Title: Differentiable Projection-based Learn to Optimize in Wireless Network-Part I: Convex Constrained (Non-)Convex Programming
- Authors: Xiucheng Wang, Xuan Zhao, Nan Cheng
- Abstract summary: This paper addresses a class of (non-)convex optimization problems subject to general convex constraints.
Traditional convex optimization methods often struggle to efficiently handle these problems in their most general form.
- Score: 15.689556794350674
- License:
- Abstract: This paper addresses a class of (non-)convex optimization problems subject to general convex constraints, which pose significant challenges for traditional methods due to their inherent non-convexity and diversity. Conventional convex optimization-based solvers often struggle to efficiently handle these problems in their most general form. While neural network (NN)-based approaches offer a promising alternative, ensuring the feasibility of NN-generated solutions and effectively training the NN remain key hurdles, largely because finite-capacity networks can produce infeasible outputs. To overcome these issues, we propose a projection-based method that projects any infeasible NN output onto the feasible domain, thus guaranteeing strict adherence to the constraints without compromising the NN's optimization capability. Furthermore, we derive the objective function values for both the raw NN outputs and their projected counterparts, along with the gradients of these values with respect to the NN parameters. This derivation enables label-free (unsupervised) training, reducing reliance on labeled data and improving scalability. Experimental results demonstrate that the proposed projection-based method consistently ensures feasibility.
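The projection idea in the abstract can be sketched in a few lines. The example below is a hypothetical, minimal illustration (the paper handles general convex constraints; here the feasible set is assumed to be a Euclidean ball, and a raw parameter vector stands in for the NN output): any infeasible point is projected onto the ball in closed form, and label-free training descends the objective evaluated at the projected point.

```python
import numpy as np

def project_onto_ball(x, radius=1.0):
    """Euclidean projection onto {x : ||x|| <= radius} (closed form)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

# Stand-in for a raw NN output; it violates the unit-ball constraint.
theta = np.array([3.0, 4.0])
target = np.array([2.0, 0.0])                      # assumed toy objective center
objective = lambda x: np.sum((x - target) ** 2)

# Label-free training: descend objective(projection(theta)) w.r.t. theta.
# The projection is piecewise smooth, so a central-difference gradient suffices here.
lr, eps = 0.1, 1e-6
for _ in range(200):
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta); e[i] = eps
        grad[i] = (objective(project_onto_ball(theta + e))
                   - objective(project_onto_ball(theta - e))) / (2 * eps)
    theta = theta - lr * grad

x_feasible = project_onto_ball(theta)
print(np.linalg.norm(x_feasible) <= 1.0 + 1e-9)    # the projected output is always feasible
```

Note the key property the abstract relies on: feasibility is guaranteed by construction for every training iterate, independent of how well the unconstrained parameters have converged.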
Related papers
- Towards graph neural networks for provably solving convex optimization problems [5.966097889241178]
We propose an iterative MPNN framework to solve convex optimization problems with provable feasibility guarantees.
Experimental results show that our approach outperforms existing neural baselines in solution quality and feasibility.
arXiv Detail & Related papers (2025-02-04T16:11:41Z) - Reliable Projection Based Unsupervised Learning for Semi-Definite QCQP with Application of Beamforming Optimization [11.385703484113552]
In this paper, we investigate a special class of quadratic (QCQP) with semi-definite constraints.
We propose a neural network (NN) as a promising method to obtain a high-performing, constraint-satisfying solution.
Unsupervised learning is used, so the NN can be trained effectively and efficiently without labels.
arXiv Detail & Related papers (2024-07-04T06:26:01Z) - Achieving Constraints in Neural Networks: A Stochastic Augmented
Lagrangian Approach [49.1574468325115]
Regularizing Deep Neural Networks (DNNs) is essential for improving generalizability and preventing overfitting.
We propose a novel approach to DNN regularization by framing the training process as a constrained optimization problem.
We employ the Stochastic Augmented Lagrangian (SAL) method to achieve a more flexible and efficient regularization mechanism.
arXiv Detail & Related papers (2023-10-25T13:55:35Z) - Self-supervised Equality Embedded Deep Lagrange Dual for Approximate Constrained Optimization [5.412875005737914]
We propose deep Lagrange dual with equality embedding (DeepLDE) as a fast optimal approximator incorporating neural networks (NNs).
We prove the convergence of DeepLDE and use a primal-dual learning method to impose inequality constraints.
We show that the proposed DeepLDE solutions achieve the smallest optimality gap among all the NN-based approaches.
arXiv Detail & Related papers (2023-06-11T13:19:37Z) - Power Control with QoS Guarantees: A Differentiable Projection-based
Unsupervised Learning Framework [14.518558523319518]
Deep neural networks (DNNs) are emerging as a potential solution to solve NP-hard wireless resource allocation problems.
We propose a novel unsupervised learning framework to solve the classical power control problem in a multi-user channel.
We show that the proposed solutions not only improve the data rate but also achieve zero constraint violation probability, compared to existing approaches.
arXiv Detail & Related papers (2023-05-31T14:11:51Z) - Symmetric Tensor Networks for Generative Modeling and Constrained
Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1) symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z) - Backward Reachability Analysis of Neural Feedback Loops: Techniques for
Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating BP over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural
Networks [52.32646357164739]
We propose a sensitivity-informed deep neural network (SIDNN) to approximate solutions of the AC optimal power flow (AC-OPF).
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated in other learning-to-OPF schemes.
arXiv Detail & Related papers (2021-03-27T00:45:23Z) - Offline Model-Based Optimization via Normalized Maximum Likelihood
Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z) - FISAR: Forward Invariant Safe Reinforcement Learning with a Deep Neural
Network-Based Optimizer [44.65622657676026]
We take constraints as Lyapunov functions and impose new linear constraints on the policy parameters' updating dynamics.
Because the new guaranteed-feasible constraints are imposed on the updating dynamics instead of the original policy parameters, classic optimization algorithms are no longer applicable.
arXiv Detail & Related papers (2020-06-19T21:58:42Z)
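The last entry's idea of imposing linear constraints on the update dynamics can be illustrated loosely as follows. This is a hypothetical sketch, not FISAR itself (which derives its constraints from Lyapunov functions): here a fixed half-space constraint is enforced on every gradient step, so the iterates remain feasible throughout optimization.

```python
import numpy as np

# Assumed linear constraint on the parameters: a @ x <= b.
a, b = np.array([1.0, 1.0]), 1.0

def project_halfspace(x, a, b):
    """Euclidean projection onto the half-space {x : a @ x <= b} (closed form)."""
    slack = a @ x - b
    return x if slack <= 0 else x - (slack / (a @ a)) * a

# Toy objective whose unconstrained minimizer (2, 2) violates the constraint.
grad_f = lambda x: 2 * (x - np.array([2.0, 2.0]))

x = np.array([0.0, 0.0])
for _ in range(100):
    # Each update is projected back into the feasible half-space,
    # so the constraint holds at every iterate (forward invariance).
    x = project_halfspace(x - 0.1 * grad_f(x), a, b)

print(a @ x <= b + 1e-9)   # the iterate is feasible; it settles at (0.5, 0.5)
```

The design point mirrored from the entry above: the constraint is enforced on the optimization dynamics themselves, rather than checked on the final solution.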
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.