Lagrangian Duality for Constrained Deep Learning
- URL: http://arxiv.org/abs/2001.09394v2
- Date: Mon, 6 Apr 2020 15:41:19 GMT
- Title: Lagrangian Duality for Constrained Deep Learning
- Authors: Ferdinando Fioretto, Pascal Van Hentenryck, Terrence WK Mak, Cuong
Tran, Federico Baldo, Michele Lombardi
- Abstract summary: This paper explores the potential of Lagrangian duality for learning applications that feature complex constraints.
In energy domains, the combination of Lagrangian duality and deep learning can be used to obtain state-of-the-art results.
In transprecision computing, Lagrangian duality can complement deep learning to impose monotonicity constraints on the predictor.
- Score: 51.2216183850835
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper explores the potential of Lagrangian duality for learning
applications that feature complex constraints. Such constraints arise in many
science and engineering domains, where the task amounts to learning
optimization problems which must be solved repeatedly and include hard physical
and operational constraints. The paper also considers applications where the
learning task must enforce constraints on the predictor itself, either because
they are natural properties of the function to learn or because it is desirable
from a societal standpoint to impose them. This paper demonstrates
experimentally that Lagrangian duality brings significant benefits for these
applications. In energy domains, the combination of Lagrangian duality and deep
learning can be used to obtain state-of-the-art results for predicting optimal
power flows in energy systems and optimal compressor settings in gas networks.
In transprecision computing, Lagrangian duality can complement deep
learning to impose monotonicity constraints on the predictor without
sacrificing accuracy. Finally, Lagrangian duality can be used to enforce
fairness constraints on a predictor and obtain state-of-the-art results when
minimizing disparate treatments.
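The core mechanism the abstract describes, augmenting the learning loss with Lagrange multipliers for violated constraints and updating those multipliers by dual ascent, can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the least-squares model, the single linear constraint, and all step sizes are assumptions chosen for clarity.

```python
# Minimal sketch of Lagrangian dual learning (illustrative assumptions,
# not the paper's setup): fit a least-squares model subject to a hard
# constraint w <= bound, alternating primal gradient descent on the
# Lagrangian with projected dual ascent on the multiplier.

def lagrangian_dual_fit(xs, ys, bound, lr=0.05, lr_dual=0.05, steps=3000):
    """Minimize mean squared error over w subject to w <= bound."""
    w, lam = 0.0, 0.0  # primal weight, Lagrange multiplier
    n = len(xs)
    for _ in range(steps):
        # Gradient of the Lagrangian L(w, lam) = MSE(w) + lam * (w - bound)
        grad_mse = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * (grad_mse + lam)                   # primal descent step
        lam = max(0.0, lam + lr_dual * (w - bound))  # dual ascent, lam >= 0
    return w, lam

# Unconstrained least squares on this data would give w = 2; the
# constraint w <= 1.5 is active, so training settles at the constraint
# boundary with a positive multiplier.
xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
w, lam = lagrangian_dual_fit(xs, ys, bound=1.5)
```

In the paper's applications the same pattern appears at scale: the primal step trains a deep network, and the dual step increases the multipliers of the physical, operational, or fairness constraints the current predictor violates.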
Related papers
- Near-Optimal Solutions of Constrained Learning Problems [85.48853063302764]
In machine learning systems, the need to curtail their behavior has become increasingly apparent.
This is evidenced by recent advancements towards developing models that satisfy dual robustness variables.
Our results show that rich parametrizations effectively mitigate non-convexity in these constrained learning problems.
arXiv Detail & Related papers (2024-03-18T14:55:45Z)
- Learning Lagrangian Multipliers for the Travelling Salesman Problem [12.968608204035611]
We propose an innovative unsupervised learning approach that harnesses the capabilities of graph neural networks to exploit the problem structure.
We apply this technique to the well-known Held-Karp Lagrangian relaxation for the travelling salesman problem.
In contrast to much of the existing literature, which primarily focuses on finding feasible solutions, our approach operates on the dual side, demonstrating that learning can also accelerate the proof of optimality.
arXiv Detail & Related papers (2023-12-22T17:09:34Z)
- Neural Fields with Hard Constraints of Arbitrary Differential Order [61.49418682745144]
We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
arXiv Detail & Related papers (2023-06-15T08:33:52Z)
- Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z)
- Neural Networks with Quantization Constraints [111.42313650830248]
We present a constrained learning approach to quantization training.
We show that the resulting problem is strongly dual and does away with gradient estimations.
We demonstrate that the proposed approach exhibits competitive performance in image classification tasks.
arXiv Detail & Related papers (2022-10-27T17:12:48Z)
- Functional Generalized Empirical Likelihood Estimation for Conditional Moment Restrictions [19.39005034948997]
We propose a new estimation method based on generalized empirical likelihood (GEL).
GEL provides a more general framework and has been shown to enjoy favorable small-sample properties compared to GMM-based estimators.
We provide kernel- and neural network-based implementations of the estimator, which achieve state-of-the-art empirical performance on two conditional moment restriction problems.
arXiv Detail & Related papers (2022-07-11T11:02:52Z)
- A Stochastic Composite Augmented Lagrangian Method For Reinforcement Learning [9.204659134755795]
We consider the linear programming (LP) formulation for deep reinforcement learning.
The augmented Lagrangian method suffers from the double-sampling obstacle when solving the LP.
A deep parameterized augmented Lagrangian method is proposed.
arXiv Detail & Related papers (2021-05-20T13:08:06Z)
- Constrained Learning with Non-Convex Losses [119.8736858597118]
Though learning has become a core technology of modern information processing, there is now ample evidence that it can lead to biased, unsafe, and prejudiced solutions.
arXiv Detail & Related papers (2021-03-08T23:10:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.