AL-CoLe: Augmented Lagrangian for Constrained Learning
- URL: http://arxiv.org/abs/2510.20995v2
- Date: Tue, 28 Oct 2025 21:25:00 GMT
- Title: AL-CoLe: Augmented Lagrangian for Constrained Learning
- Authors: Ignacio Boero, Ignacio Hounie, Alejandro Ribeiro
- Abstract summary: Despite the non-convexity of most modern machine learning parameterizations, Lagrangian duality has become a popular tool for addressing constrained learning problems. We demonstrate its effectiveness on fairness constrained classification tasks.
- Score: 79.45233551350152
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the non-convexity of most modern machine learning parameterizations, Lagrangian duality has become a popular tool for addressing constrained learning problems. We revisit Augmented Lagrangian methods, which aim to mitigate the duality gap in non-convex settings while requiring only minimal modifications, and have remained comparably unexplored in constrained learning settings. We establish strong duality results under mild conditions, prove convergence of dual ascent algorithms to feasible and optimal primal solutions, and provide PAC-style generalization guarantees. Finally, we demonstrate its effectiveness on fairness constrained classification tasks.
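The dual ascent scheme the abstract refers to can be illustrated on a toy problem. The following is a minimal sketch of a classical augmented Lagrangian method with projected dual ascent, not the paper's algorithm; the objective, constraint, penalty `rho`, and step sizes are illustrative choices:

```python
# Augmented Lagrangian with dual ascent on a toy constrained problem:
#   minimize f(x) = (x - 2)^2   subject to   g(x) = x - 1 <= 0.
# The KKT solution is x* = 1 with multiplier lambda* = 2.

def solve(rho=10.0, dual_steps=50, inner_steps=200, lr=0.01):
    x, lam = 0.0, 0.0
    for _ in range(dual_steps):
        # Primal step: approximately minimize the augmented Lagrangian in x.
        for _ in range(inner_steps):
            slack = max(0.0, x - 1.0 + lam / rho)   # active part of the constraint
            grad = 2.0 * (x - 2.0) + rho * slack
            x -= lr * grad
        # Dual ascent step: move lambda toward feasibility, projected to lambda >= 0.
        lam = max(0.0, lam + rho * (x - 1.0))
    return x, lam
```

The dual update contracts toward the optimal multiplier, so the iterates approach the feasible, optimal primal solution x = 1.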
Related papers
- Dual Optimistic Ascent (PI Control) is the Augmented Lagrangian Method in Disguise [16.383773324475538]
We show that dual optimistic ascent on the Lagrangian is equivalent to gradient descent-ascent on the Augmented Lagrangian. This finding allows us to transfer the robust theoretical guarantees of the ALM to the dual optimistic setting, proving it converges linearly to all local solutions. Our work closes a critical gap between the empirical success of dual optimistic methods and their theoretical foundation.
arXiv Detail & Related papers (2025-09-26T15:41:20Z) - Near-Optimal Solutions of Constrained Learning Problems [85.48853063302764]
In machine learning systems, the need to curtail their behavior has become increasingly apparent.
This is evidenced by recent advancements towards developing models that satisfy robustness constraints.
Our results show that rich parametrizations effectively mitigate non-convexity in finite-dimensional, constrained learning problems.
arXiv Detail & Related papers (2024-03-18T14:55:45Z) - Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
It enables an end-to-end training scheme in which the dual objective serves as the loss function, driving solution estimates toward primal feasibility and emulating a Dual Ascent method.
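The Dual Ascent method that the scheme above emulates can be sketched on a toy one-dimensional problem. This is a generic illustration, not the paper's learned model; the problem and step size are assumptions for the example:

```python
# Dual ascent on: minimize f(x) = x^2  subject to  x >= 1.
# Lagrangian: L(x, lam) = x^2 + lam * (1 - x); inner minimizer is x(lam) = lam / 2.
# Dual function q(lam) = lam - lam^2 / 4, maximized at lam* = 2, giving x* = 1.

def dual_ascent(steps=200, lr=0.1):
    lam = 0.0
    for _ in range(steps):
        x = lam / 2.0                          # primal update: argmin_x L(x, lam)
        lam = max(0.0, lam + lr * (1.0 - x))   # ascend the dual, projected to lam >= 0
    return x, lam
```

Maximizing the dual objective plays the role of the loss here: each dual step pushes the multiplier so that the induced primal estimate approaches feasibility (x = 1).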
arXiv Detail & Related papers (2024-03-06T04:43:22Z) - Double Duality: Variational Primal-Dual Policy Optimization for
Constrained Reinforcement Learning [132.7040981721302]
We study the Constrained Convex Markov Decision Process (MDP), where the goal is to minimize a convex functional of the visitation measure.
Designing algorithms for a constrained convex MDP faces several challenges, including handling the large state space.
arXiv Detail & Related papers (2024-02-16T16:35:18Z) - Algorithm for Constrained Markov Decision Process with Linear
Convergence [55.41644538483948]
An agent aims to maximize the expected accumulated discounted reward subject to multiple constraints on its costs.
A new dual approach is proposed with the integration of two ingredients: an entropy-regularized policy optimizer and Vaidya's dual optimizer.
The proposed approach is shown to converge (with linear rate) to the global optimum.
arXiv Detail & Related papers (2022-06-03T16:26:38Z) - Efficient Performance Bounds for Primal-Dual Reinforcement Learning from
Demonstrations [1.0609815608017066]
We consider large-scale Markov decision processes with an unknown cost function and address the problem of learning a policy from a finite set of expert demonstrations.
Existing inverse reinforcement learning methods come with strong theoretical guarantees, but are computationally expensive.
We introduce a novel bilinear saddle-point framework using Lagrangian duality to bridge the gap between theory and practice.
arXiv Detail & Related papers (2021-12-28T05:47:24Z) - A Stochastic Composite Augmented Lagrangian Method For Reinforcement
Learning [9.204659134755795]
We consider the linear programming (LP) formulation for deep reinforcement learning.
The augmented Lagrangian method suffers the double-sampling obstacle in solving the LP.
A deep parameterized augmented Lagrangian method is proposed.
arXiv Detail & Related papers (2021-05-20T13:08:06Z) - Lagrangian Duality for Constrained Deep Learning [51.2216183850835]
This paper explores the potential of Lagrangian duality for learning applications that feature complex constraints.
In energy domains, the combination of Lagrangian duality and deep learning can be used to obtain state-of-the-art results.
In transprecision computing, Lagrangian duality can complement deep learning to impose monotonicity constraints on the predictor.
arXiv Detail & Related papers (2020-01-26T03:38:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.