Teaching the Old Dog New Tricks: Supervised Learning with Constraints
- URL: http://arxiv.org/abs/2002.10766v2
- Date: Fri, 26 Feb 2021 16:39:24 GMT
- Title: Teaching the Old Dog New Tricks: Supervised Learning with Constraints
- Authors: Fabrizio Detassis, Michele Lombardi, Michela Milano
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adding constraint support in Machine Learning has the potential to address
outstanding issues in data-driven AI systems, such as safety and fairness.
Existing approaches typically apply constrained optimization techniques to ML
training, enforce constraint satisfaction by adjusting the model design, or use
constraints to correct the output. Here, we investigate a different,
complementary, strategy based on "teaching" constraint satisfaction to a
supervised ML method via the direct use of a state-of-the-art constraint
solver: this enables taking advantage of decades of research on constrained
optimization with limited effort. In practice, we use a decomposition scheme
alternating master steps (in charge of enforcing the constraints) and learner
steps (where any supervised ML model and training algorithm can be employed).
The process leads to approximate constraint satisfaction in general, and
convergence properties are difficult to establish; despite this fact, we found
empirically that even a naïve setup of our approach performs well on ML tasks
with fairness constraints, and on classical datasets with synthetic
constraints.
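The alternating decomposition described in the abstract can be sketched in a few lines. The following toy example is not the authors' exact algorithm: the cardinality constraint (at most `cap` positive labels), the threshold model, and the use of the raw feature value as a stand-in for model confidence are all assumptions made for illustration.

```python
# Toy sketch of the master/learner decomposition: a master step projects the
# current labels onto the feasible set, then a learner step refits a model.
# Assumed constraint: at most `cap` training labels may be positive.

def learner_step(xs, targets):
    """Fit a 1-D threshold classifier by brute force (stand-in for any
    supervised ML model and training algorithm)."""
    best_t, best_err = None, float("inf")
    for t in sorted(set(xs)) + [max(xs) + 1.0]:
        preds = [1 if x >= t else 0 for x in xs]
        err = sum(p != z for p, z in zip(preds, targets))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def master_step(xs, targets, cap):
    """Enforce the constraint: keep only the `cap` most confident positives
    (here, ranking by feature value stands in for model confidence)."""
    positives = sorted((i for i, z in enumerate(targets) if z == 1),
                       key=lambda i: xs[i], reverse=True)
    keep = set(positives[:cap])
    return [1 if i in keep else 0 for i in range(len(xs))]

def train(xs, ys, cap, iters=5):
    targets = list(ys)
    for _ in range(iters):
        targets = master_step(xs, targets, cap)  # master: enforce constraint
        t = learner_step(xs, targets)            # learner: refit the model
    return t

xs = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2]
ys = [0, 1, 1, 1, 1, 0]                # four positives, but cap = 2
threshold = train(xs, ys, cap=2)
preds = [1 if x >= threshold else 0 for x in xs]
```

Any model could replace the threshold learner in `learner_step`; the master step only needs its predictions, which is what makes the strategy complementary to constrained training.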
Related papers
- Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
This enables an end-to-end training scheme in which the dual objective serves as the loss function, driving solution estimates toward primal feasibility and emulating a Dual Ascent method.
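The Dual Ascent mechanism that scheme emulates can be shown on a minimal analytic example (the problem, step size, and iteration count here are assumptions for the sketch; in the paper an ML model predicts the dual estimates instead of iterating the update):

```python
# Minimal Dual Ascent on a toy problem: minimize x^2 subject to x = 1.
# Lagrangian: L(x, lam) = x^2 + lam * (x - 1).

def dual_ascent(steps=100, lr=0.5):
    lam = 0.0
    for _ in range(steps):
        x = -lam / 2.0          # primal step: argmin_x L(x, lam)
        lam += lr * (x - 1.0)   # dual step: ascend along the violation
    return x, lam

x, lam = dual_ascent()          # converges to x = 1, lam = -2
```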
arXiv Detail & Related papers (2024-03-06T04:43:22Z) - Deep Neural Network for Constraint Acquisition through Tailored Loss
Function [0.0]
The significance of learning constraints from data is underscored by its potential applications in real-world problem-solving.
This work introduces a novel approach grounded in Deep Neural Networks (DNNs) and based on Symbolic Regression.
arXiv Detail & Related papers (2024-03-04T13:47:33Z) - LeTO: Learning Constrained Visuomotor Policy with Differentiable Trajectory Optimization [1.1602089225841634]
This paper introduces LeTO, a method for learning constrained visuomotor policy via differentiable trajectory optimization.
In simulation, LeTO achieves a success rate comparable to state-of-the-art imitation learning methods.
In real-world experiments, we deployed LeTO to handle constraint-critical tasks.
arXiv Detail & Related papers (2024-01-30T23:18:35Z) - Resilient Constrained Reinforcement Learning [87.4374430686956]
We study a class of constrained reinforcement learning (RL) problems in which the constraint specifications are not known before training.
Identifying appropriate constraint specifications is challenging because of the ill-defined trade-off between the reward training objective and constraint satisfaction.
We propose a new constrained RL approach that searches for policy and constraint specifications together.
arXiv Detail & Related papers (2023-12-28T18:28:23Z) - Neural Fields with Hard Constraints of Arbitrary Differential Order [61.49418682745144]
We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
arXiv Detail & Related papers (2023-06-15T08:33:52Z) - Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z) - Stochastic Methods for AUC Optimization subject to AUC-based Fairness
Constraints [51.12047280149546]
A direct approach to obtaining a fair predictive model is to train the model by optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
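The quantities being constrained can be made concrete with a small sketch: pairwise AUC, and an AUC gap between two groups as one possible AUC-based fairness measure (the data, the group split, and the assumption that each group contains both classes are illustrative, not from the paper):

```python
# Pairwise AUC and a simple AUC-based fairness gap between two groups.

def auc(scores, labels):
    """Probability that a random positive outscores a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def fairness_gap(scores, labels, groups):
    """|AUC(group 0) - AUC(group 1)|: a quantity a fairness constraint
    could bound during training (each group must contain both classes)."""
    a = [(s, y) for s, y, g in zip(scores, labels, groups) if g == 0]
    b = [(s, y) for s, y, g in zip(scores, labels, groups) if g == 1]
    auc_a = auc([s for s, _ in a], [y for _, y in a])
    auc_b = auc([s for s, _ in b], [y for _, y in b])
    return abs(auc_a - auc_b)
```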
arXiv Detail & Related papers (2022-12-23T22:29:08Z) - Policy Optimization with Linear Temporal Logic Constraints [37.27882290236194]
We study the problem of policy optimization with linear temporal logic constraints.
We develop a model-based approach that enjoys a sample complexity analysis for guaranteeing both task satisfaction and cost optimality.
arXiv Detail & Related papers (2022-06-20T02:58:02Z) - Constrained Model-Free Reinforcement Learning for Process Optimization [0.0]
Reinforcement learning (RL) is a control approach that can handle nonlinear optimal control problems.
Despite its promise, RL has yet to see widespread translation to industrial practice.
We propose an 'oracle'-assisted constrained Q-learning algorithm that guarantees the satisfaction of joint chance constraints with a high probability.
arXiv Detail & Related papers (2020-11-16T13:16:22Z) - Responsive Safety in Reinforcement Learning by PID Lagrangian Methods [74.49173841304474]
Lagrangian methods exhibit oscillations and overshoot which, when applied to safe reinforcement learning, lead to constraint-violating behavior.
We propose a novel Lagrange multiplier update method that utilizes derivatives of the constraint function.
We apply our PID Lagrangian methods in deep RL, setting a new state of the art in Safety Gym, a safe RL benchmark.
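A PID-style multiplier update can be sketched as follows (the gain values and clamping details are assumptions for illustration, not the paper's settings; the classic multiplier update corresponds to using only the integral term):

```python
# Sketch of a PID Lagrange multiplier update for a cost constraint
# cost <= limit. Proportional and derivative terms react to the current
# and rising violation; the integral term accumulates it over time.

class PIDLagrangian:
    def __init__(self, kp=0.1, ki=0.01, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev = 0.0

    def update(self, cost, limit):
        err = cost - limit                          # constraint violation
        self.integral = max(0.0, self.integral + err)
        deriv = max(0.0, err - self.prev)           # penalize rising violation
        self.prev = err
        # the Lagrange multiplier must stay non-negative
        return max(0.0, self.kp * err
                        + self.ki * self.integral
                        + self.kd * deriv)

pid = PIDLagrangian()
lam1 = pid.update(cost=2.0, limit=1.0)  # violating: multiplier rises
lam2 = pid.update(cost=0.5, limit=1.0)  # satisfied: multiplier relaxes
```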
arXiv Detail & Related papers (2020-07-08T08:43:14Z) - Learning Constraints from Locally-Optimal Demonstrations under Cost
Function Uncertainty [6.950510860295866]
We present an algorithm for learning parametric constraints from locally-optimal demonstrations, where the cost function being optimized is uncertain to the learner.
Our method uses the Karush-Kuhn-Tucker (KKT) optimality conditions of the demonstrations within a mixed integer linear program (MILP) to learn constraints which are consistent with the local optimality of the demonstrations.
We evaluate our method on high-dimensional constraints and systems by learning constraints for 7-DOF arm and quadrotor examples, show that it outperforms competing constraint-learning approaches, and demonstrate that it can be used to plan new constraint-satisfying trajectories in the environment.
arXiv Detail & Related papers (2020-01-25T15:57:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.