Resilient Constrained Learning
- URL: http://arxiv.org/abs/2306.02426v4
- Date: Thu, 11 Jan 2024 15:30:24 GMT
- Title: Resilient Constrained Learning
- Authors: Ignacio Hounie, Alejandro Ribeiro, Luiz F. O. Chamon
- Abstract summary: This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
- Score: 94.27081585149836
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: When deploying machine learning solutions, they must satisfy multiple
requirements beyond accuracy, such as fairness, robustness, or safety. These
requirements are imposed during training either implicitly, using penalties, or
explicitly, using constrained optimization methods based on Lagrangian duality.
Either way, specifying requirements is hindered by the presence of compromises
and limited prior knowledge about the data. Furthermore, their impact on
performance can often only be evaluated by actually solving the learning
problem. This paper presents a constrained learning approach that adapts the
requirements while simultaneously solving the learning task. To do so, it
relaxes the learning constraints in a way that contemplates how much they
affect the task at hand by balancing the performance gains obtained from the
relaxation against a user-defined cost of that relaxation. We call this
approach resilient constrained learning after the term used to describe
ecological systems that adapt to disruptions by modifying their operation. We
show conditions under which this balance can be achieved and introduce a
practical algorithm to compute it, for which we derive approximation and
generalization guarantees. We showcase the advantages of this resilient
learning method in image classification tasks involving multiple potential
invariances and in heterogeneous federated learning.
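The balance the abstract describes can be illustrated with a minimal primal-dual sketch. This is a toy example under stated assumptions, not the authors' actual algorithm: we minimize f0(t) = (t - 2)^2 subject to a relaxed constraint t <= u, where the relaxation u carries a hypothetical quadratic cost h(u) = (alpha/2) u^2, and the function name and step sizes are illustrative.

```python
def resilient_primal_dual(alpha=2.0, lr=0.05, steps=5000):
    """Toy resilient primal-dual iteration for
    min (t - 2)^2 + (alpha/2) u^2  s.t.  t <= u."""
    theta, lam = 0.0, 0.0  # primal variable and dual (price) variable
    u = 0.0
    for _ in range(steps):
        # Relaxation balances its marginal cost against the dual price:
        # u = argmin_u h(u) - lam * u, which for h(u) = (alpha/2) u^2 gives:
        u = lam / alpha
        # Primal descent on the Lagrangian f0(theta) + lam * (theta - u)
        grad_theta = 2.0 * (theta - 2.0) + lam
        theta -= lr * grad_theta
        # Dual ascent on the relaxed constraint violation theta - u
        lam = max(0.0, lam + lr * (theta - u))
    return theta, u, lam
```

For alpha = 2 the balance point has a closed form: theta = u = 1 and lam = 2, where the marginal relaxation cost h'(u) = alpha * u exactly matches the dual price, which is the trade-off between performance gains and relaxation cost described above.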
Related papers
- Near-Optimal Solutions of Constrained Learning Problems [85.48853063302764]
The need to curtail the behavior of machine learning systems has become increasingly apparent.
This is evidenced by recent advancements towards developing models that satisfy robustness and safety requirements.
Our results show that rich parametrizations effectively mitigate the effects of non-convexity in constrained learning problems.
arXiv Detail & Related papers (2024-03-18T14:55:45Z) - Resilient Constrained Reinforcement Learning [87.4374430686956]
We study a class of constrained reinforcement learning (RL) problems in which the constraint specifications are not known before training.
Identifying appropriate constraint specifications is challenging because the trade-off between the reward objective and constraint satisfaction is not known in advance.
We propose a new constrained RL approach that searches for policy and constraint specifications together.
arXiv Detail & Related papers (2023-12-28T18:28:23Z) - Primal Dual Continual Learning: Balancing Stability and Plasticity through Adaptive Memory Allocation [86.8475564814154]
We show that it is both possible and beneficial to tackle the constrained optimization problem directly.
We focus on memory-based methods, where a small subset of samples from previous tasks can be stored in a replay buffer.
We show that dual variables indicate the sensitivity of the optimal value of the continual learning problem with respect to constraint perturbations.
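The sensitivity interpretation of dual variables can be checked on a toy problem (a hypothetical example, not taken from the paper): for min (x - 2)^2 subject to x <= c, the KKT multiplier lam should equal minus the derivative of the optimal value with respect to the constraint level c.

```python
def solve(c):
    """Closed-form solution of min (x - 2)^2 s.t. x <= c."""
    x = min(c, 2.0)                  # constraint is active when c < 2
    value = (x - 2.0) ** 2           # optimal objective value
    lam = max(0.0, 2.0 * (2.0 - x))  # KKT multiplier from stationarity
    return value, lam

# Finite-difference check: perturbing the constraint by eps changes the
# optimal value by approximately -lam * eps (the shadow-price property).
v0, lam = solve(1.0)
eps = 1e-4
v1, _ = solve(1.0 + eps)
sensitivity = (v1 - v0) / eps  # should be close to -lam
```

Here relaxing the constraint (larger c) lowers the optimal value at a rate given by the multiplier, which is exactly the perturbation sensitivity the summary refers to.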
arXiv Detail & Related papers (2023-09-29T21:23:27Z) - Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method, Bayesian Adaptive Moment Regularization (BAdam), that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z) - Robust Deep Reinforcement Learning Scheduling via Weight Anchoring [7.570246812206769]
We use weight anchoring to cultivate and fix desired behavior in neural networks.
Weight anchoring may be used to find a solution to a learning problem that is nearby the solution of another learning problem.
Results show that this method provides performance comparable to the state-of-the-art approach of augmenting a simulation environment.
arXiv Detail & Related papers (2023-04-20T09:30:23Z) - Probably Approximately Correct Constrained Learning [135.48447120228658]
We develop a generalization theory based on the probably approximately correct (PAC) learning framework.
We show that imposing requirements does not make a learning problem harder, in the sense that any PAC learnable class is also PAC constrained learnable.
We analyze the properties of this solution and use it to illustrate how constrained learning can address problems in fair and robust classification.
arXiv Detail & Related papers (2020-06-09T19:59:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.