Probably Approximately Correct Constrained Learning
- URL: http://arxiv.org/abs/2006.05487v2
- Date: Wed, 17 Feb 2021 22:42:15 GMT
- Title: Probably Approximately Correct Constrained Learning
- Authors: Luiz F. O. Chamon and Alejandro Ribeiro
- Abstract summary: We develop a generalization theory based on the probably approximately correct (PAC) learning framework.
We show that imposing requirements does not make a learning problem harder, in the sense that any PAC learnable class is also PAC constrained learnable.
We analyze the properties of this solution and use it to illustrate how constrained learning can address problems in fair and robust classification.
- Score: 135.48447120228658
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As learning solutions reach critical applications in social, industrial, and
medical domains, the need to curtail their behavior has become paramount. There
is now ample evidence that without explicit tailoring, learning can lead to
biased, unsafe, and prejudiced solutions. To tackle these problems, we develop
a generalization theory of constrained learning based on the probably
approximately correct (PAC) learning framework. In particular, we show that
imposing requirements does not make a learning problem harder in the sense that
any PAC learnable class is also PAC constrained learnable using a constrained
counterpart of the empirical risk minimization (ERM) rule. For typical
parametrized models, however, this learner involves solving a constrained
non-convex optimization program for which even obtaining a feasible solution is
challenging. To overcome this issue, we prove that under mild conditions the
empirical dual problem of constrained learning is also a PAC constrained
learner that now leads to a practical constrained learning algorithm based
solely on solving unconstrained problems. We analyze the generalization
properties of this solution and use it to illustrate how constrained learning
can address problems in fair and robust classification.
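The dual approach described in the abstract (replacing a constrained program with a sequence of unconstrained Lagrangian minimizations plus dual-variable updates) can be illustrated with a minimal sketch. This is not the authors' algorithm or code: it is a generic dual-ascent loop on a toy quadratic problem (minimize f(theta) subject to g(theta) <= 0), with the objective, constraint, and step sizes all chosen here for illustration.

```python
import numpy as np

# Toy instance: minimize ||theta - target||^2 subject to theta[0] <= 0.
# The constrained solution is theta = (0, 1) with dual variable lam = 2.

def f(theta, target):
    return np.sum((theta - target) ** 2)

def g(theta):
    return theta[0]  # toy "requirement": first coordinate must be <= 0

def grad_lagrangian(theta, lam, target):
    # Gradient in theta of L(theta, lam) = f(theta) + lam * g(theta)
    grad = 2.0 * (theta - target)
    grad[0] += lam
    return grad

def dual_ascent(target, steps=2000, lr=0.05, dual_lr=0.05):
    theta = np.zeros_like(target)
    lam = 0.0
    for _ in range(steps):
        # Primal: unconstrained gradient step on the Lagrangian
        theta = theta - lr * grad_lagrangian(theta, lam, target)
        # Dual: ascend on the constraint value, keeping lam >= 0
        lam = max(0.0, lam + dual_lr * g(theta))
    return theta, lam

theta, lam = dual_ascent(np.array([1.0, 1.0]))
```

The point of the sketch is structural: every primal step is an unconstrained problem, which is what makes the dual route practical for non-convex parametrized models where even finding a feasible point directly is hard.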
Related papers
- Near-Optimal Solutions of Constrained Learning Problems [85.48853063302764]
In machine learning systems, the need to curtail model behavior has become increasingly apparent.
This is evidenced by recent advances toward developing models that satisfy robustness and safety requirements.
Our results show that rich parametrizations effectively mitigate non-convexity in constrained learning problems.
arXiv Detail & Related papers (2024-03-18T14:55:45Z) - Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z) - On data-driven chance constraint learning for mixed-integer optimization
problems [0.0]
We develop a Chance Constraint Learning (CCL) methodology with a focus on mixed-integer linear optimization problems.
CCL makes use of linearizable machine learning models to estimate conditional quantiles of the learned variables.
Open-access software has been developed for use by practitioners.
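The quantile-based idea behind CCL can be sketched in a few lines. This is an illustrative simplification, not the authors' software: where the paper fits linearizable machine learning models to estimate conditional quantiles, the sketch below uses a plain empirical quantile to turn a chance constraint P(y <= b) >= 1 - alpha into a deterministic bound b; the distribution and alpha are arbitrary choices for the example.

```python
import numpy as np

# Samples of the uncertain quantity appearing in the chance constraint.
rng = np.random.default_rng(0)
y_samples = rng.normal(loc=0.0, scale=1.0, size=10_000)

alpha = 0.05
# Empirical (1 - alpha)-quantile stands in for the learned
# conditional-quantile model of the full CCL methodology.
b = np.quantile(y_samples, 1 - alpha)

# The deterministic surrogate constraint y <= b then holds with
# (empirical) probability close to 1 - alpha on the data.
coverage = np.mean(y_samples <= b)
```

In the mixed-integer setting of the paper, the learned quantile model must additionally be linearizable so that the resulting bound can be embedded in the optimization problem as linear constraints.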
arXiv Detail & Related papers (2022-07-08T11:54:39Z) - Exploring Viable Algorithmic Options for Learning from Demonstration
(LfD): A Parameterized Complexity Approach [0.0]
In this paper, we show how such a systematic exploration of algorithmic options can be done using parameterized complexity analysis.
We show that none of our problems can be solved efficiently either in general or relative to a number of (often simultaneous) restrictions on environments, demonstrations, and policies.
arXiv Detail & Related papers (2022-05-10T15:54:06Z) - Adversarial Robustness with Semi-Infinite Constrained Learning [177.42714838799924]
Deep learning's vulnerability to input perturbations has raised serious questions about its use in safety-critical domains.
We propose a hybrid Langevin Monte Carlo training approach to mitigate this issue.
We show that our approach can mitigate the trade-off between state-of-the-art performance and robustness.
arXiv Detail & Related papers (2021-10-29T13:30:42Z) - Constrained Learning with Non-Convex Losses [119.8736858597118]
Though learning has become a core technology of modern information processing, there is now ample evidence that it can lead to biased, unsafe, and prejudiced solutions.
arXiv Detail & Related papers (2021-03-08T23:10:33Z) - The empirical duality gap of constrained statistical learning [115.23598260228587]
We study constrained statistical learning problems, the unconstrained version of which is at the core of virtually all modern information processing.
We propose to tackle the constrained statistical problem overcoming its infinite dimensionality, unknown distributions, and constraints by leveraging finite dimensional parameterizations, sample averages, and duality theory.
We demonstrate the effectiveness and usefulness of this constrained formulation in a fair learning application.
arXiv Detail & Related papers (2020-02-12T19:12:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.