Constrained Learning with Non-Convex Losses
- URL: http://arxiv.org/abs/2103.05134v1
- Date: Mon, 8 Mar 2021 23:10:33 GMT
- Title: Constrained Learning with Non-Convex Losses
- Authors: Luiz F. O. Chamon and Santiago Paternain and Miguel Calvo-Fullana and
Alejandro Ribeiro
- Abstract summary: Though learning has become a core technology of modern information processing, there is now ample evidence that it can lead to biased, unsafe, and prejudiced solutions.
- Score: 119.8736858597118
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Though learning has become a core technology of modern information
processing, there is now ample evidence that it can lead to biased, unsafe, and
prejudiced solutions. The need to impose requirements on learning is therefore
paramount, especially as it reaches critical applications in social,
industrial, and medical domains. However, the non-convexity of most modern
learning problems is only exacerbated by the introduction of constraints.
Whereas good unconstrained solutions can often be learned using empirical risk
minimization (ERM), even obtaining a model that satisfies statistical
constraints can be challenging, all the more so a good one. In this paper, we
overcome this issue by learning in the empirical dual domain, where constrained
statistical learning problems become unconstrained, finite dimensional, and
deterministic. We analyze the generalization properties of this approach by
bounding the empirical duality gap, i.e., the difference between our
approximate, tractable solution and the solution of the original
(non-convex) statistical problem, and provide a practical constrained learning
algorithm. These results establish a constrained counterpart of classical
learning theory and enable the explicit use of constraints in learning. We
illustrate this algorithm and theory in rate-constrained learning applications.
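To make the dual-domain idea concrete: the empirical dual replaces the constrained statistical problem by a maximization over dual variables lambda >= 0 of the minimum, over model parameters, of the empirical Lagrangian, i.e., the sample-average objective plus lambda-weighted sample-average constraint slacks. The sketch below is not the paper's algorithm, only a minimal illustration of the alternating primal-descent / dual-ascent pattern that dual-domain methods typically use; the toy task, the fairness-style subgroup constraint, the constraint level `c`, and the step sizes are all assumptions made for illustration.

```python
# Minimal sketch (not the paper's algorithm): alternating primal descent /
# dual ascent on an empirical Lagrangian. Toy data, constraint level `c`,
# and step sizes are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: logistic regression with a statistical constraint that the
# average loss on a (hypothetical) protected subgroup stays below c.
n, d = 1000, 5
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) + 0.5 * rng.normal(size=n) > 0).astype(float)
group = rng.random(n) < 0.3   # indicator of the constrained subgroup
c = 0.55                      # assumed constraint level

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def loss_and_grad(w, Xb, yb):
    """Mean logistic loss and its gradient over a batch."""
    p = sigmoid(Xb @ w)
    loss = -np.mean(yb * np.log(p + 1e-12) + (1 - yb) * np.log(1 - p + 1e-12))
    return loss, Xb.T @ (p - yb) / len(yb)

w, lam = np.zeros(d), 0.0     # primal (model) and dual variables
eta_w, eta_lam = 0.5, 0.1     # assumed step sizes

for t in range(2000):
    _, obj_grad = loss_and_grad(w, X, y)
    con_loss, con_grad = loss_and_grad(w, X[group], y[group])
    # Empirical Lagrangian L(w, lam) = objective + lam * (constraint - c):
    w -= eta_w * (obj_grad + lam * con_grad)        # primal descent step
    lam = max(0.0, lam + eta_lam * (con_loss - c))  # projected dual ascent

obj, _ = loss_and_grad(w, X, y)
con, _ = loss_and_grad(w, X[group], y[group])
print(f"objective {obj:.3f}, constraint {con:.3f} (target <= {c}), lambda {lam:.3f}")
```

The paper's contribution is precisely to bound how far such an empirical dual solution can be from the solution of the original (non-convex) statistical problem, i.e., the empirical duality gap.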
Related papers
- Near-Optimal Solutions of Constrained Learning Problems [85.48853063302764]
With the widespread deployment of machine learning systems, the need to curtail their behavior has become increasingly apparent.
This is evidenced by recent advancements towards developing models that satisfy robustness, safety, and fairness requirements.
Our results show that rich parametrizations effectively mitigate the feasibility issues of non-convex constrained learning problems.
arXiv Detail & Related papers (2024-03-18T14:55:45Z)
- Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z)
- On data-driven chance constraint learning for mixed-integer optimization problems [0.0]
We develop a Chance Constraint Learning (CCL) methodology with a focus on mixed-integer linear optimization problems.
CCL makes use of linearizable machine learning models to estimate conditional quantiles of the learned variables (a minimal quantile-regression sketch is given after this list).
Open-access software has been developed for use by practitioners.
arXiv Detail & Related papers (2022-07-08T11:54:39Z)
- Efficient Performance Bounds for Primal-Dual Reinforcement Learning from Demonstrations [1.0609815608017066]
We consider large-scale Markov decision processes with an unknown cost function and address the problem of learning a policy from a finite set of expert demonstrations.
Existing inverse reinforcement learning methods come with strong theoretical guarantees, but are computationally expensive.
We introduce a novel bilinear saddle-point framework using Lagrangian duality to bridge the gap between theory and practice.
arXiv Detail & Related papers (2021-12-28T05:47:24Z)
- Adversarial Robustness with Semi-Infinite Constrained Learning [177.42714838799924]
The vulnerability of deep learning to input perturbations has raised serious questions about its use in safety-critical domains.
We propose a hybrid Langevin Monte Carlo training approach to mitigate this issue.
We show that our approach can mitigate the trade-off between state-of-the-art performance and robustness.
arXiv Detail & Related papers (2021-10-29T13:30:42Z)
- Probably Approximately Correct Constrained Learning [135.48447120228658]
We develop a generalization theory based on the probably approximately correct (PAC) learning framework.
We show that imposing constraints does not make a learning problem harder, in the sense that any PAC learnable class is also learnable under constraints.
We analyze the properties of this solution and use it to illustrate how constrained learning can address problems in fair and robust classification.
arXiv Detail & Related papers (2020-06-09T19:59:29Z)
- The empirical duality gap of constrained statistical learning [115.23598260228587]
We study constrained statistical learning problems, the unconstrained versions of which are at the core of virtually all modern information processing.
We propose to tackle the constrained statistical problem, overcoming its infinite dimensionality, unknown distributions, and constraints by leveraging finite-dimensional parameterizations, sample averages, and duality theory.
We demonstrate the effectiveness and usefulness of this constrained formulation in a fair learning application.
arXiv Detail & Related papers (2020-02-12T19:12:29Z)
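Picking up the forward reference from the chance-constraint entry above: the following is a minimal sketch of the kind of building block CCL-style methods rely on. A linear model is trained with the pinball (quantile) loss to estimate a conditional quantile; because the fitted predictor is linear in its inputs, a chance constraint of the form "the tau-quantile of y given x must not exceed a bound" becomes a linear constraint that a mixed-integer solver can handle. The data, quantile level, and training loop below are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (assumptions throughout): a linear model fit with the
# pinball loss estimates a conditional quantile. Since the predictor is
# linear in x, "quantile_tau(y | x) <= b" becomes a linear constraint
# usable inside a mixed-integer optimization model.
import numpy as np

rng = np.random.default_rng(1)

n, d, tau = 2000, 3, 0.9      # tau: assumed quantile level of the constraint
X = rng.normal(size=(n, d))
# Heteroscedastic toy response: noise scale grows with the first feature.
y = X @ np.array([1.0, -2.0, 0.5]) + (1.0 + 0.5 * np.abs(X[:, 0])) * rng.normal(size=n)

w, b = np.zeros(d), 0.0
lr = 0.05
for t in range(3000):
    r = y - (X @ w + b)                  # residuals
    # Subgradient of the pinball loss wrt r: tau if r > 0, else tau - 1.
    g = np.where(r > 0, tau, tau - 1.0)
    # d(loss)/dw = -X.T @ g / n, so a descent step adds +X.T @ g / n.
    w += lr * (X.T @ g) / n
    b += lr * g.mean()

cover = np.mean(y <= X @ w + b)          # empirical coverage, should be near tau
print(f"empirical coverage {cover:.3f} vs target {tau}")
```

Empirical coverage near tau indicates the fitted hyperplane approximates the conditional quantile, which is what makes it usable as a data-driven surrogate for the chance constraint.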
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.