Calibrated Data-Dependent Constraints with Exact Satisfaction Guarantees
- URL: http://arxiv.org/abs/2301.06195v1
- Date: Sun, 15 Jan 2023 21:41:40 GMT
- Title: Calibrated Data-Dependent Constraints with Exact Satisfaction Guarantees
- Authors: Songkai Xue, Yuekai Sun, Mikhail Yurochkin
- Abstract summary: We consider the task of training machine learning models with data-dependent constraints.
We reformulate data-dependent constraints so that they are calibrated: enforcing the reformulated constraints guarantees that their expected value counterparts are satisfied with a user-prescribed probability.
- Score: 46.94549066382216
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the task of training machine learning models with data-dependent
constraints. Such constraints often arise as empirical versions of expected
value constraints that enforce fairness or stability goals. We reformulate
data-dependent constraints so that they are calibrated: enforcing the
reformulated constraints guarantees that their expected value counterparts are
satisfied with a user-prescribed probability. The resulting optimization
problem is amenable to standard stochastic optimization algorithms, and we
demonstrate the efficacy of our method on a fairness-sensitive classification
task where we wish to guarantee the classifier's fairness (at test time).
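As a rough sketch of the setting (not the authors' implementation), the snippet below enforces a tightened, i.e. calibrated, empirical demographic-parity constraint with a standard primal-dual stochastic update; the surrogate constraint, the tolerance `eps`, and the calibration margin `margin` are illustrative assumptions rather than the paper's exact reformulation.

```python
# Illustrative sketch only: a primal-dual stochastic update enforcing a
# *tightened* (calibrated) empirical fairness constraint. The demographic-
# parity surrogate and the tightening margin are assumptions for
# illustration, not the paper's exact reformulation.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: features X, labels y in {0, 1}, sensitive attribute a in {0, 1}.
n, d = 2000, 5
X = rng.normal(size=(n, d))
a = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.5 * a + 0.1 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eps = 0.05      # user-chosen tolerance on the expected disparity
margin = 0.02   # calibration margin: tighten the empirical constraint so the
                # expected-value constraint holds with the prescribed probability
w = np.zeros(d)
lam = 0.0       # dual variable for the single fairness constraint
lr_w, lr_lam = 0.1, 0.05

for step in range(2000):
    idx = rng.choice(n, size=128, replace=False)
    Xb, yb, ab = X[idx], y[idx], a[idx]
    if (ab == 1).sum() == 0 or (ab == 0).sum() == 0:
        continue
    p = sigmoid(Xb @ w)

    # Smooth demographic-parity surrogate: gap in mean scores across groups.
    gap = p[ab == 1].mean() - p[ab == 0].mean()
    # Calibrated (tightened) empirical constraint: |gap| <= eps - margin.
    violation = abs(gap) - (eps - margin)

    # Gradient of the logistic loss plus lambda times the constraint surrogate.
    grad_loss = Xb.T @ (p - yb) / len(yb)
    dp = p * (1 - p)
    grad_gap = np.sign(gap) * (
        Xb[ab == 1].T @ dp[ab == 1] / (ab == 1).sum()
        - Xb[ab == 0].T @ dp[ab == 0] / (ab == 0).sum()
    )
    w -= lr_w * (grad_loss + lam * grad_gap)

    # Dual ascent: lambda grows while the tightened constraint is violated.
    lam = max(0.0, lam + lr_lam * violation)
```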
Related papers
- Automatically Adaptive Conformal Risk Control [49.95190019041905]
We propose a methodology for achieving approximate conditional control of statistical risks by adapting to the difficulty of test samples.
Our framework goes beyond traditional conditional risk control based on user-provided conditioning events to the algorithmic, data-driven determination of appropriate function classes for conditioning.
arXiv Detail & Related papers (2024-06-25T08:29:32Z)
- Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on their equivalence in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
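For orientation only, a generic baseline-corrected inverse-propensity estimate of off-policy value is sketched below: any fixed baseline preserves unbiasedness because the importance weights have expectation one, and the paper's closed-form variance-optimal baseline (not reproduced here) would slot into the `baseline` argument.

```python
# Generic baseline-corrected IPS sketch; the variance-optimal baseline derived
# in the paper is not reproduced here, a constant placeholder is used instead.
import numpy as np

def baseline_corrected_ips(rewards, logging_probs, target_probs, baseline=0.0):
    """Off-policy value estimate with an additive control-variate baseline.

    Any fixed `baseline` keeps the estimate unbiased because the importance
    weights have expectation 1; only the variance depends on the choice.
    """
    w = target_probs / logging_probs          # importance weights
    return float(np.mean(w * (rewards - baseline)) + baseline)
```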
arXiv Detail & Related papers (2024-05-09T12:52:22Z)
- Achievable Fairness on Your Data With Utility Guarantees [16.78730663293352]
In machine learning fairness, training models that minimize disparity across different sensitive groups often leads to diminished accuracy.
We present a computationally efficient approach to approximate the fairness-accuracy trade-off curve tailored to individual datasets.
We introduce a novel methodology for quantifying uncertainty in our estimates, thereby providing practitioners with a robust framework for auditing model fairness.
arXiv Detail & Related papers (2024-02-27T00:59:32Z)
- Resilient Constrained Reinforcement Learning [87.4374430686956]
We study a class of constrained reinforcement learning (RL) problems in which suitable constraint specifications are not identified in advance.
Identifying appropriate constraint specifications is challenging because the trade-off between the reward training objective and constraint satisfaction is not defined beforehand.
We propose a new constrained RL approach that searches for policy and constraint specifications together.
arXiv Detail & Related papers (2023-12-28T18:28:23Z)
- Fair Active Learning in Low-Data Regimes [22.349886628823125]
In machine learning applications, ensuring fairness is essential to avoid perpetuating social inequities.
In this work, we address the challenges of reducing bias and improving accuracy in data-scarce environments.
We introduce an innovative active learning framework that combines an exploration procedure inspired by posterior sampling with a fair classification subroutine.
We demonstrate that this framework performs effectively in very data-scarce regimes, maximizing accuracy while satisfying fairness constraints with high probability.
arXiv Detail & Related papers (2023-12-13T23:14:55Z)
- Learning From Scenarios for Stochastic Repairable Scheduling [3.9948520633731026]
We show how decision-focused learning techniques based on smoothing can be adapted to a scheduling problem.
We include an experimental evaluation to investigate in which situations decision-focused learning outperforms the state of the art for this problem: scenario-based optimization.
arXiv Detail & Related papers (2023-12-06T13:32:17Z)
- Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z)
- Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
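A minimal split-conformal sketch of how an auxiliary error signal could enter the nonconformity score follows; using it as a normalizing scale, as below, is one simple illustrative choice and not necessarily the paper's exact construction.

```python
# Split-conformal sketch with an auxiliary error signal used to normalize the
# nonconformity score; the normalization choice is illustrative, not the
# paper's exact construction.
import numpy as np

def conformal_intervals(pred_cal, y_cal, aux_cal, pred_test, aux_test, alpha=0.1):
    """Return (lower, upper) interval bounds at miscoverage level alpha."""
    # Nonconformity scores on the calibration split, scaled by the auxiliary
    # (e.g. self-supervised) error estimate.
    scores = np.abs(y_cal - pred_cal) / (aux_cal + 1e-8)
    # Finite-sample corrected quantile of the calibration scores.
    n = len(scores)
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    half_width = q * (aux_test + 1e-8)
    return pred_test - half_width, pred_test + half_width
```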
arXiv Detail & Related papers (2023-02-23T18:57:14Z)
- SLIDE: a surrogate fairness constraint to ensure fairness consistency [1.3649494534428745]
We propose a new surrogate fairness constraint called SLIDE, which is feasible and achieves a fast convergence rate.
Numerical experiments confirm that SLIDE works well for various benchmark datasets.
arXiv Detail & Related papers (2022-02-07T13:50:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.