Using Property Elicitation to Understand the Impacts of Fairness
Regularizers
- URL: http://arxiv.org/abs/2309.11343v2
- Date: Mon, 18 Dec 2023 16:12:07 GMT
- Title: Using Property Elicitation to Understand the Impacts of Fairness
Regularizers
- Authors: Jessie Finocchiaro
- Abstract summary: It is not well-understood which regularizers change the minimizer of the loss or, when the minimizer does change, how it changes.
We empirically demonstrate how algorithmic decision-making changes as a function of both data distribution changes and hardness of the constraints.
- Score: 0.32634122554914
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predictive algorithms are often trained by optimizing some loss function, to
which regularization functions are added to impose a penalty for violating
constraints. As expected, the addition of such regularization functions can
change the minimizer of the objective. It is not well-understood which
regularizers change the minimizer of the loss, and, when the minimizer does
change, how it changes. We use property elicitation to take first steps towards
understanding the joint relationship between the loss and regularization
functions and the optimal decision for a given problem instance. In particular,
we give a necessary and sufficient condition on loss and regularizer pairs for
when a property changes with the addition of the regularizer, and examine some
regularizers standard in the fair machine learning literature that satisfy this
condition. We empirically demonstrate how algorithmic decision-making changes
as a function of both data distribution changes and hardness of the
constraints.
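As a toy illustration of the phenomenon the abstract describes (not the paper's construction): for squared loss, the optimal constant prediction for each group is the group's label mean, and adding a demographic-parity-style penalty on the gap between the two group predictions changes the minimizer. The data, penalty form, and weight `lam` below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic groups with different base rates; binary labels.
y_a = rng.binomial(1, 0.8, size=1000).astype(float)  # group A
y_b = rng.binomial(1, 0.3, size=1000).astype(float)  # group B

def objective(p_a, p_b, lam):
    """Per-group squared loss plus a demographic-parity-style
    penalty on the gap between the group-level predictions."""
    loss = np.mean((y_a - p_a) ** 2) + np.mean((y_b - p_b) ** 2)
    return loss + lam * (p_a - p_b) ** 2

def minimizer(lam, grid=np.linspace(0, 1, 101)):
    """Brute-force the optimal constant prediction per group."""
    vals = [(objective(pa, pb, lam), pa, pb)
            for pa in grid for pb in grid]
    _, pa, pb = min(vals)
    return pa, pb

# lam = 0 recovers (approximately) the group means; a larger lam
# pulls the group predictions together, changing the minimizer.
print(minimizer(0.0))
print(minimizer(5.0))
```

With `lam = 0` the regularizer is inactive and the minimizer is the unconstrained one; as `lam` grows, the gap between group predictions shrinks, which is exactly the kind of minimizer shift the paper studies.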
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
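A rough sketch of the error-feedback-with-normalization idea the summary refers to. The compressor (top-k), stepsize handling, and update rule here are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def topk_compress(v, k):
    """Keep the k largest-magnitude coordinates (a common lossy
    compressor used together with error feedback)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def normalized_ef_step(x, grad, err, lr, k):
    """One error-feedback step with a normalized update: compress
    gradient plus carried error, apply the compressed part as a
    unit-norm direction, and carry the residual forward."""
    msg = grad + err
    sent = topk_compress(msg, k)
    err = msg - sent                     # residual carried to next round
    norm = np.linalg.norm(sent)
    if norm > 0:
        x = x - lr * sent / norm         # normalized step
    return x, err
```

On a simple quadratic (gradient equal to `x`), repeatedly applying this step drives the iterate toward the origin even though only `k` coordinates are communicated per round; the error buffer ensures dropped coordinates are eventually applied.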
arXiv Detail & Related papers (2024-10-22T10:19:27Z) - On Regularization and Inference with Label Constraints [62.60903248392479]
We compare two strategies for encoding label constraints in a machine learning pipeline, regularization with constraints and constrained inference.
For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints.
For constrained inference, we show that it reduces the population risk by correcting a model's violation, and hence turns the violation into an advantage.
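The two strategies compared above can be sketched on a toy label constraint, mutual exclusivity of two labels. The penalty form and the inference-time repair rule below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

# Hypothetical constraint: two labels are mutually exclusive,
# so a valid prediction never sets both to 1.

def regularized_loss(scores, targets, lam):
    """Training-time strategy: per-label logistic loss plus a
    penalty for jointly predicting both labels."""
    p = 1.0 / (1.0 + np.exp(-scores))             # sigmoid, shape (n, 2)
    bce = -np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p))
    violation = np.mean(p[:, 0] * p[:, 1])        # high when both fire
    return bce + lam * violation

def constrained_inference(scores):
    """Inference-time strategy: threshold, then repair violations
    by keeping only the higher-scoring label."""
    pred = (scores > 0).astype(int)
    both = (pred[:, 0] == 1) & (pred[:, 1] == 1)
    keep_first = scores[:, 0] >= scores[:, 1]
    pred[both & ~keep_first, 0] = 0
    pred[both & keep_first, 1] = 0
    return pred
```

Regularization discourages violations during training but cannot rule them out; constrained inference guarantees every output satisfies the constraint, which is the "turns the violation into an advantage" effect described above.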
arXiv Detail & Related papers (2023-07-08T03:39:22Z) - Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z) - On User-Level Private Convex Optimization [59.75368670035683]
We introduce a new mechanism for stochastic convex optimization (SCO) with user-level differential privacy guarantees.
Our mechanism does not require any smoothness assumptions on the loss.
Our bounds are the first where the minimum number of users needed for user-level privacy has no dependence on the dimension.
arXiv Detail & Related papers (2023-05-08T17:47:28Z) - Task Adaptive Feature Transformation for One-Shot Learning [21.20683465652298]
We introduce a simple non-linear embedding adaptation layer, which is fine-tuned on top of fixed pre-trained features for one-shot tasks.
We show consistent improvements over a variety of one-shot benchmarks, outperforming recent state-of-the-art methods.
arXiv Detail & Related papers (2023-04-13T21:52:51Z) - Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
arXiv Detail & Related papers (2023-02-23T18:57:14Z) - Calibrating Segmentation Networks with Margin-based Label Smoothing [19.669173092632]
We provide a unifying constrained-optimization perspective of current state-of-the-art calibration losses.
These losses could be viewed as approximations of a linear penalty imposing equality constraints on logit distances.
We propose a simple and flexible generalization based on inequality constraints, which imposes a controllable margin on logit distances.
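One way to read the contrast above, sketched with assumed penalty forms rather than the paper's exact losses: an equality-style penalty pushes every logit distance toward zero, while the inequality generalization only penalizes distances beyond a controllable margin:

```python
import numpy as np

def equality_penalty(logits):
    """Equality-constraint view: a linear penalty pushing every
    distance to the max logit toward zero."""
    d = logits.max(axis=1, keepdims=True) - logits   # logit distances
    return d.sum(axis=1).mean()

def margin_penalty(logits, margin):
    """Inequality-constraint generalization: only distances that
    exceed the margin are penalized."""
    d = logits.max(axis=1, keepdims=True) - logits
    return np.maximum(d - margin, 0.0).sum(axis=1).mean()
```

A large margin leaves confident predictions untouched, whereas the equality penalty compresses all logits uniformly; the margin thus controls how aggressively the network is calibrated.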
arXiv Detail & Related papers (2022-09-09T20:21:03Z) - Fundamental Limits and Tradeoffs in Invariant Representation Learning [99.2368462915979]
Many machine learning applications involve learning representations that achieve two competing goals.
A minimax game-theoretic formulation captures a fundamental tradeoff between accuracy and invariance.
We provide an information-theoretic analysis of this general and important problem under both classification and regression settings.
arXiv Detail & Related papers (2020-12-19T15:24:04Z) - Topological Regularization via Persistence-Sensitive Optimization [10.29838087001588]
A key tool in machine learning and statistics, regularization is used to reduce overfitting.
We propose a method that builds on persistence-sensitive simplification and translates required changes to the persistence diagram into changes on large subsets of the domain.
This approach enables a faster and more precise topological regularization, the benefits of which we illustrate.
arXiv Detail & Related papers (2020-11-10T18:19:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.