SLIDE: a surrogate fairness constraint to ensure fairness consistency
- URL: http://arxiv.org/abs/2202.03165v1
- Date: Mon, 7 Feb 2022 13:50:21 GMT
- Title: SLIDE: a surrogate fairness constraint to ensure fairness consistency
- Authors: Kunwoong Kim, Ilsang Ohn, Sara Kim, and Yongdai Kim
- Abstract summary: We propose a new surrogate fairness constraint called SLIDE, which is computationally feasible and achieves a fast convergence rate.
Numerical experiments confirm that SLIDE works well for various benchmark datasets.
- Score: 1.3649494534428745
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As they have a vital effect on social decision making, AI algorithms should
be not only accurate but also fair. Among various algorithms for fair
AI, learning a prediction model by minimizing the empirical risk (e.g.,
cross-entropy) subject to a given fairness constraint has received much
attention. To avoid computational difficulty, however, a given fairness
constraint is replaced by a surrogate fairness constraint as the 0-1 loss is
replaced by a convex surrogate loss for classification problems. In this paper,
we investigate the validity of existing surrogate fairness constraints and
propose a new surrogate fairness constraint called SLIDE, which is
computationally feasible and asymptotically valid in the sense that the learned
model satisfies the fairness constraint asymptotically and achieves a fast
convergence rate. Numerical experiments confirm that SLIDE works well for
various benchmark datasets.
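The abstract describes the generic recipe: minimize the empirical risk subject to a fairness constraint whose 0-1 indicator has been replaced by a smooth surrogate. The paper's own SLIDE construction is not reproduced here; the following is a minimal sketch of that recipe under simplifying assumptions (a demographic-parity-style constraint relaxed with a sigmoid and enforced as a penalty, logistic loss, plain gradient descent):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fair_logreg(X, y, s, lam=1.0, lr=0.1, n_iter=2000):
    """Logistic regression minimizing empirical risk plus a penalty on a
    smooth surrogate of the demographic-parity gap. The exact constraint
    |P(f(X)>0 | s=1) - P(f(X)>0 | s=0)| <= eps uses the 0-1 indicator;
    here the indicator is relaxed to a sigmoid. Generic illustration only,
    not the SLIDE surrogate from the paper."""
    n, d = X.shape
    w = np.zeros(d)
    g1, g0 = s == 1, s == 0
    for _ in range(n_iter):
        p = sigmoid(X @ w)
        grad_risk = X.T @ (p - y) / n            # gradient of mean logistic loss
        gap = p[g1].mean() - p[g0].mean()        # surrogate DP gap
        dp = p * (1.0 - p)                       # derivative of the sigmoid
        grad_gap = (X[g1] * dp[g1][:, None]).mean(axis=0) \
                 - (X[g0] * dp[g0][:, None]).mean(axis=0)
        w -= lr * (grad_risk + lam * np.sign(gap) * grad_gap)
    return w

# Toy usage: labels correlate with the sensitive attribute s.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
s = (rng.random(200) < 0.5).astype(int)
y = (X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=200) > 0).astype(int)
w = train_fair_logreg(X, y, s, lam=2.0)
p = sigmoid(X @ w)
print("surrogate DP gap:", abs(p[s == 1].mean() - p[s == 0].mean()))
```

The paper's contribution is the specific choice of surrogate: per the abstract, SLIDE is constructed so that the learned model satisfies the fairness constraint asymptotically with a fast convergence rate, a guarantee a generic sigmoid relaxation like the one above does not provide.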
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Flexible Fairness-Aware Learning via Inverse Conditional Permutation [0.0]
We introduce an in-processing fairness-aware learning approach, FairICP, which integrates adversarial learning with a novel inverse conditional permutation scheme.
We show that FairICP offers a theoretically justified, flexible, and efficient scheme to promote equalized odds under fairness conditions described by complex and multidimensional sensitive attributes.
arXiv Detail & Related papers (2024-04-08T16:57:44Z)
- Understanding Fairness Surrogate Functions in Algorithmic Fairness [21.555040357521907]
We show that there is a surrogate-fairness gap between the fairness definition and the fairness surrogate function.
We elaborate a novel and general algorithm called Balanced Surrogate, which iteratively reduces the gap to mitigate unfairness.
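To make the notion of a surrogate-fairness gap concrete, here is a minimal, self-contained example (my own illustration, not the paper's Balanced Surrogate algorithm) comparing the exact demographic-parity disparity, computed from 0-1 decisions, with its sigmoid-surrogate counterpart for the same scores:

```python
import numpy as np

def surrogate_fairness_gap(scores, s):
    """Exact demographic-parity disparity (0-1 decisions) versus its
    sigmoid-surrogate version; their difference is one concrete
    instance of a surrogate-fairness gap."""
    soft = 1.0 / (1.0 + np.exp(-scores))
    hard = (scores > 0).astype(float)
    exact = abs(hard[s == 1].mean() - hard[s == 0].mean())
    surr = abs(soft[s == 1].mean() - soft[s == 0].mean())
    return exact, surr, abs(exact - surr)

rng = np.random.default_rng(0)
s = (rng.random(1000) < 0.5).astype(int)
scores = rng.normal(size=1000) + 0.8 * s   # group 1 scores are shifted up
exact, surr, gap = surrogate_fairness_gap(scores, s)
print(f"exact={exact:.3f}  surrogate={surr:.3f}  gap={gap:.3f}")
```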
arXiv Detail & Related papers (2023-10-17T12:40:53Z)
- On Regularization and Inference with Label Constraints [62.60903248392479]
We compare two strategies for encoding label constraints in a machine learning pipeline, regularization with constraints and constrained inference.
For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints.
For constrained inference, we show that it reduces the population risk by correcting a model's violation, and hence turns the violation into an advantage.
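A toy contrast of the two strategies, under an assumed scalar constraint "the prediction must be non-negative" (a hypothetical example, not the paper's setting):

```python
import numpy as np

def penalized_loss(pred, target, lam=1.0):
    """Regularization: penalize constraint violations during training,
    steering the learner away from models inconsistent with the constraint."""
    return (pred - target) ** 2 + lam * np.maximum(-pred, 0.0)

def constrained_inference(pred):
    """Constrained inference: correct violations at prediction time by
    projecting the output onto the feasible set."""
    return np.maximum(pred, 0.0)

pred, target = -0.4, 0.2
print(penalized_loss(pred, target))   # 0.36 squared error + 0.4 penalty = 0.76
print(constrained_inference(pred))    # projected prediction: 0.0
```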
arXiv Detail & Related papers (2023-07-08T03:39:22Z)
- Group Fairness with Uncertainty in Sensitive Attributes [34.608332397776245]
A fair predictive model is crucial to mitigate biased decisions against minority groups in high-stakes applications.
We propose a bootstrap-based algorithm that achieves the target level of fairness despite the uncertainty in sensitive attributes.
Our algorithm is applicable to both discrete and continuous sensitive attributes and is effective in real-world classification and regression tasks.
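The paper's algorithm is not reproduced here; as a loose sketch of the bootstrap idea under an assumed setup where each example carries only a probability that its sensitive attribute equals 1, one can resample attributes and take an upper quantile of the resulting fairness gaps:

```python
import numpy as np

def bootstrap_dp_gap_ucb(yhat, s_prob, n_boot=1000, alpha=0.05, seed=0):
    """Upper confidence bound on the demographic-parity gap when the
    sensitive attribute is uncertain: each bootstrap round samples
    attributes from s_prob and recomputes the gap. A toy stand-in for
    the paper's bootstrap-based algorithm."""
    rng = np.random.default_rng(seed)
    gaps = []
    for _ in range(n_boot):
        s = rng.random(len(s_prob)) < s_prob
        if s.any() and (~s).any():
            gaps.append(abs(yhat[s].mean() - yhat[~s].mean()))
    return np.quantile(gaps, 1.0 - alpha)

rng = np.random.default_rng(1)
yhat = (rng.random(400) < 0.6).astype(float)   # binary predictions
s_prob = rng.random(400)                       # uncertain attribute
print("95%-UCB on DP gap:", bootstrap_dp_gap_ucb(yhat, s_prob))
```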
arXiv Detail & Related papers (2023-02-16T04:33:00Z)
- Individual Fairness under Uncertainty [26.183244654397477]
Algorithmic fairness is an established area of machine learning (ML) research.
We propose an individual fairness measure and a corresponding algorithm that deal with the challenges of uncertainty arising from censorship in class labels.
We argue that this perspective represents a more realistic model of fairness research for real-world application deployment.
arXiv Detail & Related papers (2023-02-16T01:07:58Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in matching settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
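The paper's axioms and formulation are not reproduced here; the following toy LP (hypothetical numbers, scipy's linprog) illustrates the general shape of the approach: maximize expected utility over a distribution on candidates while keeping selection probabilities of similar-merit candidates close:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: one position, three candidates with estimated merits.
# Decision variable p is a distribution over candidates; the illustrative
# fairness constraint bounds p_i - p_j by the merit gap |u_i - u_j|.
utility = np.array([0.9, 0.8, 0.3])
merit_gap = np.abs(utility[:, None] - utility[None, :])

A_ub, b_ub = [], []
for i in range(3):
    for j in range(3):
        if i != j:
            row = np.zeros(3)
            row[i], row[j] = 1.0, -1.0       # p_i - p_j <= merit_gap[i, j]
            A_ub.append(row)
            b_ub.append(merit_gap[i, j])

res = linprog(-utility, A_ub=A_ub, b_ub=b_ub,
              A_eq=[np.ones(3)], b_eq=[1.0], bounds=[(0.0, 1.0)] * 3)
print("fair selection distribution:", res.x)
```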
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Calibrated Data-Dependent Constraints with Exact Satisfaction Guarantees [46.94549066382216]
We consider the task of training machine learning models with data-dependent constraints.
We reformulate data-dependent constraints so that they are calibrated: enforcing the reformulated constraints guarantees that their expected value counterparts are satisfied with a user-prescribed probability.
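A simplified reading of the idea (my own Hoeffding-based sketch, assuming a constraint function bounded in [0, 1], not the paper's construction): shrink the empirical budget by a finite-sample margin so that the population constraint holds with the prescribed probability.

```python
import numpy as np

def calibrated_constraint_holds(g_values, eps, delta):
    """Enforce mean(g) + margin <= eps so that E[g] <= eps holds with
    probability >= 1 - delta, using a Hoeffding margin for g in [0, 1].
    A simplified illustration of calibrating a data-dependent constraint."""
    n = len(g_values)
    margin = np.sqrt(np.log(1.0 / delta) / (2.0 * n))
    return g_values.mean() + margin <= eps

rng = np.random.default_rng(0)
g = 0.4 * rng.random(500)   # empirical constraint values in [0, 0.4]
print(calibrated_constraint_holds(g, eps=0.3, delta=0.05))
```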
arXiv Detail & Related papers (2023-01-15T21:41:40Z)
- Stochastic Methods for AUC Optimization subject to AUC-based Fairness Constraints [51.12047280149546]
A direct approach for obtaining a fair predictive model is to train the model through optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
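As a concrete instance of an AUC-based fairness quantity (a minimal sketch, not the paper's stochastic optimization method), one can compare within-group AUCs, i.e., how well positives are ranked above negatives inside each protected group:

```python
import numpy as np

def auc(pos, neg):
    """Empirical AUC: fraction of (positive, negative) pairs ranked correctly."""
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0) + 0.5 * (diff == 0)).mean()

def within_group_auc_gap(scores, y, s):
    """Absolute difference between the two groups' within-group AUCs,
    one simple AUC-based fairness metric."""
    aucs = []
    for g in (0, 1):
        m = s == g
        aucs.append(auc(scores[m & (y == 1)], scores[m & (y == 0)]))
    return abs(aucs[0] - aucs[1])

rng = np.random.default_rng(0)
s = (rng.random(600) < 0.5).astype(int)
y = (rng.random(600) < 0.5).astype(int)
scores = rng.normal(size=600) + y + 0.4 * s * y   # ranking favors group 1
print("AUC fairness gap:", within_group_auc_gap(scores, y, s))
```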
arXiv Detail & Related papers (2022-12-23T22:29:08Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert Spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
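FairCOCCO's exact operator is not reproduced here; a related kernel dependence measure from the same family, an empirical HSIC between predictions and the sensitive attribute, gives the flavor (my own stand-in, with an RBF kernel assumed):

```python
import numpy as np

def rbf_gram(x, gamma=1.0):
    """RBF Gram matrix K_ij = exp(-gamma * (x_i - x_j)^2) for 1-D inputs."""
    return np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)

def hsic(preds, sens, gamma=1.0):
    """Biased empirical HSIC, trace(K H L H) / (n-1)^2 with centering
    matrix H; small values indicate weak statistical dependence between
    predictions and the sensitive attribute. A stand-in for the kernel
    cross-covariance measure FairCOCCO builds on."""
    n = len(preds)
    K, L = rbf_gram(preds, gamma), rbf_gram(sens, gamma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
s = (rng.random(300) < 0.5).astype(float)
preds = rng.normal(size=300) + 0.8 * s   # predictions depend on s
print("HSIC dependence:", hsic(preds, s))
```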
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Identifying Best Fair Intervention [7.563864405505623]
We study the problem of best arm identification with a fairness constraint in a given causal model.
The problem is motivated by ensuring fairness on an online marketplace.
arXiv Detail & Related papers (2021-11-08T04:36:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.