Estimating and Controlling for Equalized Odds via Sensitive Attribute Predictors
- URL: http://arxiv.org/abs/2207.12497v4
- Date: Fri, 9 Jun 2023 00:06:23 GMT
- Title: Estimating and Controlling for Equalized Odds via Sensitive Attribute Predictors
- Authors: Beepul Bharti, Paul Yi, Jeremias Sulam
- Abstract summary: We study the well-known \emph{equalized odds} (EOD) definition of fairness.
In a setting without sensitive attributes, we first provide tight and computable upper bounds for the EOD violation of a predictor.
We demonstrate how one can provably control the worst-case EOD by a new post-processing correction method.
- Score: 7.713240800142863
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As the use of machine learning models in real world high-stakes decision
settings continues to grow, it is highly important that we are able to audit
and control for any potential fairness violations these models may exhibit
towards certain groups. To do so, one naturally requires access to sensitive
attributes, such as demographics, gender, or other potentially sensitive
features that determine group membership. Unfortunately, in many settings, this
information is often unavailable. In this work, we study the well-known
\emph{equalized odds} (EOD) definition of fairness. In a setting without
sensitive attributes, we first provide tight and computable upper bounds for
the EOD violation of a predictor. These bounds precisely reflect the worst
possible EOD violation. Second, we demonstrate how one can provably control the
worst-case EOD by a new post-processing correction method. Our results
characterize when directly controlling for EOD with respect to the predicted
sensitive attributes is -- and when it is not -- optimal for
controlling worst-case EOD. Our results hold under assumptions that are milder
than previous works, and we illustrate these results with experiments on
synthetic and real datasets.
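For concreteness, the standard equalized odds violation for a binary predictor, label, and binary sensitive attribute is the larger of the group gaps in true- and false-positive rates (generic notation, not copied from the paper):

```latex
% EOD violation of a binary predictor \hat{Y} with respect to a binary
% sensitive attribute A: the larger of the TPR gap (y = 1) and FPR gap (y = 0).
\Delta_{\mathrm{EOD}}(\hat{Y}) \;=\; \max_{y \in \{0,1\}}
  \left| \Pr\!\left(\hat{Y}=1 \mid Y=y,\, A=0\right)
       - \Pr\!\left(\hat{Y}=1 \mid Y=y,\, A=1\right) \right|
```

And a minimal sketch of the auditing setting the abstract describes, assuming only a sensitive-attribute predictor is available: measure the EOD gap against the predicted attribute and inflate it by a slack driven by the attribute predictor's error rate. The slack used below is a crude illustration; the paper derives tight bounds that this sketch does not reproduce.

```python
import numpy as np

def eod_gap(y_true, y_pred, group):
    """Empirical EOD violation: max of TPR and FPR gaps between two groups."""
    gaps = []
    for y in (0, 1):  # y=1 gives the TPR gap, y=0 the FPR gap
        mask = y_true == y
        r0 = y_pred[mask & (group == 0)].mean()
        r1 = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(r0 - r1))
    return max(gaps)

def worst_case_eod_estimate(y_true, y_pred, group_hat, attr_error_rate):
    """Illustrative only (NOT the paper's tight bound): the EOD gap measured
    against predicted attributes, inflated by a slack term driven by the
    attribute predictor's error rate."""
    proxy_gap = eod_gap(y_true, y_pred, group_hat)
    slack = 2.0 * attr_error_rate  # crude placeholder slack
    return min(1.0, proxy_gap + slack)

# toy usage with synthetic data
rng = np.random.default_rng(0)
n = 10_000
a = rng.integers(0, 2, n)                          # true, unobserved attribute
a_hat = np.where(rng.random(n) < 0.9, a, 1 - a)    # 90%-accurate predictor of A
y = rng.integers(0, 2, n)
yhat = (rng.random(n) < 0.4 + 0.2 * a * y).astype(int)  # group-biased predictor
print("gap vs. true A:     ", round(eod_gap(y, yhat, a), 3))
print("worst-case estimate:", round(worst_case_eod_estimate(y, yhat, a_hat, 0.1), 3))
```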
Related papers
- Hi-ALPS -- An Experimental Robustness Quantification of Six LiDAR-based Object Detection Systems for Autonomous Driving [49.64902130083662]
3D object detection systems (OD) play a key role in the driving decisions of autonomous vehicles.
Adversarial examples are small, sometimes sophisticated perturbations of the input data that change, i.e. falsify, the OD's predictions.
We quantify the robustness of six state-of-the-art 3D OD systems under different types of perturbations.
arXiv Detail & Related papers (2025-03-21T14:17:02Z)
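A toy version of the perturbation-robustness protocol the Hi-ALPS entry above describes, with a hypothetical black-box `detect` function standing in for a real 3D object detector and Gaussian jitter standing in for the paper's perturbation types:

```python
import numpy as np

def gaussian_perturb(points, sigma):
    """Jitter each LiDAR point's (x, y, z) coordinates with Gaussian noise."""
    return points + np.random.normal(0.0, sigma, size=points.shape)

def robustness_curve(detect, points, sigmas, trials=20):
    """Fraction of trials where the detection outcome is unchanged,
    per noise level; a crude robustness score for one scene."""
    baseline = detect(points)
    curve = {}
    for sigma in sigmas:
        unchanged = sum(
            detect(gaussian_perturb(points, sigma)) == baseline
            for _ in range(trials)
        )
        curve[sigma] = unchanged / trials
    return curve

# toy usage: a fake detector that thresholds the mean height of a point cluster
fake_detect = lambda pts: "pedestrian" if pts[:, 2].mean() > 0.8 else "none"
scene = np.random.rand(500, 3) * [10, 10, 1.7]   # hypothetical point cloud
print(robustness_curve(fake_detect, scene, sigmas=[0.01, 0.05, 0.2]))
```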
- Resultant: Incremental Effectiveness on Likelihood for Unsupervised Out-of-Distribution Detection [63.93728560200819]
Unsupervised out-of-distribution (U-OOD) detection aims to identify OOD data samples using a detector trained solely on unlabeled in-distribution (ID) data.
Recent studies have developed various detectors based on deep generative models (DGMs) that move beyond plain likelihood.
We apply two techniques for each direction, specifically post-hoc prior and dataset entropy-mutual calibration.
Experimental results demonstrate that the Resultant could be a new state-of-the-art U-OOD detector.
arXiv Detail & Related papers (2024-09-05T02:58:13Z)
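For contrast with the Resultant entry above, a plain-likelihood U-OOD baseline of the kind such work moves beyond: fit a density model on unlabeled ID data and flag low-likelihood samples. The Gaussian density and quantile threshold are simplifying assumptions; the paper's post-hoc prior and entropy-mutual calibration are not reproduced here.

```python
import numpy as np

class LikelihoodOOD:
    """Plain-likelihood U-OOD baseline: a multivariate Gaussian fitted to
    unlabeled ID data; samples with unusually low likelihood are flagged OOD."""

    def fit(self, x_id, quantile=0.95):
        self.mu = x_id.mean(axis=0)
        self.cov = np.cov(x_id, rowvar=False) + 1e-6 * np.eye(x_id.shape[1])
        self.prec = np.linalg.inv(self.cov)
        scores = self.score(x_id)
        self.threshold = np.quantile(scores, quantile)  # calibrated on ID only
        return self

    def score(self, x):
        """Squared Mahalanobis distance; monotone in negative log-likelihood."""
        d = x - self.mu
        return np.einsum("ij,jk,ik->i", d, self.prec, d)

    def is_ood(self, x):
        return self.score(x) > self.threshold

# toy usage: ID data near the origin, OOD data shifted away
rng = np.random.default_rng(0)
detector = LikelihoodOOD().fit(rng.normal(0, 1, (2000, 8)))
print(detector.is_ood(rng.normal(0, 1, (5, 8))))   # mostly False
print(detector.is_ood(rng.normal(6, 1, (5, 8))))   # mostly True
```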
- Fairness Without Harm: An Influence-Guided Active Sampling Approach [32.173195437797766]
We aim to train models that mitigate group fairness disparity without causing harm to model accuracy.
Current data acquisition methods, such as fair active learning approaches, typically require annotating sensitive attributes.
We propose a tractable active data sampling algorithm that does not rely on training group annotations.
arXiv Detail & Related papers (2024-02-20T07:57:38Z)
- Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z)
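A speculative sketch of the general flavour of the Fair-CDA idea as summarized above: synthesize training points along paths between cross-group neighbours, so a model can be regularized on those transitions. The nearest-neighbour matching and interpolation grid here are illustrative guesses, not the paper's actual augmentation.

```python
import numpy as np

def cross_group_augment(x, group, alphas=(0.25, 0.5, 0.75)):
    """For each sample, find its nearest neighbour in the *other* group and
    emit convex combinations along the path between the two feature vectors."""
    augmented = []
    for g in (0, 1):
        src, dst = x[group == g], x[group == 1 - g]
        # nearest cross-group neighbour for every source point
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        partner = dst[d2.argmin(axis=1)]
        for a in alphas:
            augmented.append((1 - a) * src + a * partner)
    return np.vstack(augmented)

# toy usage: two slightly shifted Gaussian groups
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(1, 1, (50, 4))])
g = np.repeat([0, 1], 50)
print(cross_group_augment(x, g).shape)  # (300, 4): 3 alphas x 100 sources
```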
- Delving into Identify-Emphasize Paradigm for Combating Unknown Bias [52.76758938921129]
We propose an effective bias-conflicting scoring method (ECS) to boost the identification accuracy.
We also propose gradient alignment (GA) to balance the contributions of the mined bias-aligned and bias-conflicting samples.
Experiments are conducted on multiple datasets in various settings, demonstrating that the proposed solution can mitigate the impact of unknown biases.
arXiv Detail & Related papers (2023-02-22T14:50:24Z)
- Counterfactual Reasoning for Bias Evaluation and Detection in a Fairness under Unawareness setting [6.004889078682389]
Current AI regulations require discarding sensitive features in the algorithm's decision-making process to prevent unfair outcomes.
We propose a way to reveal the potential hidden bias of a machine learning model that can persist even when sensitive features are discarded.
arXiv Detail & Related papers (2023-02-16T10:36:18Z)
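One generic way to surface the hidden bias discussed in the entry above: train a model with the sensitive feature discarded, then probe whether that feature can be recovered from the model's outputs. High probe accuracy suggests proxies are carrying the discarded feature. This probe is a standard diagnostic, not the paper's counterfactual method; all data here is synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
a = rng.integers(0, 2, n)                 # sensitive attribute (to be discarded)
proxy = a + rng.normal(0, 0.5, n)         # feature strongly correlated with a
x = np.column_stack([proxy, rng.normal(0, 1, (n, 3))])
y = (proxy + rng.normal(0, 1, n) > 0.5).astype(int)

# model trained with the sensitive attribute discarded ("unawareness")
xtr, xte, ytr, yte, atr, ate = train_test_split(x, y, a, random_state=0)
model = LogisticRegression().fit(xtr, ytr)

# probe: can the discarded attribute be predicted from the model's scores?
scores_tr = model.predict_proba(xtr)[:, [1]]
scores_te = model.predict_proba(xte)[:, [1]]
probe = LogisticRegression().fit(scores_tr, atr)
print("probe accuracy recovering A:", round(probe.score(scores_te, ate), 3))
# well above 0.5 here, because 'proxy' leaks A into the model's outputs
```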
- Simultaneous Improvement of ML Model Fairness and Performance by Identifying Bias in Data [1.76179873429447]
We propose a data preprocessing technique that can detect instances ascribing a specific kind of bias that should be removed from the dataset before training.
In particular, we claim that in problem settings where instances with similar features but different labels exist due to variation in protected attributes, an inherent bias is induced in the dataset.
arXiv Detail & Related papers (2022-10-24T13:04:07Z)
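A small sketch of the preprocessing check the entry above describes: flag pairs of instances with near-identical non-protected features but differing labels and differing protected attributes. The distance threshold and brute-force pairing are illustrative choices, not the paper's procedure.

```python
import numpy as np

def flag_biased_pairs(x, y, protected, tol=0.1):
    """Return index pairs (i, j) with near-identical non-protected features
    but different labels AND different protected attributes: candidate
    instances of label bias induced by the protected attribute."""
    flagged = []
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            if d2[i, j] < tol ** 2 and y[i] != y[j] and protected[i] != protected[j]:
                flagged.append((i, j))
    return flagged

# toy usage: two near-duplicates differing only in label and protected group
x = np.array([[0.0, 1.0], [0.01, 1.0], [3.0, 3.0]])
y = np.array([1, 0, 1])
p = np.array([0, 1, 0])
print(flag_biased_pairs(x, y, p))  # [(0, 1)]
```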
- Mitigating Algorithmic Bias with Limited Annotations [65.060639928772]
When sensitive attributes are not disclosed or available, a small part of the training data must be manually annotated to mitigate bias.
We propose Active Penalization Of Discrimination (APOD), an interactive framework to guide the limited annotations towards maximally eliminating the effect of algorithmic bias.
APOD shows comparable performance to fully annotated bias mitigation, which demonstrates that APOD could benefit real-world applications when sensitive information is limited.
arXiv Detail & Related papers (2022-07-20T16:31:19Z)
- Partial Identification with Noisy Covariates: A Robust Optimization Approach [94.10051154390237]
Causal inference from observational datasets often relies on measuring and adjusting for covariates.
We show that this robust optimization approach can extend a wide range of causal adjustment methods to perform partial identification.
Across synthetic and real datasets, we find that this approach provides ATE bounds with a higher coverage probability than existing methods.
arXiv Detail & Related papers (2022-02-22T04:24:26Z)
- Robustness and Accuracy Could Be Reconcilable by (Proper) Definition [109.62614226793833]
The trade-off between robustness and accuracy has been widely studied in the adversarial literature.
We find that it may stem from the improperly defined robust error, which imposes an inductive bias of local invariance.
By definition, the proposed self-consistent robust error (SCORE) facilitates the reconciliation between robustness and accuracy, while still handling the worst-case uncertainty.
arXiv Detail & Related papers (2022-02-21T10:36:09Z)
- Better sampling in explanation methods can prevent dieselgate-like deception [0.0]
Interpretability of prediction models is necessary to determine their biases and causes of errors.
Popular techniques, such as IME, LIME, and SHAP, use perturbation of instance features to explain individual predictions.
We show that the improved sampling increases the robustness of LIME and SHAP, while the previously untested IME method is already the most robust of all.
arXiv Detail & Related papers (2021-01-26T13:41:37Z)
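To make the perturbation-based explanation setup from the dieselgate entry concrete, here is a bare-bones LIME-style local surrogate plus a stability check across reruns. Real LIME/SHAP sampling, and the paper's improved sampler, differ from this simple Gaussian scheme; it is only a sketch of the general mechanism.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_like_explanation(model, x0, n_samples=500, sigma=0.3, seed=0):
    """Fit a locally weighted linear surrogate around x0 on Gaussian
    perturbations; the surrogate's coefficients serve as attributions."""
    rng = np.random.default_rng(seed)
    xs = x0 + rng.normal(0, sigma, (n_samples, x0.size))
    ys = model(xs)
    w = np.exp(-((xs - x0) ** 2).sum(1) / (2 * sigma**2))  # locality kernel
    return Ridge(alpha=1.0).fit(xs, ys, sample_weight=w).coef_

def explanation_stability(model, x0, runs=10):
    """Mean pairwise cosine similarity of attributions across reruns;
    low values indicate explanations an adversary could manipulate."""
    es = np.array([lime_like_explanation(model, x0, seed=s) for s in range(runs)])
    es /= np.linalg.norm(es, axis=1, keepdims=True)
    sims = es @ es.T
    return sims[np.triu_indices(runs, k=1)].mean()

# toy usage: a nonlinear black box, explained around one point
black_box = lambda x: np.sin(x[:, 0]) + 2.0 * x[:, 1]
x0 = np.array([0.5, -1.0])
print("attributions:", lime_like_explanation(black_box, x0).round(2))
print("stability:   ", round(explanation_stability(black_box, x0), 3))
```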
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.