The Unbearable Weight of Massive Privilege: Revisiting Bias-Variance Trade-Offs in the Context of Fair Prediction
- URL: http://arxiv.org/abs/2302.08704v1
- Date: Fri, 17 Feb 2023 05:34:35 GMT
- Title: The Unbearable Weight of Massive Privilege: Revisiting Bias-Variance Trade-Offs in the Context of Fair Prediction
- Authors: Falaah Arif Khan, Julia Stoyanovich
- Abstract summary: We propose a conditional-iid (ciid) model that seeks to improve on the trade-offs made by a single model.
We empirically test our setup on the COMPAS and folktables datasets.
Our analysis suggests that there might be principled procedures and concrete real-world use cases under which conditional models are preferred.
- Score: 7.975779552420981
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper we revisit the bias-variance decomposition of model error from
the perspective of designing a fair classifier: we are motivated by the widely
held socio-technical belief that noise variance in large datasets in social
domains tracks demographic characteristics such as gender, race, disability,
etc. We propose a conditional-iid (ciid) model built from group-specific
classifiers that seeks to improve on the trade-offs made by a single model (iid
setting). We theoretically analyze the bias-variance decomposition of different
models in the Gaussian Mixture Model, and then empirically test our setup on
the COMPAS and folktables datasets. We instantiate the ciid model with two
procedures that improve "fairness" by conditioning out undesirable effects:
first, by conditioning directly on sensitive attributes, and second, by
clustering samples into groups and conditioning on cluster membership (blind to
protected group membership).
Our analysis suggests that there might be principled procedures and concrete
real-world use cases under which conditional models are preferred, and our
striking empirical results strongly indicate that non-iid settings, such as the
ciid setting proposed here, might be more suitable for big data applications in
social contexts.
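To make the setup concrete, the classical squared-error bias-variance decomposition below splits expected error into bias, variance, and irreducible noise; the paper's motivating belief is that the noise term differs systematically across demographic groups. This is the standard regression form, shown for orientation only; the classifier-specific decomposition analyzed in the paper may differ in its exact terms.

```latex
% Standard bias-variance-noise decomposition over training sets D and
% label noise; sigma^2(x) is the irreducible noise that the paper argues
% tracks demographic group membership.
\mathbb{E}_{D,\varepsilon}\!\left[\bigl(y - \hat{f}_D(x)\bigr)^2\right]
  = \underbrace{\bigl(\mathbb{E}_D[\hat{f}_D(x)] - f(x)\bigr)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_D\!\left[\bigl(\hat{f}_D(x) - \mathbb{E}_D[\hat{f}_D(x)]\bigr)^2\right]}_{\text{variance}}
  + \underbrace{\sigma^2(x)}_{\text{noise}}
```

A minimal sketch of the two ciid instantiations described in the abstract, on synthetic Gaussian-mixture data, follows. It is an interpretation of the setup rather than the authors' code, and all names (ciid_models, ciid_predict, etc.) are illustrative.

```python
# Sketch of iid vs. conditional-iid (ciid) classifiers on a synthetic
# Gaussian mixture. Illustrative only -- not the authors' released code.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two-group mixture in which group 1 has higher noise variance,
# mirroring the paper's motivating premise.
n = 2000
group = rng.integers(0, 2, size=n)  # sensitive attribute
X = rng.normal(loc=group[:, None], scale=1 + group[:, None], size=(n, 2))
y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) > group).astype(int)

# iid setting: a single classifier trained on the pooled data.
iid_model = LogisticRegression().fit(X, y)

# ciid, procedure 1: condition directly on the sensitive attribute,
# i.e., train one classifier per protected group.
ciid_models = {g: LogisticRegression().fit(X[group == g], y[group == g])
               for g in np.unique(group)}

def ciid_predict(X_new, group_new):
    """Route each sample to the classifier trained on its group."""
    preds = np.empty(len(X_new), dtype=int)
    for g, model in ciid_models.items():
        mask = group_new == g
        if mask.any():
            preds[mask] = model.predict(X_new[mask])
    return preds

# ciid, procedure 2 (blind variant): cluster samples on features alone and
# condition on cluster membership instead of protected group membership.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
blind_models = {c: LogisticRegression().fit(X[clusters.labels_ == c],
                                            y[clusters.labels_ == c])
                for c in np.unique(clusters.labels_)}
```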
Related papers
- MITA: Bridging the Gap between Model and Data for Test-time Adaptation [68.62509948690698]
Test-Time Adaptation (TTA) has emerged as a promising paradigm for enhancing the generalizability of models.
We propose MITA, a Meet-In-The-Middle approach that introduces energy-based optimization to encourage mutual adaptation of the model and the data from opposing directions.
arXiv Detail & Related papers (2024-10-12T07:02:33Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - On how to avoid exacerbating spurious correlations when models are
overparameterized [33.315813572333745]
We show that VS-loss learns a model that is fair towards minorities even when spurious features are strong.
Compared to previous works, our bounds hold for more general models, are non-asymptotic, and apply even in scenarios of extreme imbalance.
arXiv Detail & Related papers (2022-06-25T21:53:44Z) - Cross-model Fairness: Empirical Study of Fairness and Ethics Under Model Multiplicity [10.144058870887061]
We argue that individuals can be harmed when one predictor is chosen ad hoc from a group of equally well-performing models.
Our findings suggest that such unfairness can readily be found in real life and may be difficult to mitigate by technical means alone.
arXiv Detail & Related papers (2022-03-14T14:33:39Z) - FairIF: Boosting Fairness in Deep Learning via Influence Functions with
Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted data set, where the sample weights are computed via influence functions.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z) - Deconfounding Scores: Feature Representations for Causal Effect
Estimation with Weak Overlap [140.98628848491146]
We introduce deconfounding scores, which induce better overlap without biasing the target of estimation.
We show that deconfounding scores satisfy a zero-covariance condition that is identifiable in observed data.
In particular, we show that this technique could be an attractive alternative to standard regularizations.
arXiv Detail & Related papers (2021-04-12T18:50:11Z) - Characterizing Fairness Over the Set of Good Models Under Selective
Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z) - An Investigation of Why Overparameterization Exacerbates Spurious
Correlations [98.3066727301239]
We identify two key properties of the training data that drive this behavior.
We show how the inductive bias of models towards "memorizing" fewer examples can cause overparameterization to hurt.
arXiv Detail & Related papers (2020-05-09T01:59:13Z) - Fairness by Explicability and Adversarial SHAP Learning [0.0]
We propose a new definition of fairness that emphasises the role of an external auditor and model explicability.
We develop a framework for mitigating model bias using regularizations constructed from the SHAP values of an adversarial surrogate model.
We demonstrate our approaches using gradient and adaptive boosting on a synthetic dataset, the UCI Adult (Census) dataset, and a real-world credit scoring dataset.
arXiv Detail & Related papers (2020-03-11T14:36:34Z) - Counterfactual fairness: removing direct effects through regularization [0.0]
We propose a new definition of fairness that incorporates causality through the Controlled Direct Effect (CDE).
We develop regularizations to tackle classical fairness measures and present a causal regularization that satisfies our new fairness definition.
Our approach was found to mitigate unfairness in the predictions with only small reductions in model performance.
arXiv Detail & Related papers (2020-02-25T10:13:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.