Fairness-enhancing mixed effects deep learning improves fairness on in-
and out-of-distribution clustered (non-iid) data
- URL: http://arxiv.org/abs/2310.03146v1
- Date: Wed, 4 Oct 2023 20:18:45 GMT
- Title: Fairness-enhancing mixed effects deep learning improves fairness on in-
and out-of-distribution clustered (non-iid) data
- Authors: Adam Wang, Son Nguyen, Albert Montillo
- Abstract summary: We present a mixed effects deep learning (MEDL) framework.
MEDL quantifies cluster-invariant fixed effects (FE) and cluster-specific random effects (RE).
We marry this MEDL with adversarial debiasing, which promotes equality-of-odds fairness across the FE, RE, and mixed-effects (ME) predictions for fairness-sensitive variables.
Our framework notably enhances fairness across all sensitive variables, increasing fairness by up to 82% for age, 43% for race, 86% for sex, and 27% for marital status.
- Score: 7.413980562174725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional deep learning (DL) suffers from two core problems. Firstly, it
assumes training samples are independent and identically distributed. However,
numerous real-world datasets group samples by shared measurements (e.g., study
participants or cells), violating this assumption. In these scenarios, DL can
show compromised performance, limited generalization, and interpretability
issues, coupled with cluster confounding that causes Type 1 and Type 2 errors. Secondly,
models are typically trained for overall accuracy, often neglecting
underrepresented groups and introducing biases in crucial areas like loan
approvals or determining health insurance rates; such biases can significantly
impact one's quality of life. To address both of these challenges
simultaneously, we present a mixed effects deep learning (MEDL) framework. MEDL
separately quantifies cluster-invariant fixed effects (FE) and cluster-specific
random effects (RE) through the introduction of: 1) a cluster adversary, which
encourages the learning of cluster-invariant FE; 2) a Bayesian neural network,
which quantifies the RE; and 3) a mixing function, which combines the FE and RE
into a mixed-effects (ME) prediction. We marry this MEDL with adversarial debiasing, which
promotes equality-of-odds fairness across FE, RE, and ME predictions for
fairness-sensitive variables. We evaluated our approach using three datasets:
two from census/finance focusing on income classification and one from
healthcare predicting hospitalization duration, a regression task. Our
framework notably enhances fairness across all sensitive variables, increasing
fairness by up to 82% for age, 43% for race, 86% for sex, and 27% for
marital status. Besides promoting fairness, our method maintains the robust
performance and clarity of MEDL, and it is versatile across dataset types and
tasks, making it broadly applicable. Our GitHub repository houses the
implementation.
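A minimal sketch of the architecture described above, assuming a PyTorch implementation; all module and variable names here are hypothetical (the authors' actual implementation lives in their GitHub repository):

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients going back."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

class BayesianREHead(nn.Module):
    """Random-effects head: per-cluster weights sampled via reparameterization."""
    def __init__(self, dim, n_clusters):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(n_clusters, dim))
        self.logvar = nn.Parameter(torch.full((n_clusters, dim), -3.0))
    def forward(self, z, cluster):
        mu, logvar = self.mu[cluster], self.logvar[cluster]
        w = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return (z * w).sum(-1, keepdim=True)   # cluster-specific RE prediction

class FairMEDL(nn.Module):
    def __init__(self, d_in, d_hid, n_clusters, n_sensitive):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.fe_head = nn.Linear(d_hid, 1)               # cluster-invariant FE
        self.cluster_adv = nn.Linear(d_hid, n_clusters)  # adversary 1: predicts cluster
        self.re_head = BayesianREHead(d_hid, n_clusters) # Bayesian RE
        self.mix = nn.Linear(2, 1)                       # mixing function: FE + RE -> ME
        self.fair_adv = nn.Linear(4, n_sensitive)        # adversary 2: predicts sensitive attr

    def forward(self, x, cluster, y, lam=1.0):
        z = self.encoder(x)
        fe = self.fe_head(z)
        re = self.re_head(z, cluster)
        me = self.mix(torch.cat([fe, re], dim=-1))
        # Adversary 1 sees reversed gradients, pushing z toward cluster invariance.
        cluster_logits = self.cluster_adv(grad_reverse(z, lam))
        # Adversary 2 sees the FE/RE/ME predictions plus the true label y, so
        # fooling it promotes equality of odds rather than demographic parity.
        adv_in = torch.cat([fe, re, me, y.float().view(-1, 1)], dim=-1)
        sens_logits = self.fair_adv(grad_reverse(adv_in, lam))
        return fe, re, me, cluster_logits, sens_logits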
Related papers
- Feature-Wise Mixing for Mitigating Contextual Bias in Predictive Supervised Learning [0.0]
This paper introduces a feature-wise mixing framework that mitigates contextual bias by redistributing feature representations across multiple contextual datasets.
It achieved an average bias reduction of 43.35% and a statistically significant decrease in mean squared error.
arXiv Detail & Related papers (2025-06-28T23:12:59Z)
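A toy sketch of one plausible reading of the redistribution step in the entry above, assuming NumPy; the helper name and the mixup-style blending are assumptions, not the paper's actual procedure:

import numpy as np

def feature_wise_mix(X, context, alpha=0.5, seed=0):
    """Blend each sample's features with a sample drawn from a different
    context, diluting context-specific feature distributions (an assumed
    reading of 'redistributing feature representations across contexts';
    assumes at least two distinct contexts)."""
    rng = np.random.default_rng(seed)
    X_mixed = X.astype(float).copy()
    for i in range(len(X)):
        donors = np.flatnonzero(context != context[i])  # samples from other contexts
        j = rng.choice(donors)
        X_mixed[i] = (1 - alpha) * X[i] + alpha * X[j]
    return X_mixed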
- Fair CoVariance Neural Networks [34.68621550644667]
We propose Fair coVariance Neural Networks (FVNNs), which perform graph convolutions on the covariance matrix for both fair and accurate predictions.
We prove that FVNNs are intrinsically fairer than analogous PCA approaches thanks to their stability in low sample regimes.
arXiv Detail & Related papers (2024-09-13T06:24:18Z)
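A short sketch of the coVariance-filter idea underlying the entry above, assuming PyTorch; this shows the generic covariance-based graph convolution, not FVNN's fairness-specific training:

import torch
import torch.nn as nn

class CovFilter(nn.Module):
    """Graph filter with the sample covariance C as the shift operator:
    y = sum_k h_k C^k x, the generic coVariance neural network building block."""
    def __init__(self, order=3):
        super().__init__()
        self.h = nn.Parameter(0.1 * torch.randn(order + 1))  # filter taps h_k

    def forward(self, x, C):
        # x: (batch, d) feature vectors; C: (d, d) sample covariance matrix
        out, xk = self.h[0] * x, x
        for k in range(1, len(self.h)):
            xk = xk @ C                  # one more application of C: C^k x
            out = out + self.h[k] * xk
        return torch.relu(out)

# C = torch.cov(X.T) for a training data matrix X of shape (n_samples, d)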
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- Multi-dimensional Fair Federated Learning [25.07463977553212]
Federated learning (FL) has emerged as a promising collaborative and secure paradigm for training a model from decentralized data.
Group fairness and client fairness are two dimensions of fairness that are important for FL.
We propose a method, called mFairFL, to achieve group fairness and client fairness simultaneously.
arXiv Detail & Related papers (2023-12-09T11:37:30Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Learning Fair Classifiers via Min-Max F-divergence Regularization [13.81078324883519]
We introduce a novel min-max F-divergence regularization framework for learning fair classification models.
We show that F-divergence measures possess convexity and differentiability properties.
We show that the proposed framework achieves state-of-the-art performance with respect to the trade-off between accuracy and fairness.
arXiv Detail & Related papers (2023-06-28T20:42:04Z)
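A compact sketch of a min-max divergence regularizer of the flavor in the entry above, assuming PyTorch and choosing KL as the F-divergence; the critic network and all names are assumptions:

import torch
import torch.nn as nn

# Critic T maps a model score to a scalar; it is trained to maximize the bound
# below (inner max), while the classifier minimizes task_loss + lambda * bound
# (outer min), shrinking the divergence between group-wise score distributions.
critic = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

def dv_kl_bound(scores_g0, scores_g1):
    """Donsker-Varadhan bound: KL(P0||P1) >= E_P0[T] - log E_P1[exp T].
    scores_g0 / scores_g1: model scores for each sensitive group, shape (n, 1)."""
    t0, t1 = critic(scores_g0), critic(scores_g1)
    log_mean_exp = torch.logsumexp(t1, dim=0) - torch.log(torch.tensor(float(len(t1))))
    return t0.mean() - log_mean_exp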
- FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to the sensitive attribute.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute normalized fairness improvement over accuracy drop.
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE.
arXiv Detail & Related papers (2023-03-15T02:22:07Z)
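A sketch of the "batch normalization adaptive to the sensitive attribute" idea from the entry above, assuming PyTorch, with one BN module per subgroup; FairAdaBN's exact parameter sharing between groups may differ:

import torch
import torch.nn as nn

class AdaptiveBN(nn.Module):
    """Per-subgroup batch normalization: each sensitive-attribute group gets its
    own statistics and affine parameters (assumes >1 sample per group per batch)."""
    def __init__(self, num_features, n_groups):
        super().__init__()
        self.bns = nn.ModuleList(nn.BatchNorm1d(num_features) for _ in range(n_groups))

    def forward(self, x, group):
        out = torch.empty_like(x)
        for g, bn in enumerate(self.bns):
            mask = group == g
            if mask.any():
                out[mask] = bn(x[mask])  # normalize each subgroup with its own BN
        return out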
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
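A sketch of fairness regularization under weight perturbation, in the spirit of the entry above, assuming PyTorch and a SAM-style two-step update; the function names and the exact perturbation rule are assumptions, not the paper's algorithm:

import torch

def rfr_step(model, opt, task_loss_fn, fairness_loss_fn, batch, rho=0.05, lam=1.0):
    """Ascend the fairness loss within an L2 ball of radius rho around the
    current weights, evaluate the total loss there, then restore and step."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(fairness_loss_fn(model, batch), params)
    norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)) + 1e-12
    with torch.no_grad():                     # step to the worst-case weights
        for p, g in zip(params, grads):
            p.add_(rho * g / norm)
    opt.zero_grad()
    loss = task_loss_fn(model, batch) + lam * fairness_loss_fn(model, batch)
    loss.backward()                           # gradients at the perturbed point
    with torch.no_grad():                     # undo the perturbation
        for p, g in zip(params, grads):
            p.sub_(rho * g / norm)
    opt.step()
    return loss.detach()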
- Learning Informative Representation for Fairness-aware Multivariate Time-series Forecasting: A Group-based Perspective [50.093280002375984]
Performance unfairness among variables widely exists in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Normalise for Fairness: A Simple Normalisation Technique for Fairness in Regression Machine Learning Problems [46.93320580613236]
We present a simple, yet effective method based on normalisation (FaiReg) for regression problems.
We compare it with two standard methods for fairness, namely data balancing and adversarial training.
The results show that FaiReg diminishes the effects of unfairness better than data balancing.
arXiv Detail & Related papers (2022-02-02T12:26:25Z)
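A sketch of a normalisation-style fix for regression fairness in the spirit of the entry above, assuming NumPy: standardize the target per sensitive group before training. This is an assumed simplification; FaiReg's actual normalisation may differ:

import numpy as np

def group_normalise_targets(y, group, eps=1e-12):
    """Remove group-specific mean and scale from the regression labels, so the
    model cannot encode the sensitive attribute through label shifts alone."""
    y_norm, stats = np.empty_like(y, dtype=float), {}
    for g in np.unique(group):
        m = group == g
        mu, sd = y[m].mean(), y[m].std() + eps
        y_norm[m] = (y[m] - mu) / sd
        stats[g] = (mu, sd)   # keep stats to de-normalize predictions later
    return y_norm, stats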
- Towards Multi-Objective Statistically Fair Federated Learning [1.2687030176231846]
Federated Learning (FL) has emerged in response to data ownership and privacy concerns.
We propose a new FL framework that is able to satisfy multiple objectives including various statistical fairness metrics.
arXiv Detail & Related papers (2022-01-24T19:22:01Z)
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted data set, where the sample weights are computed using influence functions with validation-set sensitive attributes.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
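A first-order sketch of influence-based sample reweighting of the kind the entry above describes, assuming PyTorch; FAIRIF's actual two-stage algorithm and weight formula are not reproduced here:

import torch

def influence_weights(per_sample_losses, val_fairness_loss, params):
    """Score each training sample by how much descending its loss would also
    descend a validation fairness loss (gradient inner product), then turn
    the scores into normalized sample weights. per_sample_losses must be
    scalar losses whose graphs are retained."""
    g_val = torch.autograd.grad(val_fairness_loss, params, retain_graph=True)
    scores = []
    for li in per_sample_losses:
        g_i = torch.autograd.grad(li, params, retain_graph=True)
        scores.append(sum((a * b).sum() for a, b in zip(g_i, g_val)))
    w = torch.clamp(torch.stack(scores), min=0.0)   # keep only helpful samples
    return w / (w.sum() + 1e-12)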
- Adversarial Learning for Counterfactual Fairness [15.302633901803526]
In recent years, fairness has become an important topic in the machine learning research community.
We propose to rely on an adversarial neural learning approach, which enables more powerful inference than MMD penalties.
Experiments show significant improvements in terms of counterfactual fairness for both the discrete and the continuous settings.
arXiv Detail & Related papers (2020-08-30T09:06:03Z)
- Deep F-measure Maximization for End-to-End Speech Understanding [52.36496114728355]
We propose a differentiable approximation to the F-measure and train the network with this objective using standard backpropagation.
We perform experiments on two standard fairness datasets (Adult, and Communities and Crime), and also on speech-to-intent detection on the ATIS dataset and speech-to-image concept classification on the Speech-COCO dataset.
In all four of these tasks, F-measure maximization yields improved micro-F1 scores, with absolute improvements of up to 8%, as compared to models trained with the cross-entropy loss function.
arXiv Detail & Related papers (2020-08-08T03:02:27Z)
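A minimal sketch of a differentiable F-measure surrogate of the kind the entry above proposes, assuming PyTorch and binary classification; the paper's exact approximation may differ:

import torch

def soft_f1_loss(logits, targets, eps=1e-8):
    """Replace the hard TP/FP/FN counts in F1 with their probabilistic (soft)
    versions, so the F-measure can be maximized by standard backpropagation."""
    p = torch.sigmoid(logits)
    tp = (p * targets).sum()
    fp = (p * (1 - targets)).sum()
    fn = ((1 - p) * targets).sum()
    return 1.0 - (2 * tp) / (2 * tp + fp + fn + eps)  # minimize 1 - soft-F1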
This list is automatically generated from the titles and abstracts of the papers in this site.