FEAMOE: Fair, Explainable and Adaptive Mixture of Experts
- URL: http://arxiv.org/abs/2210.04995v1
- Date: Mon, 10 Oct 2022 20:02:02 GMT
- Title: FEAMOE: Fair, Explainable and Adaptive Mixture of Experts
- Authors: Shubham Sharma, Jette Henderson, Joydeep Ghosh
- Abstract summary: We propose FEAMOE, a "mixture-of-experts" inspired framework aimed at learning fairer, more explainable/interpretable models.
We show that our framework as applied to a mixture of linear experts is able to perform comparably to neural networks in terms of accuracy while producing fairer models.
We also prove that the proposed framework allows for producing fast Shapley value explanations.
- Score: 9.665417053344614
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Three key properties that are desired of trustworthy machine learning models
deployed in high-stakes environments are fairness, explainability, and an
ability to account for various kinds of "drift". While drifts in model
accuracy, for example due to covariate shift, have been widely investigated,
drifts in fairness metrics over time remain largely unexplored. In this paper,
we propose FEAMOE, a novel "mixture-of-experts" inspired framework aimed at
learning fairer, more explainable/interpretable models that can also rapidly
adjust to drifts in both the accuracy and the fairness of a classifier. We
illustrate our framework for three popular fairness measures and demonstrate
how drift can be handled with respect to these fairness constraints.
Experiments on multiple datasets show that our framework as applied to a
mixture of linear experts is able to perform comparably to neural networks in
terms of accuracy while producing fairer models. We then use the large-scale
HMDA dataset and show that while various models trained on HMDA demonstrate
drift with respect to both accuracy and fairness, FEAMOE can ably handle these
drifts with respect to all the considered fairness measures and maintain model
accuracy as well. We also prove that the proposed framework allows for
producing fast Shapley value explanations, which makes computationally
efficient feature attribution based explanations of model decisions readily
available via FEAMOE.
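Since the abstract describes a softmax-gated mixture of linear experts trained under a fairness constraint, with feature attributions available cheaply because each expert is linear, a minimal sketch may help make the idea concrete. The code below is an illustration only, not the authors' implementation: it assumes a demographic-parity penalty as the fairness term and uses the standard closed-form Shapley attribution of a linear model, w_j * (x_j - E[x_j]) under a feature-independence assumption, combined across experts with the gate weights. All class and function names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearMoE(nn.Module):
    """Softmax-gated mixture of linear experts for binary classification.
    A minimal sketch inspired by the FEAMOE abstract; not the authors' code."""
    def __init__(self, n_features: int, n_experts: int = 4):
        super().__init__()
        self.experts = nn.Linear(n_features, n_experts)   # one logit per linear expert
        self.gate = nn.Linear(n_features, n_experts)      # gating network

    def forward(self, x):
        gate_w = torch.softmax(self.gate(x), dim=-1)       # (B, E)
        expert_logits = self.experts(x)                    # (B, E)
        return (gate_w * expert_logits).sum(-1), gate_w    # mixed logit, gate weights

def demographic_parity_gap(logits, group):
    """|E[score | A=1] - E[score | A=0]| on sigmoid outputs (soft DP proxy)."""
    p = torch.sigmoid(logits)
    return (p[group == 1].mean() - p[group == 0].mean()).abs()

def training_step(model, opt, x, y, a, lam=1.0):
    """One step of accuracy plus fairness-penalized training (illustrative only)."""
    opt.zero_grad()
    logits, _ = model(x)
    loss = F.binary_cross_entropy_with_logits(logits, y.float())
    loss = loss + lam * demographic_parity_gap(logits, a)
    loss.backward()
    opt.step()
    return loss.item()

def linear_shapley(model, x, x_mean):
    """Closed-form attributions for a mixture of linear experts.
    For a single linear model with weights w, the Shapley value of feature j
    (independent-features baseline) is w_j * (x_j - mean_j); here we take the
    gate-weighted combination of the per-expert attributions."""
    with torch.no_grad():
        _, gate_w = model(x)                                      # (B, E)
        W = model.experts.weight                                  # (E, d)
        per_expert = (x - x_mean).unsqueeze(1) * W.unsqueeze(0)   # (B, E, d)
        return (gate_w.unsqueeze(-1) * per_expert).sum(1)         # (B, d)

if __name__ == "__main__":
    torch.manual_seed(0)
    X = torch.randn(256, 10)
    a = (torch.rand(256) < 0.5).long()                            # sensitive attribute
    y = ((X[:, 0] + 0.5 * a.float() + 0.1 * torch.randn(256)) > 0).long()
    model = LinearMoE(10)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):
        training_step(model, opt, X, y, a, lam=2.0)
    print(linear_shapley(model, X[:3], X.mean(0)))
```

In the full framework, experts can also be added or re-weighted online as drift in accuracy or fairness is detected; the sketch omits that adaptive component.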
Related papers
- Enhancing Fairness in Neural Networks Using FairVIC [0.0]
Mitigating bias in automated decision-making systems, specifically deep learning models, is a critical challenge in achieving fairness.
We introduce FairVIC, an innovative approach designed to enhance fairness in neural networks by addressing inherent biases at the training stage.
We observe a significant improvement in fairness across all metrics tested, without a detrimental loss in the model's accuracy.
arXiv Detail & Related papers (2024-04-28T10:10:21Z)
- Achievable Fairness on Your Data With Utility Guarantees [16.78730663293352]
In machine learning fairness, training models that minimize disparity across different sensitive groups often leads to diminished accuracy.
We present a computationally efficient approach to approximate the fairness-accuracy trade-off curve tailored to individual datasets.
We introduce a novel methodology for quantifying uncertainty in our estimates, thereby providing practitioners with a robust framework for auditing model fairness.
arXiv Detail & Related papers (2024-02-27T00:59:32Z)
- Learning Fair Classifiers via Min-Max F-divergence Regularization [13.81078324883519]
We introduce a novel min-max F-divergence regularization framework for learning fair classification models.
We show that F-divergence measures possess convexity and differentiability properties.
We show that the proposed framework achieves state-of-the-art performance with respect to the trade-off between accuracy and fairness.
arXiv Detail & Related papers (2023-06-28T20:42:04Z)
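The entry above regularizes a classifier with an F-divergence between groups, estimated through a min-max formulation. The paper's exact objective is not reproduced here; the sketch below illustrates one common instance under stated assumptions: KL divergence (a member of the F-divergence family) between the two groups' score distributions, estimated with a small critic network via the Donsker-Varadhan variational bound, so the critic maximizes the bound while the classifier minimizes cross-entropy plus the estimated divergence.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class Critic(nn.Module):
    """Critic T(s) for the Donsker-Varadhan variational estimate of KL."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, s):
        return self.net(s.unsqueeze(-1)).squeeze(-1)

def dv_kl(critic, s_p, s_q):
    """Lower bound on KL(P || Q): E_P[T] - log E_Q[exp(T)]."""
    return critic(s_p).mean() - (torch.logsumexp(critic(s_q), dim=0) - math.log(len(s_q)))

def train_fair_classifier(X, y, a, lam=1.0, steps=400):
    """Alternating min-max training: the critic tightens the divergence estimate,
    the classifier trades accuracy against the estimated divergence between the
    score distributions of the two groups (illustrative sketch only)."""
    clf, critic = nn.Linear(X.shape[1], 1), Critic()
    opt_c = torch.optim.Adam(clf.parameters(), lr=1e-2)
    opt_t = torch.optim.Adam(critic.parameters(), lr=1e-2)
    for _ in range(steps):
        # Inner maximization over the critic (classifier scores detached).
        scores = torch.sigmoid(clf(X).squeeze(-1)).detach()
        opt_t.zero_grad()
        (-dv_kl(critic, scores[a == 1], scores[a == 0])).backward()
        opt_t.step()

        # Outer minimization over the classifier.
        logits = clf(X).squeeze(-1)
        scores = torch.sigmoid(logits)
        loss = F.binary_cross_entropy_with_logits(logits, y.float()) \
               + lam * dv_kl(critic, scores[a == 1], scores[a == 0])
        opt_c.zero_grad()
        loss.backward()
        opt_c.step()
    return clf

if __name__ == "__main__":
    torch.manual_seed(0)
    X = torch.randn(500, 8)
    a = (torch.rand(500) < 0.5).long()
    y = ((X[:, 0] + 0.8 * a.float() + 0.2 * torch.randn(500)) > 0).long()
    train_fair_classifier(X, y, a, lam=2.0)
```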
- Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction [50.62245481416744]
We present the first benchmark that simulates the evaluation of open information extraction models in the real world.
We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique.
We further refine the robustness metric: a model is judged to be robust only if its performance is consistently accurate over entire cliques.
arXiv Detail & Related papers (2023-05-23T12:05:09Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
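The entry above links distribution shift to model weight perturbation and proposes robust fairness regularization (RFR). The sketch below is a simplified, SAM-style approximation of that general idea rather than the paper's actual algorithm: it perturbs the weights along the ascent direction of a demographic-parity fairness loss and penalizes the loss evaluated at the perturbed weights; the radius `rho` and the penalty weight `lam` are illustrative hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def dp_gap(logits, a):
    """Soft demographic-parity gap used as the fairness loss."""
    p = torch.sigmoid(logits)
    return (p[a == 1].mean() - p[a == 0].mean()).abs()

def rfr_step(model, opt, X, y, a, rho=0.05, lam=1.0):
    """One training step penalizing the fairness loss at adversarially perturbed
    weights (a SAM-style approximation, illustrative only)."""
    # 1) Gradient of the fairness loss w.r.t. the current weights.
    fair = dp_gap(model(X).squeeze(-1), a)
    grads = torch.autograd.grad(fair, list(model.parameters()))
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12

    # 2) Perturb the weights in the ascent direction of the fairness loss.
    with torch.no_grad():
        for p_, g in zip(model.parameters(), grads):
            p_.add_(rho * g / norm)

    # 3) Task loss plus fairness loss, both evaluated at the perturbed weights.
    logits = model(X).squeeze(-1)
    loss = F.binary_cross_entropy_with_logits(logits, y.float()) + lam * dp_gap(logits, a)
    opt.zero_grad()
    loss.backward()

    # 4) Undo the perturbation, then apply the gradients from the perturbed point.
    with torch.no_grad():
        for p_, g in zip(model.parameters(), grads):
            p_.sub_(rho * g / norm)
    opt.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    X = torch.randn(400, 6)
    a = (torch.rand(400) < 0.5).long()
    y = ((X[:, 0] + 0.7 * a.float()) > 0).long()
    model = nn.Linear(6, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(200):
        rfr_step(model, opt, X, y, a)
```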
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
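The entry above trains DRO models with a parametric adversary that outputs likelihood ratios over examples. The sketch below shows the general alternating min-max pattern under my own assumptions, not the paper's specific techniques: a small network scores each example, the scores are softmax-normalized over the batch into weights, the adversary maximizes the reweighted loss minus a KL penalty toward uniform weights, and the classifier minimizes the loss reweighted by the frozen adversary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RatioNet(nn.Module):
    """Parametric adversary: an (unnormalized) log likelihood ratio per example."""
    def __init__(self, n_features, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def dro_step(clf, ratio, opt_clf, opt_ratio, X, y, tau=1.0):
    """One alternating step of parametric-likelihood-ratio DRO (illustrative)."""
    # Per-example losses under the current classifier (detached for the adversary).
    per_ex = F.binary_cross_entropy_with_logits(clf(X).squeeze(-1), y.float(), reduction="none")

    # Adversary: maximize the reweighted loss, minus a KL penalty toward uniform
    # weights so the implied distribution stays near the empirical one.
    w = torch.softmax(ratio(X), dim=0)
    kl_to_uniform = (w * torch.log(w * len(X) + 1e-12)).sum()
    adv_obj = (w * per_ex.detach()).sum() - tau * kl_to_uniform
    opt_ratio.zero_grad()
    (-adv_obj).backward()
    opt_ratio.step()

    # Classifier: minimize the loss reweighted by the (frozen) adversary.
    w = torch.softmax(ratio(X), dim=0).detach()
    per_ex = F.binary_cross_entropy_with_logits(clf(X).squeeze(-1), y.float(), reduction="none")
    clf_loss = (w * per_ex).sum()
    opt_clf.zero_grad()
    clf_loss.backward()
    opt_clf.step()
    return clf_loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    X = torch.randn(400, 10)
    y = (X[:, :2].sum(-1) > 0).long()
    clf, ratio = nn.Linear(10, 1), RatioNet(10)
    opt_c = torch.optim.Adam(clf.parameters(), lr=1e-2)
    opt_r = torch.optim.Adam(ratio.parameters(), lr=1e-2)
    for _ in range(200):
        dro_step(clf, ratio, opt_c, opt_r, X, y)
```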
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over the reweighted data set, where the sample weights are computed to balance model performance across different demographic groups.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
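The entry above computes sample weights from a validation set that carries sensitive attributes and then retrains on the reweighted data. FAIRIF itself uses influence functions; the sketch below substitutes a much cruder first-order proxy (alignment between each training example's loss gradient and the gradient of the validation group-loss gap) purely for illustration, followed by the stage-two weighted retraining.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def flat_grad(loss, model):
    """Flattened gradient of a scalar loss w.r.t. all model parameters."""
    g = torch.autograd.grad(loss, list(model.parameters()), retain_graph=True)
    return torch.cat([t.reshape(-1) for t in g])

def compute_sample_weights(model, Xtr, ytr, Xval, yval, aval, eta=2.0):
    """Stage 1: score each training example with a first-order, influence-like
    proxy and turn the scores into non-negative sample weights. A crude stand-in
    for FAIRIF's influence-function weights, for illustration only."""
    val_losses = F.binary_cross_entropy_with_logits(
        model(Xval).squeeze(-1), yval.float(), reduction="none")
    gap = (val_losses[aval == 0].mean() - val_losses[aval == 1].mean()).abs()
    g_gap = flat_grad(gap, model)

    scores = []
    for i in range(len(Xtr)):
        li = F.binary_cross_entropy_with_logits(
            model(Xtr[i:i + 1]).squeeze(-1), ytr[i:i + 1].float())
        # Positive score: up-weighting example i (to first order) shrinks the gap.
        scores.append(torch.dot(flat_grad(li, model), g_gap))
    s = torch.stack(scores)
    s = s / (s.abs().max() + 1e-12)
    return torch.clamp(1.0 + eta * s, min=0.0).detach()

def retrain_weighted(Xtr, ytr, weights, steps=300, lr=0.05):
    """Stage 2: minimize the loss over the reweighted training set."""
    model = nn.Linear(Xtr.shape[1], 1)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        per_ex = F.binary_cross_entropy_with_logits(
            model(Xtr).squeeze(-1), ytr.float(), reduction="none")
        ((weights * per_ex).sum() / weights.sum()).backward()
        opt.step()
    return model
```

A typical use would fit an initial model, call `compute_sample_weights` on held-out data with group labels, and pass the result to `retrain_weighted`; both function names are hypothetical.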
- FADE: FAir Double Ensemble Learning for Observable and Counterfactual Outcomes [0.0]
Methods for building fair predictors often involve tradeoffs between fairness and accuracy and between different fairness criteria.
We develop a flexible framework for fair ensemble learning that allows users to efficiently explore the fairness-accuracy space.
We show that, surprisingly, multiple unfairness measures can sometimes be minimized simultaneously with little impact on accuracy.
arXiv Detail & Related papers (2021-09-01T03:56:43Z)
- MixKD: Towards Efficient Distillation of Large-scale Language Models [129.73786264834894]
We propose MixKD, a data-agnostic distillation framework, to endow the resulting model with stronger generalization ability.
We prove from a theoretical perspective that under reasonable conditions MixKD gives rise to a smaller gap between the generalization error and the empirical error.
Experiments under a limited-data setting and ablation studies further demonstrate the advantages of the proposed approach.
arXiv Detail & Related papers (2020-11-01T18:47:51Z)
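The MixKD entry above distills a large model into a smaller one using mixup, so that distillation is not limited to the original training examples. The sketch below is a generic feature-vector version of that recipe, not the paper's implementation (which mixes token embeddings of language models): inputs are linearly interpolated, the student matches the teacher's temperature-softened predictions on the mixed inputs, and the supervised loss is interpolated with the same mixing coefficient.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mixkd_step(student, teacher, opt, x, y, alpha=0.4, temp=2.0, w_kd=1.0):
    """One MixKD-style step: mix up inputs, distill the teacher's predictions on
    the mixed inputs into the student, and keep a mixed supervised loss.
    (A generic feature-vector sketch; the original work mixes token embeddings.)"""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]

    with torch.no_grad():
        t_logits = teacher(x_mix)
    s_logits = student(x_mix)

    # Distillation: KL between temperature-softened student and teacher outputs.
    kd = F.kl_div(F.log_softmax(s_logits / temp, dim=-1),
                  F.softmax(t_logits / temp, dim=-1),
                  reduction="batchmean") * temp ** 2

    # Supervised loss interpolated the same way as the inputs.
    ce = lam * F.cross_entropy(s_logits, y) + (1.0 - lam) * F.cross_entropy(s_logits, y[perm])

    loss = ce + w_kd * kd
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    teacher = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
    student = nn.Linear(20, 3)                 # smaller model to distill into
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    X = torch.randn(128, 20)
    y = torch.randint(0, 3, (128,))
    for _ in range(50):
        mixkd_step(student, teacher, opt, X, y)
```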
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.