FADE: FAir Double Ensemble Learning for Observable and Counterfactual
Outcomes
- URL: http://arxiv.org/abs/2109.00173v1
- Date: Wed, 1 Sep 2021 03:56:43 GMT
- Title: FADE: FAir Double Ensemble Learning for Observable and Counterfactual
Outcomes
- Authors: Alan Mishler, Edward Kennedy
- Abstract summary: Methods for building fair predictors often involve tradeoffs between fairness and accuracy and between different fairness criteria.
We develop a flexible framework for fair ensemble learning that allows users to efficiently explore the fairness-accuracy space.
We show that, surprisingly, multiple unfairness measures can sometimes be minimized simultaneously with little impact on accuracy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Methods for building fair predictors often involve tradeoffs between fairness
and accuracy and between different fairness criteria, but the nature of these
tradeoffs varies. Recent work seeks to characterize these tradeoffs in specific
problem settings, but these methods often do not accommodate users who wish to
improve the fairness of an existing benchmark model without sacrificing
accuracy, or vice versa. These results are also typically restricted to
observable accuracy and fairness criteria. We develop a flexible framework for
fair ensemble learning that allows users to efficiently explore the
fairness-accuracy space or to improve the fairness or accuracy of a benchmark
model. Our framework can simultaneously target multiple observable or
counterfactual fairness criteria, and it enables users to combine a large
number of previously trained and newly trained predictors. We provide
theoretical guarantees that our estimators converge at fast rates. We apply our
method on both simulated and real data, with respect to both observable and
counterfactual accuracy and fairness criteria. We show that, surprisingly,
multiple unfairness measures can sometimes be minimized simultaneously with
little impact on accuracy, relative to unconstrained predictors or existing
benchmark models.
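The core idea of the abstract — combining previously trained predictors into an ensemble whose weights trade accuracy against an unfairness measure — can be illustrated with a minimal sketch. The function name, the demographic-parity penalty, and the plain gradient descent below are illustrative assumptions for exposition; they are not FADE's actual doubly robust estimator or its convergence-rate machinery.

```python
import numpy as np

def fair_ensemble_weights(preds, y, a, lam=1.0, steps=500, lr=0.05):
    """Learn ensemble weights over base predictors (hypothetical sketch).

    preds: (n, K) array of base-predictor outputs
    y:     (n,) regression targets
    a:     (n,) binary sensitive attribute
    lam:   strength of the demographic-parity penalty
    """
    n, K = preds.shape
    w = np.full(K, 1.0 / K)  # start from the uniform ensemble
    for _ in range(steps):
        ens = preds @ w
        # gradient of mean squared error
        grad = 2 * preds.T @ (ens - y) / n
        # demographic-parity gap: difference in mean prediction across groups
        gap = ens[a == 1].mean() - ens[a == 0].mean()
        # subgradient of lam * |gap| with respect to the weights
        grad += lam * np.sign(gap) * (preds[a == 1].mean(0) - preds[a == 0].mean(0))
        w -= lr * grad
    return w
```

With `lam=0` the weights simply minimise squared error; increasing `lam` shifts weight toward base predictors whose group means agree, shrinking the parity gap at some cost in accuracy — the fairness-accuracy exploration the abstract describes.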
Related papers
- Achievable Fairness on Your Data With Utility Guarantees [16.78730663293352]
In machine learning fairness, training models that minimize disparity across different sensitive groups often leads to diminished accuracy.
We present a computationally efficient approach to approximate the fairness-accuracy trade-off curve tailored to individual datasets.
We introduce a novel methodology for quantifying uncertainty in our estimates, thereby providing practitioners with a robust framework for auditing model fairness.
arXiv Detail & Related papers (2024-02-27T00:59:32Z)
- Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert Spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
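FairCOCCO's exact normalized cross-covariance operator is not reproduced here, but the family of kernel dependence measures it belongs to can be sketched with the closely related (and well-known) HSIC statistic, which measures dependence between predictions and a sensitive attribute via centred Gram matrices. The function names and RBF bandwidth below are illustrative choices.

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    """Gram matrix of the RBF kernel for a 1-D sample."""
    d = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC between two 1-D samples.

    Near zero when x and y are independent; larger under dependence.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    K = rbf_gram(x, sigma)
    L = rbf_gram(y, sigma)
    H = np.eye(n) - np.full((n, n), 1.0 / n)  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

Used as a fairness measure, one would compute such a statistic between model outputs and the sensitive attribute(s) and penalise it during training; a kernel-based measure handles multitype and multivariate attributes without discretisation, which is the motivation the entry above describes.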
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Mitigating Unfairness via Evolutionary Multi-objective Ensemble Learning [0.8563354084119061]
Optimising one or several fairness measures may degrade other measures.
A multi-objective evolutionary learning framework is used to simultaneously optimise several metrics.
Our proposed algorithm can provide decision-makers with better tradeoffs among accuracy and multiple fairness metrics.
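The tradeoffs a multi-objective approach surfaces are the non-dominated (Pareto-optimal) candidates over accuracy and several fairness metrics. A minimal sketch of that selection step — the filtering any such evolutionary framework performs, not the paper's specific algorithm — assuming every objective is to be minimised:

```python
def pareto_front(points):
    """Return indices of non-dominated points (all objectives minimised).

    points: list of tuples, e.g. (error, unfairness_1, unfairness_2).
    A point is dominated if another point is no worse in every objective
    and strictly better in at least one.
    """
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and q != p
            for j, q in enumerate(points)
            if j != i
        )
        if not dominated:
            front.append(i)
    return front
```

A decision-maker then picks from the returned front according to how much accuracy they will trade for which fairness metric, which is the "better tradeoffs" claim in the entry above.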
arXiv Detail & Related papers (2022-10-30T06:34:10Z)
- FEAMOE: Fair, Explainable and Adaptive Mixture of Experts [9.665417053344614]
We propose FEAMOE, a "mixture-of-experts" inspired framework aimed at learning fairer, more explainable/interpretable models.
We show that our framework as applied to a mixture of linear experts is able to perform comparably to neural networks in terms of accuracy while producing fairer models.
We also prove that the proposed framework allows for producing fast Shapley value explanations.
arXiv Detail & Related papers (2022-10-10T20:02:02Z)
- Optimising Equal Opportunity Fairness in Model Training [60.0947291284978]
Existing debiasing methods, such as adversarial training and removing protected information from representations, have been shown to reduce bias.
We propose two novel training objectives which directly optimise for the widely used criterion of equal opportunity, and show that they are effective in reducing bias while maintaining high performance over two classification tasks.
arXiv Detail & Related papers (2022-05-05T01:57:58Z)
- Fairly Accurate: Learning Optimal Accuracy vs. Fairness Tradeoffs for Hate Speech Detection [8.841221697099687]
We introduce a differentiable measure that enables direct optimization of group fairness in model training.
We evaluate our methods on the specific task of hate speech detection.
Empirical results across convolutional, sequential, and transformer-based neural architectures show superior empirical accuracy vs. fairness trade-offs over prior work.
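Direct optimisation of group fairness requires a differentiable surrogate for the fairness measure, so it can be added to the training loss and backpropagated. A common construction (a sketch of the general technique, not this paper's specific measure) penalises the squared gap in mean predicted probability across groups; the function names and logistic model below are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train_fair_logreg(X, y, a, lam=1.0, steps=300, lr=0.1):
    """Logistic regression with a differentiable demographic-parity penalty.

    Loss = cross-entropy + lam * gap**2, where gap is the difference in
    mean predicted probability between groups a == 1 and a == 0.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n  # cross-entropy gradient
        gap = p[a == 1].mean() - p[a == 0].mean()
        s = p * (1 - p)  # sigmoid derivative
        dgap = (X[a == 1] * s[a == 1, None]).mean(0) \
             - (X[a == 0] * s[a == 0, None]).mean(0)
        grad += lam * 2 * gap * dgap  # gradient of the squared-gap penalty
        w -= lr * grad
    return w
```

Because the penalty is a smooth function of the model outputs, the same construction drops into any architecture trained by gradient descent, which is what makes direct optimisation across convolutional, sequential, and transformer models possible.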
arXiv Detail & Related papers (2022-04-15T22:11:25Z)
- Learning Optimal Fair Classification Trees: Trade-offs Between Interpretability, Fairness, and Accuracy [7.215903549622416]
We propose a mixed integer optimization framework for learning optimal classification trees.
We benchmark our method against state-of-the-art approaches for fair classification on popular datasets.
Our method consistently finds decisions with almost full parity, while other methods rarely do.
arXiv Detail & Related papers (2022-01-24T19:47:10Z)
- Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.