Arbitrariness Lies Beyond the Fairness-Accuracy Frontier
- URL: http://arxiv.org/abs/2306.09425v1
- Date: Thu, 15 Jun 2023 18:15:46 GMT
- Title: Arbitrariness Lies Beyond the Fairness-Accuracy Frontier
- Authors: Carol Xuan Long, Hsiang Hsu, Wael Alghamdi, Flavio P. Calmon
- Abstract summary: We show that state-of-the-art fairness interventions can mask high predictive multiplicity behind favorable group fairness and accuracy metrics.
We propose an ensemble algorithm applicable to any fairness intervention that provably ensures more consistent predictions.
- Score: 3.383670923637875
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning tasks may admit multiple competing models that achieve
similar performance yet produce conflicting outputs for individual samples -- a
phenomenon known as predictive multiplicity. We demonstrate that fairness
interventions in machine learning optimized solely for group fairness and
accuracy can exacerbate predictive multiplicity. Consequently, state-of-the-art
fairness interventions can mask high predictive multiplicity behind favorable
group fairness and accuracy metrics. We argue that a third axis of
"arbitrariness" should be considered when deploying models to aid
decision-making in applications with individual-level impact. To address this
challenge, we propose an ensemble algorithm applicable to any fairness
intervention that provably ensures more consistent predictions.
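The abstract does not spell out the ensemble construction here. As a rough illustration of why ensembling tames arbitrariness, the Python sketch below (our own toy example, not the paper's algorithm) trains several near-equivalent classifiers, measures how often they disagree on individual test points, and aggregates their scores into one consistent prediction per sample:

```python
# Hypothetical sketch: averaging an ensemble of "competing" models to
# reduce predictive multiplicity. This is NOT the paper's algorithm,
# just an illustration of the general idea.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train, X_test = X[:1500], y[:1500], X[1500:]

# Train several near-equivalent models (here: bootstrap resampling).
rng = np.random.default_rng(0)
models = []
for _ in range(25):
    idx = rng.integers(0, len(X_train), len(X_train))
    models.append(LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx]))

preds = np.array([m.predict(X_test) for m in models])      # shape (25, n_test)
ambiguity = np.mean(~np.all(preds == preds[0], axis=0))    # share of conflicts
print(f"share of test points with conflicting predictions: {ambiguity:.2%}")

# Averaging the scores collapses the set of models into one decision rule.
avg_score = np.mean([m.predict_proba(X_test)[:, 1] for m in models], axis=0)
ensemble_pred = (avg_score >= 0.5).astype(int)
```

Averaging scores before thresholding is one simple way to obtain a single consistent prediction per sample; the paper's consistency guarantee is proven for its own construction, not for this sketch.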
Related papers
- Conformal Prediction Sets Can Cause Disparate Impact [4.61590049339329]
Conformal prediction is a promising method for quantifying the uncertainty of machine learning models.
We show that providing prediction sets can increase the unfairness of downstream decisions.
Instead of equalizing coverage, we propose to equalize set sizes across groups, which empirically leads to fairer outcomes.
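The summary leaves the procedure unspecified; the sketch below is our own minimal split-conformal illustration, in which a per-group threshold is grid-searched so that the average prediction-set size is roughly equal across groups. The function names and the target_size parameter are our assumptions, not the paper's API:

```python
# Minimal split-conformal sketch with group-wise thresholds chosen to
# (approximately) equalize average prediction-set size across groups.
# This is our own illustration of the idea, not the paper's method.
import numpy as np

def set_sizes(probs, tau):
    """Prediction set = {y : 1 - p_y(x) <= tau}; return set size per sample."""
    return np.sum(1.0 - probs <= tau, axis=1)

def equalized_size_thresholds(probs_cal, groups_cal, target_size):
    """Per-group threshold tau_g so that mean set size ~= target_size."""
    thresholds = {}
    for g in np.unique(groups_cal):
        p = probs_cal[groups_cal == g]
        grid = np.linspace(0.0, 1.0, 501)
        sizes = np.array([set_sizes(p, t).mean() for t in grid])
        thresholds[g] = grid[np.argmin(np.abs(sizes - target_size))]
    return thresholds

# Toy usage: 3 classes, 2 groups, softmax-like calibration probabilities.
rng = np.random.default_rng(0)
probs_cal = rng.dirichlet(np.ones(3), size=1000)
groups_cal = rng.integers(0, 2, size=1000)
taus = equalized_size_thresholds(probs_cal, groups_cal, target_size=1.5)
print({int(g): round(float(t), 3) for g, t in taus.items()})
```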
arXiv Detail & Related papers (2024-10-02T18:00:01Z)
- Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only a few labeled data samples can be collected, which can lead to poor fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
The rise of algorithmic two-sided marketplaces has drawn attention to the issue of fairness in these settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
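The paper's linear program is not reproduced in this summary. The toy sketch below, our own construction, captures the general shape: maximize expected merit over a selection distribution for a single position, subject to a Lipschitz-style individual-fairness constraint that similar merits receive similar probabilities, solved with scipy.optimize.linprog:

```python
# Toy LP sketch (our construction, not the paper's formulation): choose a
# distribution over candidates for one position that maximizes expected
# merit, subject to an individual-fairness constraint that candidates with
# similar estimated merit receive similar selection probabilities.
import numpy as np
from scipy.optimize import linprog

merit = np.array([0.90, 0.88, 0.60, 0.30])   # estimated candidate merits
L = 2.0                                       # fairness Lipschitz constant

n = len(merit)
c = -merit                                    # linprog minimizes, so negate

# Constraint |p_i - p_j| <= L * |merit_i - merit_j| for every pair (i, j).
A_ub, b_ub = [], []
for i in range(n):
    for j in range(i + 1, n):
        row = np.zeros(n); row[i], row[j] = 1, -1
        bound = L * abs(merit[i] - merit[j])
        A_ub += [row, -row]; b_ub += [bound, bound]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              A_eq=[np.ones(n)], b_eq=[1.0], bounds=[(0, 1)] * n)
print("fair selection distribution:", np.round(res.x, 3))
```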
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- FADE: FAir Double Ensemble Learning for Observable and Counterfactual Outcomes [0.0]
Methods for building fair predictors often involve tradeoffs between fairness and accuracy and between different fairness criteria.
We develop a flexible framework for fair ensemble learning that allows users to efficiently explore the fairness-accuracy space.
We show that, surprisingly, multiple unfairness measures can sometimes be minimized simultaneously with little impact on accuracy.
arXiv Detail & Related papers (2021-09-01T03:56:43Z)
- Test-time Collective Prediction [73.74982509510961]
In collaborative machine learning, multiple parties may want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
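The paper's tractable algorithms are not detailed in this summary; as a brute-force stand-in for intuition, one can sample many near-optimal models and report the spread of group-level disparity over the "good" set. Everything below (the bootstrap refits, the 1% accuracy slack) is our own illustrative choice:

```python
# Illustrative brute-force version of the idea (the paper gives tractable
# algorithms; this sketch is ours): sample many models, keep those within
# eps of the best accuracy ("good models"), and report the range of
# group-level disparity attainable inside that set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=3000, n_features=15, random_state=1)
group = (X[:, 0] > 0).astype(int)            # synthetic sensitive attribute
X_tr, y_tr = X[:2000], y[:2000]
X_te, y_te, g_te = X[2000:], y[2000:], group[2000:]

rng = np.random.default_rng(1)
accs, disparities = [], []
for _ in range(200):
    idx = rng.integers(0, len(X_tr), len(X_tr))      # bootstrap refit
    m = LogisticRegression(C=10 ** rng.uniform(-2, 2), max_iter=1000)
    m.fit(X_tr[idx], y_tr[idx])
    pred = m.predict(X_te)
    accs.append(accuracy_score(y_te, pred))
    disparities.append(abs(pred[g_te == 0].mean() - pred[g_te == 1].mean()))

accs, disparities = np.array(accs), np.array(disparities)
good = accs >= accs.max() - 0.01                     # eps = 1% accuracy slack
print("disparity range over good models:",
      disparities[good].min(), "to", disparities[good].max())
```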
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
- Adversarial Learning for Counterfactual Fairness [15.302633901803526]
In recent years, fairness has become an important topic in the machine learning research community.
We propose to rely on an adversarial neural learning approach that enables more powerful inference than MMD penalties.
Experiments show significant improvements in terms of counterfactual fairness in both the discrete and continuous settings.
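For intuition about the adversarial ingredient, here is a generic adversarial-fairness training loop in PyTorch, written by us as an illustration; the paper's architecture for counterfactual fairness is more specific than this sketch, and the penalty weight of 1.0 is an arbitrary choice:

```python
# Generic adversarial-fairness sketch (our illustration, not the paper's
# architecture): an adversary tries to recover the sensitive attribute
# from the predictor's output, and the predictor is penalized when the
# adversary succeeds.
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 1024
s = torch.randint(0, 2, (n, 1)).float()          # sensitive attribute
x = torch.randn(n, 8) + 0.5 * s                  # features correlated with s
y = (x[:, :1] + torch.randn(n, 1) > 0).float()   # binary label

predictor = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for step in range(500):
    logits = predictor(x)
    # 1) Adversary learns to predict s from the predictor's logits.
    opt_a.zero_grad()
    adv_loss = bce(adversary(logits.detach()), s)
    adv_loss.backward()
    opt_a.step()
    # 2) Predictor fits y while *fooling* the adversary (weight = 1.0).
    opt_p.zero_grad()
    pred_loss = bce(logits, y) - 1.0 * bce(adversary(logits), s)
    pred_loss.backward()
    opt_p.step()
```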
arXiv Detail & Related papers (2020-08-30T09:06:03Z)
- Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z)
- Accuracy and Fairness Trade-offs in Machine Learning: A Stochastic Multi-Objective Approach [0.0]
In the application of machine learning to real-life decision-making systems, predictions may discriminate against people based on sensitive attributes, leading to unfairness.
The commonly used strategy in fair machine learning is to include fairness as a constraint or a penalization term in the minimization of the prediction loss.
In this paper, we introduce a new approach to handle fairness by formulating a multi-objective optimization problem.
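For contrast with the penalization strategy described above, a simple weighted-sum sweep (our toy baseline, not the paper's stochastic multi-objective method) traces an empirical fairness-accuracy frontier by varying the penalty weight lambda on a demographic-parity gap:

```python
# Weighted-sum sketch of the fairness-accuracy trade-off (our toy version;
# the paper develops a proper stochastic multi-objective method): sweep a
# weight lambda on a demographic-parity penalty and trace the frontier.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
s = rng.integers(0, 2, n)                       # sensitive attribute
X = rng.normal(size=(n, 5)) + 0.7 * s[:, None]  # features shifted by group
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0.35).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for lam in [0.0, 0.5, 2.0, 8.0]:
    w = np.zeros(5)
    for _ in range(2000):                       # plain gradient descent
        p = sigmoid(X @ w)
        grad_loss = X.T @ (p - y) / n           # logistic-loss gradient
        gap = p[s == 1].mean() - p[s == 0].mean()
        grad_gap = (X[s == 1].T @ (p[s == 1] * (1 - p[s == 1])) / (s == 1).sum()
                    - X[s == 0].T @ (p[s == 0] * (1 - p[s == 0])) / (s == 0).sum())
        w -= 0.5 * (grad_loss + lam * np.sign(gap) * grad_gap)
    p = sigmoid(X @ w)
    acc = ((p > 0.5) == y).mean()
    gap = abs(p[s == 1].mean() - p[s == 0].mean())
    print(f"lambda={lam:4.1f}  accuracy={acc:.3f}  parity gap={gap:.3f}")
```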
arXiv Detail & Related papers (2020-08-03T18:51:24Z)
- On Adversarial Bias and the Robustness of Fair Machine Learning [11.584571002297217]
We show that giving the same importance to groups of different sizes and distributions, to counteract the effect of bias in training data, can be in conflict with robustness.
An adversary who can control sampling or labeling for a fraction of the training data can reduce test accuracy significantly beyond what they could achieve against unconstrained models.
We analyze the robustness of fair machine learning through an empirical evaluation of attacks on multiple algorithms and benchmark datasets.
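To make the threat model concrete, the sketch below (our illustration, not the paper's attack) flips labels for a fraction of one group's training points and refits with equal group weights, the kind of reweighting a fairness constraint induces:

```python
# Simple label-flipping illustration (not the paper's attack): flipping
# labels for a small fraction of one group degrades test accuracy, and
# equal group reweighting can amplify the poisoned group's influence.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=4000, n_features=10, flip_y=0.0,
                           random_state=3)
group = (X[:, 1] > 0).astype(int)            # synthetic sensitive attribute
X_tr, y_tr, g_tr = X[:3000], y[:3000].copy(), group[:3000]
X_te, y_te = X[3000:], y[3000:]

clean_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# Adversary flips labels for 10% of the minority-group training points.
rng = np.random.default_rng(3)
minority = np.flatnonzero(g_tr == 1)
flip = rng.choice(minority, size=int(0.10 * len(minority)), replace=False)
y_tr[flip] = 1 - y_tr[flip]

# Weighting both groups equally (a common fairness strategy) gives the
# poisoned group outsized influence on the fit.
w = np.where(g_tr == 1, len(g_tr) / (2 * (g_tr == 1).sum()),
             len(g_tr) / (2 * (g_tr == 0).sum()))
poisoned_acc = LogisticRegression(max_iter=1000).fit(
    X_tr, y_tr, sample_weight=w).score(X_te, y_te)
print(f"clean: {clean_acc:.3f}  poisoned+reweighted: {poisoned_acc:.3f}")
```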
arXiv Detail & Related papers (2020-06-15T18:17:44Z)