Beyond Impossibility: Balancing Sufficiency, Separation and Accuracy
- URL: http://arxiv.org/abs/2205.12327v1
- Date: Tue, 24 May 2022 19:14:21 GMT
- Title: Beyond Impossibility: Balancing Sufficiency, Separation and Accuracy
- Authors: Limor Gultchin, Vincent Cohen-Addad, Sophie Giffard-Roisin, Varun
Kanade, Frederik Mallmann-Trenn
- Abstract summary: The tension between satisfying both sufficiency and separation has received much attention.
We propose an objective that aims to balance sufficiency and separation measures.
We show promising results, achieving better trade-offs than existing alternatives.
- Score: 27.744055920557024
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Among the various aspects of algorithmic fairness studied in recent years,
the tension between satisfying both \textit{sufficiency} and
\textit{separation} -- e.g. the ratios of positive or negative predictive
values, and false positive or false negative rates across groups -- has
received much attention. Following a debate sparked by COMPAS, a criminal
justice predictive system, the academic community has responded by laying out
important theoretical understanding, showing that an imperfect predictor
cannot achieve both when the distribution of labels differs across groups. In
this paper, we shed more light on what might still be possible
beyond the impossibility -- the existence of a trade-off means we should aim to
find a good balance within it. After refining the existing theoretical result,
we propose an objective that aims to balance \textit{sufficiency} and
\textit{separation} measures, while maintaining similar accuracy levels. We
show the use of such an objective in two empirical case studies, one involving
a multi-objective framework, and the other fine-tuning of a model pre-trained
for accuracy. We show promising results, where better trade-offs are achieved
compared to existing alternatives.
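The measures the abstract refers to can be made concrete. A minimal sketch, assuming two groups and binary labels/predictions: sufficiency is assessed via positive and negative predictive values (PPV, NPV) per group, separation via false positive and false negative rates (FPR, FNR) per group. The function names and the use of absolute cross-group gaps are illustrative choices here, not the paper's actual objective.

```python
def group_rates(y_true, y_pred):
    """Return (PPV, NPV, FPR, FNR) for one group's labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    ppv = tp / (tp + fp) if tp + fp else 0.0  # precision among predicted positives
    npv = tn / (tn + fn) if tn + fn else 0.0  # precision among predicted negatives
    fpr = fp / (fp + tn) if fp + tn else 0.0  # error rate among true negatives
    fnr = fn / (fn + tp) if fn + tp else 0.0  # error rate among true positives
    return ppv, npv, fpr, fnr

def fairness_gaps(y_true_a, y_pred_a, y_true_b, y_pred_b):
    """Absolute cross-group gaps: sufficiency ~ (PPV, NPV) gaps,
    separation ~ (FPR, FNR) gaps. Zero gaps mean the criterion holds."""
    ra = group_rates(y_true_a, y_pred_a)
    rb = group_rates(y_true_b, y_pred_b)
    gaps = [abs(x - y) for x, y in zip(ra, rb)]
    return {"sufficiency": gaps[:2], "separation": gaps[2:]}
```

With an imperfect predictor and differing base rates, the impossibility result means these two gap pairs cannot both be driven to zero; a balancing objective trades them off against each other and against accuracy.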
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy from a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - A Theoretical Approach to Characterize the Accuracy-Fairness Trade-off
Pareto Frontier [42.18013955576355]
The accuracy-fairness trade-off has been frequently observed in the literature of fair machine learning.
This work seeks to develop a theoretical framework by characterizing the shape of the accuracy-fairness trade-off.
The proposed research enables an in-depth understanding of the accuracy-fairness trade-off, pushing current fair machine-learning research to a new frontier.
arXiv Detail & Related papers (2023-10-19T14:35:26Z) - Understanding Fairness Surrogate Functions in Algorithmic Fairness [21.555040357521907]
We show that there is a surrogate-fairness gap between the fairness definition and the fairness surrogate function.
We elaborate a novel and general algorithm called Balanced Surrogate, which iteratively reduces the gap to mitigate unfairness.
arXiv Detail & Related papers (2023-10-17T12:40:53Z) - Fairness under Covariate Shift: Improving Fairness-Accuracy tradeoff
with few Unlabeled Test Samples [21.144077993862652]
We operate in the unsupervised regime where only a small set of unlabeled test samples along with a labeled training set is available.
We experimentally verify that optimizing with our loss formulation significantly outperforms a number of state-of-the-art baselines.
arXiv Detail & Related papers (2023-10-11T14:39:51Z) - Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not really mutually exclusive, but complementary and spanning a spectrum of fairness notions.
arXiv Detail & Related papers (2023-06-08T09:23:22Z) - The Possibility of Fairness: Revisiting the Impossibility Theorem in
Practice [5.175941513195566]
We show that it is possible to identify a large set of models that satisfy seemingly incompatible fairness constraints.
We offer tools and guidance for practitioners to understand when -- and to what degree -- fairness along multiple criteria can be achieved.
arXiv Detail & Related papers (2023-02-13T13:29:24Z) - Fair Enough: Standardizing Evaluation and Model Selection for Fairness
Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z) - Optimising Equal Opportunity Fairness in Model Training [60.0947291284978]
Existing debiasing methods, such as adversarial training and removing protected information from representations, have been shown to reduce bias.
We propose two novel training objectives which directly optimise for the widely-used criterion of equal opportunity, and show that they are effective in reducing bias while maintaining high performance over two classification tasks.
arXiv Detail & Related papers (2022-05-05T01:57:58Z) - Balanced Q-learning: Combining the Influence of Optimistic and
Pessimistic Targets [74.04426767769785]
We show that specific types of biases may be preferable, depending on the scenario.
We design a novel reinforcement learning algorithm, Balanced Q-learning, in which the target is modified to be a convex combination of a pessimistic and an optimistic term.
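The balanced target described above can be sketched directly: a convex combination of a pessimistic and an optimistic bootstrap estimate of the next-state value. The fixed weight `beta` is a simplifying assumption for illustration; the paper's algorithm adapts this balance, which the sketch does not reproduce.

```python
def balanced_target(reward, next_q_values, gamma=0.99, beta=0.5):
    """One-step TD target mixing a pessimistic (min) and an optimistic (max)
    estimate of the next state's value. beta=1 is fully pessimistic,
    beta=0 is fully optimistic (standard Q-learning's max)."""
    pessimistic = min(next_q_values)
    optimistic = max(next_q_values)
    return reward + gamma * (beta * pessimistic + (1 - beta) * optimistic)
```

Because the target is a convex combination, it always lies between the purely pessimistic and purely optimistic targets, which is what lets the bias be tuned per scenario.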
arXiv Detail & Related papers (2021-11-03T07:30:19Z) - Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research [2.6397379133308214]
We argue that such assumptions, which are often left implicit and unexamined, lead to inconsistent conclusions.
While the intended goal of this work may be to improve the fairness of machine learning models, these unexamined, implicit assumptions can in fact result in emergent unfairness.
arXiv Detail & Related papers (2021-02-01T22:02:14Z) - Provable tradeoffs in adversarially robust classification [96.48180210364893]
We develop and leverage new tools, including recent breakthroughs from probability theory on robust isoperimetry.
Our results reveal fundamental tradeoffs between standard and robust accuracy that grow when data is imbalanced.
arXiv Detail & Related papers (2020-06-09T09:58:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.