Fair inference on error-prone outcomes
- URL: http://arxiv.org/abs/2003.07621v1
- Date: Tue, 17 Mar 2020 10:31:59 GMT
- Title: Fair inference on error-prone outcomes
- Authors: Laura Boeschoten, Erik-Jan van Kesteren, Ayoub Bagheri, Daniel L.
Oberski
- Abstract summary: We show that, when an error-prone proxy target is used, existing methods to assess and calibrate fairness criteria do not extend to the true target variable of interest.
We suggest a framework resulting from the combination of fair ML methods and measurement models found in the statistical literature.
In a healthcare decision problem, we find that using a latent variable model to account for measurement error removes the unfairness detected previously.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fair inference in supervised learning is an important and active area of
research, yielding a range of useful methods to assess and account for fairness
criteria when predicting ground truth targets. As shown in recent work,
however, when target labels are error-prone, potential prediction unfairness
can arise from measurement error. In this paper, we show that, when an
error-prone proxy target is used, existing methods to assess and calibrate
fairness criteria do not extend to the true target variable of interest. To
remedy this problem, we suggest a framework resulting from the combination of
two existing literatures: fair ML methods, such as those found in the
counterfactual fairness literature on the one hand, and, on the other,
measurement models found in the statistical literature. We discuss these
approaches and the connection between them that results in our framework. In a healthcare
decision problem, we find that using a latent variable model to account for
measurement error removes the unfairness detected previously.
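To make the abstract's claim concrete, here is a minimal simulation sketch. It is not the paper's method: the paper fits a latent variable measurement model to real healthcare data, whereas this toy assumes the misclassification rates are known and applies the textbook Rogan-Gladen correction. It shows how group-dependent error in a proxy target can fake a disparity that vanishes once the error is accounted for:

```python
# Toy sketch, NOT the paper's method: the paper fits a latent variable
# measurement model; here the error rates are assumed known and corrected
# with the textbook Rogan-Gladen formula.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
group = rng.integers(0, 2, size=n)       # sensitive attribute A
y_true = rng.random(n) < 0.30            # true target: same 30% rate in both groups

# Group-dependent label noise: the proxy is noisier for group 1.
sens = np.where(group == 0, 0.95, 0.80)  # P(Y* = 1 | Y = 1)
spec = np.where(group == 0, 0.95, 0.80)  # P(Y* = 0 | Y = 0)
y_proxy = np.where(y_true, rng.random(n) < sens, rng.random(n) > spec)

for g, (se, sp) in enumerate([(0.95, 0.95), (0.80, 0.80)]):
    p_star = y_proxy[group == g].mean()
    # Rogan-Gladen correction: p = (p* + sp - 1) / (se + sp - 1)
    p_corr = (p_star + sp - 1) / (se + sp - 1)
    print(f"group {g}: proxy rate {p_star:.3f}, corrected rate {p_corr:.3f}")
# Proxy rates differ (~0.32 vs ~0.38, apparent unfairness); corrected
# rates are both ~0.30 -- the disparity was measurement error, not the target.
```

In the paper's setting the error rates are unknown and must be estimated jointly with the latent target, but the mechanism illustrated here is the same.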
Related papers
- Does Machine Bring in Extra Bias in Learning? Approximating Fairness in Models Promptly [2.002741592555996]
Existing techniques for assessing the discrimination level of machine learning models include commonly used group and individual fairness measures.
We propose a "harmonic fairness measure via manifold (HFM)" based on distances between sets.
Empirical results indicate that the proposed fairness measure HFM is valid and that the proposed ApproxDist is effective and efficient.
arXiv Detail & Related papers (2024-05-15T11:07:40Z)
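The HFM and ApproxDist definitions are not given in this summary, so the snippet below only illustrates the general idea of a fairness measure built from a distance between sets of model predictions, using the standard energy distance as a hypothetical stand-in:

```python
# Hypothetical stand-in, not the paper's HFM/ApproxDist: a disparity score
# computed as the energy distance between the two groups' score sets
# (zero iff the two score distributions coincide).
import numpy as np

def energy_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Energy distance between two 1-D samples: 2E|X-Y| - E|X-X'| - E|Y-Y'|."""
    dxy = np.abs(x[:, None] - y[None, :]).mean()
    dxx = np.abs(x[:, None] - x[None, :]).mean()
    dyy = np.abs(y[:, None] - y[None, :]).mean()
    return 2 * dxy - dxx - dyy

rng = np.random.default_rng(1)
scores_a = rng.beta(2, 5, size=500)    # model scores for group A
scores_b = rng.beta(2.5, 4, size=500)  # model scores for group B (shifted)
print(f"set-distance disparity: {energy_distance(scores_a, scores_b):.4f}")
```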
- Uncertainty-based Fairness Measures [14.61416119202288]
Unfair predictions of machine learning (ML) models impede their broad acceptance in real-world settings.
We show that an ML model may appear to be fair with existing point-based fairness measures but biased against a demographic group in terms of prediction uncertainties.
arXiv Detail & Related papers (2023-12-18T15:49:03Z)
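A minimal illustration of the point above (the entry's own measures are not specified in this summary): two groups can receive identical positive-prediction rates, so point-based parity looks fine, while their predictive uncertainty differs sharply:

```python
# Sketch: equal positive rates, unequal uncertainty (mean binary entropy).
import numpy as np

def binary_entropy(p: np.ndarray) -> np.ndarray:
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

rng = np.random.default_rng(2)
p_a = rng.choice([0.05, 0.95], size=1000)  # group A: confident predictions
p_b = rng.uniform(0.35, 0.65, size=1000)   # group B: uncertain predictions

for name, p in [("A", p_a), ("B", p_b)]:
    print(f"group {name}: positive rate {(p > 0.5).mean():.2f}, "
          f"mean entropy {binary_entropy(p).mean():.2f}")
# Both positive rates are ~0.5, but the entropy gap (~0.29 vs ~0.97)
# reveals a disparity in how uncertain the model is about each group.
```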
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not mutually exclusive but complementary, spanning a spectrum of fairness notions.
arXiv Detail & Related papers (2023-06-08T09:23:22Z)
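For reference, the two parity notions being reconciled above can be computed directly. This sketch gives the standard textbook definitions only, not the paper's causal decomposition:

```python
# Statistical parity: gap in positive-prediction rates across groups.
# Predictive parity: gap in positive predictive value (PPV) across groups.
import numpy as np

def statistical_parity_gap(y_hat, a):
    """P(Yhat=1 | A=1) - P(Yhat=1 | A=0)."""
    return y_hat[a == 1].mean() - y_hat[a == 0].mean()

def predictive_parity_gap(y, y_hat, a):
    """PPV(A=1) - PPV(A=0), where PPV = P(Y=1 | Yhat=1, A=a)."""
    ppv = lambda g: y[(a == g) & (y_hat == 1)].mean()
    return ppv(1) - ppv(0)

rng = np.random.default_rng(3)
a = rng.integers(0, 2, size=10_000)
y = (rng.random(10_000) < np.where(a == 1, 0.4, 0.3)).astype(int)   # unequal base rates
y_hat = (rng.random(10_000) < np.where(y == 1, 0.8, 0.2)).astype(int)  # same TPR/FPR per group

print(f"statistical parity gap: {statistical_parity_gap(y_hat, a):+.3f}")
print(f"predictive parity gap:  {predictive_parity_gap(y, y_hat, a):+.3f}")
# Both gaps are nonzero and of different sizes: the two criteria measure
# genuinely different aspects of the same classifier.
```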
- Counterfactual Fair Opportunity: Measuring Decision Model Fairness with Counterfactual Reasoning [5.626570248105078]
This work aims to unveil unfair model behaviors using counterfactual reasoning in the fairness-under-unawareness setting.
A counterfactual version of equal opportunity named counterfactual fair opportunity is defined and two novel metrics that analyze the sensitive information of counterfactual samples are introduced.
arXiv Detail & Related papers (2023-02-16T09:13:53Z)
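A loose sketch of the ingredients named above (the paper's exact metrics are not in this summary): the equal opportunity gap compares true-positive rates across groups, and a simple counterfactual check compares predictions before and after flipping the sensitive attribute:

```python
# Sketch only: standard equal opportunity gap plus a naive counterfactual
# flip test; the paper's counterfactual fair opportunity metrics may differ.
import numpy as np

def equal_opportunity_gap(y, y_hat, a):
    """TPR(A=1) - TPR(A=0): difference in true-positive rates."""
    tpr = lambda g: y_hat[(a == g) & (y == 1)].mean()
    return tpr(1) - tpr(0)

def counterfactual_flip_rate(model, X, a_col):
    """Fraction of individuals whose prediction changes when the
    (binary) sensitive column is flipped."""
    X_cf = X.copy()
    X_cf[:, a_col] = 1 - X_cf[:, a_col]
    return (model(X) != model(X_cf)).mean()

rng = np.random.default_rng(4)
X = rng.integers(0, 2, size=(5000, 3)).astype(float)  # column 2 = sensitive attribute
y = (X[:, 0] + rng.random(5000) > 0.8).astype(int)    # outcome driven by column 0
model = lambda X: (X[:, 0] + 0.3 * X[:, 2] > 0.25).astype(int)  # leaks the attribute

print(f"equal opportunity gap:    {equal_opportunity_gap(y, model(X), X[:, 2]):+.3f}")
print(f"counterfactual flip rate: {counterfactual_flip_rate(model, X, 2):.3f}")
# Both diagnostics flag the leaked attribute: the TPR gap is ~+0.17 and
# roughly half the predictions change under the counterfactual flip.
```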
- Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z)
- In Search of Insights, Not Magic Bullets: Towards Demystification of the Model Selection Dilemma in Heterogeneous Treatment Effect Estimation [92.51773744318119]
This paper empirically investigates the strengths and weaknesses of different model selection criteria.
We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them.
arXiv Detail & Related papers (2023-02-06T16:55:37Z)
- A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification [0.491574468325115]
We present a large-scale empirical study that for the first time enables the benchmarking of confidence scoring functions.
The revelation that a simple softmax response baseline is the overall best-performing method underlines the drastic shortcomings of current evaluation.
arXiv Detail & Related papers (2022-11-28T12:25:27Z)
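The "softmax response" baseline referred to above is simply the maximum softmax probability used as a confidence score. A minimal sketch of using it for failure detection via selective prediction (the benchmark's own evaluation protocol is more elaborate):

```python
# Softmax response baseline: max softmax probability as confidence;
# abstain on low-confidence inputs and check accuracy on the rest.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(5)
logits = 3 * rng.normal(size=(1000, 10))
probs = softmax(logits)
# Labels drawn from the model's own class probabilities, so confident
# predictions really are right more often (a stand-in for a trained net).
labels = np.array([rng.choice(10, p=p) for p in probs])

confidence = probs.max(axis=1)          # the "softmax response" score
correct = probs.argmax(axis=1) == labels

# Selective prediction: abstain on the least confident 20% of inputs.
kept = confidence >= np.quantile(confidence, 0.2)
print(f"accuracy overall:       {correct.mean():.3f}")
print(f"accuracy after abstain: {correct[kept].mean():.3f}")
```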
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
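As a rough stand-in for the idea (not the paper's exact ACCUMULATED PREDICTION SENSITIVITY formula, which this summary does not spell out), prediction sensitivity can be approximated by finite differences of the model's score with respect to each input feature:

```python
# Crude finite-difference stand-in for prediction sensitivity:
# mean |f(x + eps*e_j) - f(x)| / eps over rows x, per feature j.
import numpy as np

def prediction_sensitivity(predict_proba, X, eps=1e-3):
    base = predict_proba(X)
    sens = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        X_pert = X.copy()
        X_pert[:, j] += eps
        sens[j] = np.abs(predict_proba(X_pert) - base).mean() / eps
    return sens

# Toy logistic scorer: sensitivities should recover the relative weight
# magnitudes |w| = [2, 0, 1] up to a common sigmoid-slope factor.
w = np.array([2.0, 0.0, -1.0])
predict_proba = lambda X: 1 / (1 + np.exp(-(X @ w)))
rng = np.random.default_rng(6)
X = rng.normal(size=(2000, 3))
print(prediction_sensitivity(predict_proba, X).round(3))
```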
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
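One standard quantification technique that fits the description above is adjusted classify-and-count: when the sensitive attribute must be inferred by an imperfect classifier, its raw prevalence estimate is corrected using the classifier's known error rates. A minimal sketch (the paper evaluates a range of quantification methods; this is just the simplest one):

```python
# Adjusted classify-and-count (ACC): correct the prevalence estimate of
# an imperfect sensitive-attribute classifier via its TPR and FPR.
import numpy as np

def adjusted_classify_and_count(pred_attr, tpr, fpr):
    """ACC estimate of P(A=1): (raw rate - FPR) / (TPR - FPR)."""
    return (pred_attr.mean() - fpr) / (tpr - fpr)

rng = np.random.default_rng(7)
a_true = rng.random(50_000) < 0.25   # true group prevalence: 25%
tpr, fpr = 0.85, 0.10                # attribute classifier quality
pred = np.where(a_true, rng.random(50_000) < tpr, rng.random(50_000) < fpr)

print(f"raw prevalence estimate: {pred.mean():.3f}")                        # ~0.288
print(f"ACC-corrected estimate:  {adjusted_classify_and_count(pred, tpr, fpr):.3f}")  # ~0.250
# Corrected group prevalences (per subpopulation of interest) are what make
# group fairness measurable without observed sensitive attributes.
```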
- Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research [2.6397379133308214]
We argue that the assumptions commonly made in fairness-accuracy trade-off research, often left implicit and unexamined, lead to inconsistent conclusions.
While the intended goal of this work may be to improve the fairness of machine learning models, these unexamined, implicit assumptions can in fact result in emergent unfairness.
arXiv Detail & Related papers (2021-02-01T22:02:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.