Monitoring Algorithmic Fairness
- URL: http://arxiv.org/abs/2305.15979v1
- Date: Thu, 25 May 2023 12:17:59 GMT
- Title: Monitoring Algorithmic Fairness
- Authors: Thomas A. Henzinger, Mahyar Karimi, Konstantin Kueffner, Kaushik
Mallik
- Abstract summary: We present runtime verification of algorithmic fairness for systems whose models are unknown.
We introduce a specification language that can model many common algorithmic fairness properties.
We show how we can monitor if a bank is fair in giving loans to applicants from different social backgrounds, and if a college is fair in admitting students.
- Score: 3.372200852710289
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine-learned systems are in widespread use for making decisions about
humans, and it is important that they are fair, i.e., not biased against
individuals based on sensitive attributes. We present runtime verification of
algorithmic fairness for systems whose models are unknown, but are assumed to
have a Markov chain structure. We introduce a specification language that can
model many common algorithmic fairness properties, such as demographic parity,
equal opportunity, and social burden. We build monitors that observe a long
sequence of events as generated by a given system, and output, after each
observation, a quantitative estimate of how fair or biased the system was on
that run until that point in time. The estimate is proven to be correct modulo
a variable error bound and a given confidence level, where the error bound gets
tighter as the observed sequence gets longer. Our monitors are of two types,
and use, respectively, frequentist and Bayesian statistical inference
techniques. While the frequentist monitors compute estimates that are
objectively correct with respect to the ground truth, the Bayesian monitors
compute estimates that are correct subject to a given prior belief about the
system's model. Using a prototype implementation, we show how we can monitor if
a bank is fair in giving loans to applicants from different social backgrounds,
and if a college is fair in admitting students while maintaining a reasonable
financial burden on the society. Although they exhibit different theoretical
complexities in certain cases, in our experiments, both frequentist and
Bayesian monitors took less than a millisecond to update their verdicts after
each observation.
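To make the monitoring setup concrete, below is a minimal sketch (in Python) of one such monitor for a single property, demographic parity, expressed as the difference in acceptance rates between two groups. The Hoeffding-based confidence interval for the frequentist variant and the Beta(1,1) priors with posterior quantiles for the Bayesian variant are illustrative assumptions, not the estimators constructed in the paper; all class and function names are hypothetical.
```python
"""Sketch of runtime monitors for demographic parity: after every observed
decision they output an estimate of P(accept | A) - P(accept | B) together
with an error bound at a chosen confidence level."""
import math
import random
from scipy.stats import beta


class FrequentistParityMonitor:
    """Distribution-free estimate whose error bound tightens with more data."""

    def __init__(self, confidence: float = 0.95):
        self.delta = 1.0 - confidence              # total failure probability
        self.counts = {"A": [0, 0], "B": [0, 0]}   # [accepted, seen] per group

    def observe(self, group: str, accepted: bool):
        acc, seen = self.counts[group]
        self.counts[group] = [acc + int(accepted), seen + 1]

    def verdict(self):
        (a_acc, a_n), (b_acc, b_n) = self.counts["A"], self.counts["B"]
        if a_n == 0 or b_n == 0:
            return None                            # not enough data yet
        estimate = a_acc / a_n - b_acc / b_n
        # Hoeffding bound per group, union bound over the two groups:
        # the true difference lies in [estimate - eps, estimate + eps]
        # with probability at least `confidence`.
        eps = sum(math.sqrt(math.log(4 / self.delta) / (2 * n))
                  for n in (a_n, b_n))
        return estimate, eps


class BayesianParityMonitor:
    """Same property, but the interval is a credible interval under
    independent Beta(1, 1) priors on each group's acceptance rate."""

    def __init__(self, confidence: float = 0.95):
        self.q = (1.0 - confidence) / 2
        self.params = {"A": [1.0, 1.0], "B": [1.0, 1.0]}  # Beta(alpha, beta)

    def observe(self, group: str, accepted: bool):
        self.params[group][0 if accepted else 1] += 1

    def verdict(self):
        (aA, bA), (aB, bB) = self.params["A"], self.params["B"]
        estimate = aA / (aA + bA) - aB / (aB + bB)  # posterior means
        # Conservative interval built from per-group posterior quantiles.
        lo = beta.ppf(self.q, aA, bA) - beta.ppf(1 - self.q, aB, bB)
        hi = beta.ppf(1 - self.q, aA, bA) - beta.ppf(self.q, aB, bB)
        return estimate, (lo, hi)


if __name__ == "__main__":
    freq, bayes = FrequentistParityMonitor(), BayesianParityMonitor()
    for _ in range(10_000):                        # simulated loan decisions
        group = random.choice("AB")
        accepted = random.random() < (0.7 if group == "A" else 0.6)
        freq.observe(group, accepted)
        bayes.observe(group, accepted)
    print("frequentist:", freq.verdict())
    print("bayesian:   ", bayes.verdict())
```
As in the paper's setting, both verdicts tighten as the observed sequence grows; the frequentist bound is stated with respect to the unknown ground truth, while the Bayesian interval is correct relative to the chosen prior.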
Related papers
- Comprehensive Equity Index (CEI): Definition and Application to Bias Evaluation in Biometrics [47.762333925222926]
We present a novel metric to quantify biased behaviors of machine learning models.
We focus on and apply it to the operational evaluation of face recognition systems.
arXiv Detail & Related papers (2024-09-03T14:19:38Z) - Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space.
arXiv Detail & Related papers (2024-03-11T13:44:49Z) - Monitoring Algorithmic Fairness under Partial Observations [3.790015813774933]
Runtime verification techniques have been introduced to monitor the algorithmic fairness of deployed systems.
Previous monitoring techniques assume full observability of the states of the monitored system.
We extend fairness monitoring to systems modeled as partially observed Markov chains.
arXiv Detail & Related papers (2023-08-01T07:35:54Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Runtime Monitoring of Dynamic Fairness Properties [3.372200852710289]
A machine-learned system that is fair in static decision-making tasks may have biased societal impacts in the long-run.
While existing works try to identify and mitigate long-run biases through smart system design, we introduce techniques for monitoring fairness in real time.
Our goal is to build and deploy a monitor that will continuously observe a long sequence of events generated by the system in the wild.
arXiv Detail & Related papers (2023-05-08T13:32:23Z) - Fairness meets Cross-Domain Learning: a new perspective on Models and
Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z) - Fairness in Forecasting of Observations of Linear Dynamical Systems [10.762748665074794]
We introduce two natural notions of fairness in time-series forecasting problems: subgroup fairness and instantaneous fairness.
We present globally convergent methods for the optimisation of fairness-constrained learning problems.
Our results on a biased data set motivated by insurance applications and the well-known COMPAS data set demonstrate the efficacy of our methods.
arXiv Detail & Related papers (2022-09-12T14:32:12Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling
Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Understanding Unfairness in Fraud Detection through Model and Data Bias
Interactions [4.159343412286401]
We argue that algorithmic unfairness stems from interactions between models and biases in the data.
We study a set of hypotheses regarding the fairness-accuracy trade-offs that fairness-blind ML algorithms exhibit under different data bias settings.
arXiv Detail & Related papers (2022-07-13T15:18:30Z) - FairCanary: Rapid Continuous Explainable Fairness [8.362098382773265]
We present Quantile Demographic Drift (QDD), a novel model bias quantification metric.
QDD is well suited to continuous monitoring scenarios and does not suffer from the statistical limitations of conventional threshold-based bias metrics.
We incorporate QDD into a continuous model monitoring system, called FairCanary, that reuses existing explanations computed for each individual prediction.
arXiv Detail & Related papers (2021-06-13T17:47:44Z) - An Uncertainty-based Human-in-the-loop System for Industrial Tool Wear
Analysis [68.8204255655161]
We show that uncertainty measures based on Monte-Carlo dropout in the context of a human-in-the-loop system increase the system's transparency and performance.
A simulation study demonstrates that the uncertainty-based human-in-the-loop system increases performance for different levels of human involvement.
arXiv Detail & Related papers (2020-07-14T15:47:37Z)