Runtime Monitoring of Dynamic Fairness Properties
- URL: http://arxiv.org/abs/2305.04699v1
- Date: Mon, 8 May 2023 13:32:23 GMT
- Title: Runtime Monitoring of Dynamic Fairness Properties
- Authors: Thomas A. Henzinger, Mahyar Karimi, Konstantin Kueffner, Kaushik Mallik
- Abstract summary: A machine-learned system that is fair in static decision-making tasks may have biased societal impacts in the long-run.
While existing works try to identify and mitigate long-run biases through smart system design, we introduce techniques for monitoring fairness in real time.
Our goal is to build and deploy a monitor that will continuously observe a long sequence of events generated by the system in the wild.
- Score: 3.372200852710289
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A machine-learned system that is fair in static decision-making tasks may
have biased societal impacts in the long-run. This may happen when the system
interacts with humans and feedback patterns emerge, reinforcing old biases in
the system and creating new biases. While existing works try to identify and
mitigate long-run biases through smart system design, we introduce techniques
for monitoring fairness in real time. Our goal is to build and deploy a monitor
that will continuously observe a long sequence of events generated by the
system in the wild, and will output, with each event, a verdict on how fair the
system is at the current point in time. The advantages of monitoring are
two-fold. Firstly, fairness is evaluated at run-time, which is important
because unfair behaviors may not be eliminated a priori, at design-time, due to
partial knowledge about the system and the environment, as well as
uncertainties and dynamic changes in the system and the environment, such as
the unpredictability of human behavior. Secondly, monitors are by design
oblivious to how the monitored system is constructed, which makes them suitable
to be used as trusted third-party fairness watchdogs. They function as
computationally lightweight statistical estimators, and their correctness
proofs rely on the rigorous analysis of the stochastic process that models the
assumptions about the underlying dynamics of the system. We show, both in
theory and experiments, how monitors can warn us (1) if a bank's credit policy
over time has created an unfair distribution of credit scores among the
population, and (2) if a resource allocator's allocation policy over time has
made unfair allocations. Our experiments demonstrate that the monitors
introduce very low overhead. We believe that runtime monitoring is an important
and mathematically rigorous new addition to the fairness toolbox.
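Illustration (not from the paper): the abstract describes the monitors as computationally lightweight statistical estimators that emit a verdict with every observed event. The minimal Python sketch below shows what such a streaming estimator could look like for a demographic-parity-style property in the bank-credit setting: it tracks positive-decision rates per group and, at any point in the event stream, reports the current rate gap together with a Hoeffding-style confidence radius. The class and variable names, the chosen statistic, and the independence assumption behind the bound are illustrative assumptions, not the authors' construction.

```python
import math
from dataclasses import dataclass


@dataclass
class GroupStats:
    """Running decision counts for one demographic group."""
    n: int = 0          # decisions observed so far
    positives: int = 0  # positive decisions (e.g., credit granted)

    def rate(self) -> float:
        return self.positives / self.n if self.n else 0.0


class FairnessMonitor:
    """Streaming monitor for a demographic-parity-style property.

    After every observed event (group, decision) it can report the current
    estimate of the gap in positive-decision rates between groups "A" and
    "B", plus a Hoeffding-style confidence radius. Illustrative sketch only:
    it treats decisions as independent Bernoulli trials, which is a
    simplification of the stochastic-process assumptions analysed in the
    paper.
    """

    def __init__(self, delta: float = 0.05):
        self.delta = delta  # confidence parameter for the reported interval
        self.groups = {"A": GroupStats(), "B": GroupStats()}

    def observe(self, group: str, positive: bool) -> None:
        stats = self.groups[group]
        stats.n += 1
        stats.positives += int(positive)

    def verdict(self) -> tuple[float, float]:
        a, b = self.groups["A"], self.groups["B"]

        def radius(n: int) -> float:
            # Hoeffding bound for one empirical rate; the factor 4/delta
            # comes from a union bound over two groups and two interval sides.
            return math.sqrt(math.log(4 / self.delta) / (2 * n)) if n else math.inf

        gap = a.rate() - b.rate()
        return gap, radius(a.n) + radius(b.n)


# Usage: feed a stream of (group, decision) events and read off a verdict.
monitor = FairnessMonitor(delta=0.05)
for group, decision in [("A", True), ("B", False), ("A", True), ("B", True)]:
    monitor.observe(group, decision)

gap, conf = monitor.verdict()
print(f"estimated positive-rate gap = {gap:+.2f} +/- {conf:.2f}")
```

In this hypothetical setup, a verdict of "unfair" would correspond to the confidence interval around the rate gap lying entirely outside a chosen tolerance; the paper's monitors instead derive their guarantees from the underlying stochastic process model.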
Related papers
- Fairness and Bias Mitigation in Computer Vision: A Survey [61.01658257223365]
Computer vision systems are increasingly being deployed in high-stakes real-world applications.
There is a dire need to ensure that they do not propagate or amplify any discriminatory tendencies in historical or human-curated data.
This paper presents a comprehensive survey on fairness that summarizes and sheds light on ongoing trends and successes in the context of computer vision.
arXiv Detail & Related papers (2024-08-05T13:44:22Z)
- Designing monitoring strategies for deployed machine learning algorithms: navigating performativity through a causal lens [6.329470650220206]
The aim of this work is to highlight the relatively under-appreciated complexity of designing a monitoring strategy.
We consider an ML-based risk prediction algorithm for predicting unplanned readmissions.
Results from this case study emphasize the seemingly simple (and obvious) fact that not all monitoring systems are created equal.
arXiv Detail & Related papers (2023-11-20T00:15:16Z)
- Monitoring Algorithmic Fairness under Partial Observations [3.790015813774933]
Runtime verification techniques have been introduced to monitor the algorithmic fairness of deployed systems.
Previous monitoring techniques assume full observability of the states of the monitored system.
We extend fairness monitoring to systems modeled as partially observed Markov chains.
arXiv Detail & Related papers (2023-08-01T07:35:54Z)
- Monitoring Algorithmic Fairness [3.372200852710289]
We present runtime verification of algorithmic fairness for systems whose models are unknown.
We introduce a specification language that can model many common algorithmic fairness properties.
We show how we can monitor if a bank is fair in giving loans to applicants from different social backgrounds, and if a college is fair in admitting students.
arXiv Detail & Related papers (2023-05-25T12:17:59Z)
- Fairness in Forecasting of Observations of Linear Dynamical Systems [10.762748665074794]
We introduce two natural notions of fairness in time-series forecasting problems: fairness and instantaneous fairness.
We show globally convergent methods for optimisation of fairness-constrained learning problems.
Our results on a biased data set motivated by insurance applications and the well-known COMPAS data set demonstrate the efficacy of our methods.
arXiv Detail & Related papers (2022-09-12T14:32:12Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Towards Partial Monitoring: It is Always too Soon to Give Up [0.0]
This paper revises the notion of monitorability from a practical perspective.
We show how non-monitorable properties can still be used to generate partial monitors, which can partially check the properties.
arXiv Detail & Related papers (2021-10-25T01:55:05Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Stochastically forced ensemble dynamic mode decomposition for forecasting and analysis of near-periodic systems [65.44033635330604]
We introduce a novel load forecasting method in which observed dynamics are modeled as a forced linear system.
We show that its use of intrinsic linear dynamics offers a number of desirable properties in terms of interpretability and parsimony.
Results are presented for a test case using load data from an electrical grid.
arXiv Detail & Related papers (2020-10-08T20:25:52Z)
- An Uncertainty-based Human-in-the-loop System for Industrial Tool Wear Analysis [68.8204255655161]
We show that uncertainty measures based on Monte-Carlo dropout in the context of a human-in-the-loop system increase the system's transparency and performance.
A simulation study demonstrates that the uncertainty-based human-in-the-loop system increases performance for different levels of human involvement.
arXiv Detail & Related papers (2020-07-14T15:47:37Z)
- Training-free Monocular 3D Event Detection System for Traffic Surveillance [93.65240041833319]
Existing event detection systems are mostly learning-based and have achieved convincing performance when a large amount of training data is available.
In real-world scenarios, collecting sufficient labeled training data is expensive and sometimes impossible.
We propose a training-free monocular 3D event detection system for traffic surveillance.
arXiv Detail & Related papers (2020-02-01T04:42:57Z)