Monitoring Algorithmic Fairness under Partial Observations
- URL: http://arxiv.org/abs/2308.00341v1
- Date: Tue, 1 Aug 2023 07:35:54 GMT
- Title: Monitoring Algorithmic Fairness under Partial Observations
- Authors: Thomas A. Henzinger, Konstantin Kueffner, Kaushik Mallik
- Abstract summary: Runtime verification techniques have been introduced to monitor the algorithmic fairness of deployed systems.
Previous monitoring techniques assume full observability of the states of the monitored system.
We extend fairness monitoring to systems modeled as partially observed Markov chains.
- Score: 3.790015813774933
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As AI and machine-learned software are used increasingly for making decisions
that affect humans, it is imperative that they remain fair and unbiased in
their decisions. To complement design-time bias mitigation measures, runtime
verification techniques have been introduced recently to monitor the
algorithmic fairness of deployed systems. Previous monitoring techniques assume
full observability of the states of the (unknown) monitored system. Moreover,
they can monitor only fairness properties that are specified as arithmetic
expressions over the probabilities of different events. In this work, we extend
fairness monitoring to systems modeled as partially observed Markov chains
(POMC), and to specifications containing arithmetic expressions over the
expected values of numerical functions on event sequences. The only assumptions
we make are that the underlying POMC is aperiodic and starts in the stationary
distribution, with a bound on its mixing time being known. These assumptions
enable us to estimate a given property for the entire distribution of possible
executions of the monitored POMC, by observing only a single execution. Our
monitors observe a long run of the system and, after each new observation,
output updated PAC-estimates of how fair or biased the system is. The monitors
are computationally lightweight and, using a prototype implementation, we
demonstrate their effectiveness on several real-world examples.
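The abstract describes monitors that, after each new observation, output updated PAC-estimates of a fairness property. The following is an illustrative sketch only, not the paper's actual algorithm: it estimates the expected value of a bounded numerical function over observations using a Hoeffding-style confidence bound, and crudely accounts for the correlations in a single POMC execution by discounting the effective sample size with an assumed mixing-time bound (the `mixing_bound` parameter and this discounting heuristic are assumptions, not from the paper).

```python
import math

class PACFairnessMonitor:
    """Illustrative running PAC-estimator for E[f(observation)].

    Simplified sketch: uses a Hoeffding bound as if observations were
    independent, discounting the sample size by an assumed bound on the
    mixing time. The paper's monitor handles correlations rigorously;
    this toy version only approximates that idea.
    """

    def __init__(self, delta=0.05, f_range=(0.0, 1.0), mixing_bound=1):
        self.delta = delta                # failure probability of the PAC estimate
        self.lo, self.hi = f_range        # known range of the function f
        self.mixing_bound = mixing_bound  # assumed bound on the mixing time
        self.n = 0
        self.total = 0.0

    def observe(self, value):
        """Incorporate one observation f(o_t); return (estimate, error_bound)."""
        self.n += 1
        self.total += value
        estimate = self.total / self.n
        # Heuristic: shrink the effective sample size by the mixing-time bound.
        n_eff = max(1, self.n // self.mixing_bound)
        width = (self.hi - self.lo) * math.sqrt(
            math.log(2.0 / self.delta) / (2.0 * n_eff)
        )
        return estimate, width
```

Fed a long stream of, say, 0/1 indicators of favorable decisions for some group, the monitor's output after each step is an interval `estimate ± width` that, under the independence-style assumption above, contains the true expectation with probability at least `1 - delta`; the interval tightens as the run grows.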
Related papers
- Designing monitoring strategies for deployed machine learning algorithms: navigating performativity through a causal lens [6.329470650220206]
The aim of this work is to highlight the relatively under-appreciated complexity of designing a monitoring strategy.
We consider an ML-based risk prediction algorithm for predicting unplanned readmissions.
Results from this case study emphasize the seemingly simple (and obvious) fact that not all monitoring systems are created equal.
arXiv Detail & Related papers (2023-11-20T00:15:16Z) - Direct Unsupervised Denoising [60.71146161035649]
Unsupervised denoisers do not directly produce a single prediction, such as the MMSE estimate.
We present an alternative approach that trains a deterministic network alongside the VAE to directly predict a central tendency.
arXiv Detail & Related papers (2023-10-27T13:02:12Z) - CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection [53.83593870825628]
One main challenge in time series anomaly detection (TSAD) is the lack of labelled data in many real-life scenarios.
Most of the existing anomaly detection methods focus on learning the normal behaviour of unlabelled time series in an unsupervised manner.
We introduce a novel end-to-end self-supervised ContrAstive Representation Learning approach for time series anomaly detection.
arXiv Detail & Related papers (2023-08-18T04:45:56Z) - DIVERSIFY: A General Framework for Time Series Out-of-distribution Detection and Generalization [58.704753031608625]
Time series is one of the most challenging modalities in machine learning research.
OOD detection and generalization on time series tend to suffer due to the non-stationary nature of the data.
We propose DIVERSIFY, a framework for OOD detection and generalization on dynamic distributions of time series.
arXiv Detail & Related papers (2023-08-04T12:27:11Z) - Monitoring Algorithmic Fairness [3.372200852710289]
We present runtime verification of algorithmic fairness for systems whose models are unknown.
We introduce a specification language that can model many common algorithmic fairness properties.
We show how we can monitor if a bank is fair in giving loans to applicants from different social backgrounds, and if a college is fair in admitting students.
arXiv Detail & Related papers (2023-05-25T12:17:59Z) - Runtime Monitoring of Dynamic Fairness Properties [3.372200852710289]
A machine-learned system that is fair in static decision-making tasks may have biased societal impacts in the long run.
While existing works try to identify and mitigate long-run biases through smart system design, we introduce techniques for monitoring fairness in real time.
Our goal is to build and deploy a monitor that will continuously observe a long sequence of events generated by the system in the wild.
arXiv Detail & Related papers (2023-05-08T13:32:23Z) - Sampling-Based Robust Control of Autonomous Systems with Non-Gaussian Noise [59.47042225257565]
We present a novel planning method that does not rely on any explicit representation of the noise distributions.
First, we abstract the continuous system into a discrete-state model that captures noise by probabilistic transitions between states.
We capture these bounds in the transition probability intervals of a so-called interval Markov decision process (iMDP).
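The entry above abstracts a noisy continuous system into discrete states with probability intervals on transitions. As a rough illustration of where such intervals can come from, the sketch below derives a Hoeffding-style confidence interval for a single transition probability from sampled transitions; the function name and the use of Hoeffding's inequality are assumptions for illustration, not the paper's scenario-based construction.

```python
import math

def transition_interval(successes, trials, delta=0.05):
    """PAC-style interval for a transition probability, estimated from
    `trials` sampled transitions of which `successes` reached the target
    state. A simplified stand-in for the intervals labeling iMDP edges.
    """
    p_hat = successes / trials
    # Hoeffding deviation bound: holds with probability at least 1 - delta.
    eps = math.sqrt(math.log(2.0 / delta) / (2.0 * trials))
    return max(0.0, p_hat - eps), min(1.0, p_hat + eps)
```

Collecting such an interval for every abstract transition yields an iMDP whose robust policies are then computed against the worst-case probabilities inside each interval.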
arXiv Detail & Related papers (2021-10-25T06:18:55Z) - Towards Partial Monitoring: It is Always too Soon to Give Up [0.0]
This paper revises the notion of monitorability from a practical perspective.
We show how non-monitorable properties can still be used to generate partial monitors, which can partially check the properties.
arXiv Detail & Related papers (2021-10-25T01:55:05Z) - FairCanary: Rapid Continuous Explainable Fairness [8.362098382773265]
We present Quantile Demographic Drift (QDD), a novel model bias quantification metric.
QDD is ideal for continuous monitoring scenarios and does not suffer from the statistical limitations of conventional threshold-based bias metrics.
We incorporate QDD into a continuous model monitoring system, called FairCanary, that reuses existing explanations computed for each individual prediction.
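The FairCanary entry describes QDD as a quantile-based bias metric. One plausible reading of a quantile-based group comparison, sketched as a hypothetical toy metric (not FairCanary's actual QDD definition, which the summary does not specify), is the mean absolute gap between corresponding quantiles of two groups' prediction-score distributions:

```python
from statistics import quantiles

def quantile_gap(scores_a, scores_b, n=10):
    """Hypothetical quantile-based drift metric: mean absolute difference
    between the n-quantile cut points of two groups' score distributions.
    Zero when the distributions' quantiles coincide.
    """
    qa = quantiles(scores_a, n=n)  # n-1 cut points for group A
    qb = quantiles(scores_b, n=n)  # n-1 cut points for group B
    return sum(abs(x - y) for x, y in zip(qa, qb)) / len(qa)
```

A metric of this shape compares entire score distributions rather than a single thresholded rate, which is one way a quantile-based measure can avoid the brittleness of threshold-based bias metrics.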
arXiv Detail & Related papers (2021-06-13T17:47:44Z) - Stochastically forced ensemble dynamic mode decomposition for forecasting and analysis of near-periodic systems [65.44033635330604]
We introduce a novel load forecasting method in which observed dynamics are modeled as a forced linear system.
We show that its use of intrinsic linear dynamics offers a number of desirable properties in terms of interpretability and parsimony.
Results are presented for a test case using load data from an electrical grid.
arXiv Detail & Related papers (2020-10-08T20:25:52Z) - Foreseeing the Benefits of Incidental Supervision [83.08441990812636]
This paper studies whether we can, in a single framework, quantify the benefits of various types of incidental signals for a given target task without going through experiments.
We propose a unified PAC-Bayesian motivated informativeness measure, PABI, that characterizes the uncertainty reduction provided by incidental supervision signals.
arXiv Detail & Related papers (2020-06-09T20:59:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.