FairCanary: Rapid Continuous Explainable Fairness
- URL: http://arxiv.org/abs/2106.07057v4
- Date: Sun, 28 May 2023 16:20:54 GMT
- Title: FairCanary: Rapid Continuous Explainable Fairness
- Authors: Avijit Ghosh, Aalok Shanbhag, Christo Wilson
- Abstract summary: We present Quantile Demographic Drift (QDD), a novel model bias quantification metric.
QDD is ideal for continuous monitoring scenarios and does not suffer from the statistical limitations of conventional threshold-based bias metrics.
We incorporate QDD into a continuous model monitoring system, called FairCanary, that reuses existing explanations computed for each individual prediction.
- Score: 8.362098382773265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Systems that offer continuous model monitoring have emerged in response to
(1) well-documented failures of deployed Machine Learning (ML) and Artificial
Intelligence (AI) models and (2) new regulatory requirements impacting these
models. Existing monitoring systems continuously track the performance of
deployed ML models and compute feature importance (a.k.a. explanations) for
each prediction to help developers identify the root causes of emergent model
performance problems.
We present Quantile Demographic Drift (QDD), a novel model bias
quantification metric that uses quantile binning to measure differences in the
overall prediction distributions over subgroups. QDD is ideal for continuous
monitoring scenarios, does not suffer from the statistical limitations of
conventional threshold-based bias metrics, and does not require outcome labels
(which may not be available at runtime). We incorporate QDD into a continuous
model monitoring system, called FairCanary, that reuses existing explanations
computed for each individual prediction to quickly compute explanations for the
QDD bias metrics. This optimization makes FairCanary an order of magnitude
faster than previous work that has tried to generate feature-level bias
explanations.
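The abstract describes QDD only at a high level (quantile binning over subgroup prediction distributions, no outcome labels required), so the sketch below is one plausible reading rather than the paper's exact definition; the function name, the number of quantiles, and the mean-absolute-gap aggregation are illustrative assumptions.

```python
import numpy as np

def quantile_demographic_drift(scores_a, scores_b, n_quantiles=100):
    """Hypothetical sketch of a QDD-style metric: compare the prediction
    distributions of two subgroups quantile-by-quantile.

    scores_a, scores_b: 1-D arrays of model scores for each subgroup.
    Returns the mean absolute gap between matched quantiles; 0 means the
    two distributions are indistinguishable at this resolution.
    """
    qs = np.linspace(0.0, 1.0, n_quantiles + 1)[1:-1]  # interior quantiles
    quantiles_a = np.quantile(scores_a, qs)
    quantiles_b = np.quantile(scores_b, qs)
    return float(np.mean(np.abs(quantiles_a - quantiles_b)))

# Example: a monitoring loop would recompute this on each window of
# predictions; no outcome labels are needed.
rng = np.random.default_rng(0)
group_a = rng.beta(2.0, 5.0, size=10_000)  # scores for subgroup A
group_b = rng.beta(2.2, 5.0, size=10_000)  # scores for subgroup B
print(quantile_demographic_drift(group_a, group_b))
```

Because such a metric needs only the two score distributions, a monitoring system can recompute it on every prediction window, which matches the continuous-monitoring setting the abstract describes.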
Related papers
- Root Causing Prediction Anomalies Using Explainable AI [3.970146574042422]
We present a novel application of explainable AI (XAI) for root-causing performance degradation in machine learning models.
A single feature corruption can cause cascading feature, label and concept drifts.
We have successfully applied this technique to improve the reliability of models used in personalized advertising.
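The summary does not spell out the root-causing procedure, but a minimal sketch of the general idea, comparing per-feature attribution distributions between a reference window and a live window, might look as follows; the helper name and the ranking heuristic are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def attribution_shift(ref_attr, live_attr, feature_names):
    """Illustrative root-causing heuristic: rank features by how much their
    mean absolute attribution changed between a reference window and a live
    window. Both arrays have shape (n_samples, n_features), e.g. SHAP values
    already computed per prediction by the monitoring system."""
    shift = np.abs(np.abs(live_attr).mean(axis=0) - np.abs(ref_attr).mean(axis=0))
    order = np.argsort(shift)[::-1]  # largest shift first: likely root cause
    return [(feature_names[i], float(shift[i])) for i in order]
```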
arXiv Detail & Related papers (2024-03-04T19:38:50Z)
- MLAD: A Unified Model for Multi-system Log Anomaly Detection [35.68387377240593]
We propose MLAD, a novel anomaly detection model that incorporates semantic relational reasoning across multiple systems.
Specifically, we employ Sentence-BERT to capture the similarities between log sequences and convert them into high-dimensional learnable semantic vectors.
We revamp the formulas of the Attention layer to discern the significance of each keyword in the sequence and model the overall distribution of the multi-system dataset.
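The embedding step the summary describes can be illustrated with the sentence-transformers package; the checkpoint name and the cosine-similarity comparison below are assumptions for illustration, and MLAD's revamped attention layer is not reproduced here.

```python
from sentence_transformers import SentenceTransformer, util

# Any pretrained Sentence-BERT checkpoint serves for illustration.
model = SentenceTransformer("all-MiniLM-L6-v2")

log_sequences = [
    "connection timeout on node 3; retrying",
    "connection timed out on node 3, retry scheduled",
    "disk quota exceeded for user batch-writer",
]
# Each log sequence becomes a dense semantic vector.
embeddings = model.encode(log_sequences, convert_to_tensor=True)

# Pairwise cosine similarities: the first two logs should score high.
print(util.cos_sim(embeddings, embeddings))
```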
arXiv Detail & Related papers (2024-01-15T12:51:13Z)
- A hybrid feature learning approach based on convolutional kernels for ATM fault prediction using event-log data [5.859431341476405]
We present a predictive model based on convolutional kernels (MiniROCKET and HYDRA) to extract features from event-log data.
The proposed methodology is applied to a significant real-world dataset.
The model was integrated into a container-based decision support system to support operators in the timely maintenance of ATMs.
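A minimal sketch of the convolutional-kernel feature step, assuming sktime's MiniRocket implementation with a linear classifier on top; the synthetic arrays stand in for the paper's event-log features, and HYDRA and the decision-support container are omitted.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sktime.transformations.panel.rocket import MiniRocket

# Synthetic stand-in for event-log-derived time series, shaped
# (n_instances, n_channels, n_timepoints).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1, 100)).astype(np.float32)
y = rng.integers(0, 2, size=200)  # 1 = fault within horizon (illustrative)

minirocket = MiniRocket()            # random convolutional kernels
features = minirocket.fit_transform(X)

# A linear classifier on the kernel features, as in ROCKET-style pipelines.
clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))
clf.fit(features, y)
print(clf.score(features, y))
```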
arXiv Detail & Related papers (2023-05-17T08:55:53Z)
- Uncovering the Missing Pattern: Unified Framework Towards Trajectory Imputation and Prediction [60.60223171143206]
Trajectory prediction is a crucial undertaking in understanding entity movement or human behavior from observed sequences.
Current methods often assume that the observed sequences are complete while ignoring the potential for missing values.
This paper presents a unified framework, the Graph-based Conditional Variational Recurrent Neural Network (GC-VRNN), which can perform trajectory imputation and prediction simultaneously.
arXiv Detail & Related papers (2023-03-28T14:27:27Z)
- VisFIS: Visual Feature Importance Supervision with Right-for-the-Right-Reason Objectives [84.48039784446166]
We show that feature importance (FI) supervision can meaningfully improve VQA model accuracy as well as performance on several Right-for-the-Right-Reason metrics.
Our best performing method, Visual Feature Importance Supervision (VisFIS), outperforms strong baselines on benchmark VQA datasets.
Predictions are more accurate when explanations are plausible and faithful, and not when they are plausible but not faithful.
arXiv Detail & Related papers (2022-06-22T17:02:01Z)
- Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) means finding a small subset of the input graph's features that drives the model's prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z)
- Complex Event Forecasting with Prediction Suffix Trees: Extended Technical Report [70.7321040534471]
Complex Event Recognition (CER) systems have become popular in the past two decades due to their ability to "instantly" detect patterns on real-time streams of events.
There is a lack of methods for forecasting when a pattern might occur before such an occurrence is actually detected by a CER engine.
We present a formal framework that attempts to address the issue of Complex Event Forecasting.
arXiv Detail & Related papers (2021-09-01T09:52:31Z)
- Partially Hidden Markov Chain Linear Autoregressive model: inference and forecasting [0.0]
Time series subject to regime changes have attracted much interest in domains such as econometrics, finance, and meteorology.
We present a novel model that addresses the intermediate case: state processes associated with such time series are modelled by Partially Hidden Markov Chains (PHMCs).
We propose a hidden-state inference procedure and a forecasting function that take the observed states into account when they are available.
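The key idea, inference that respects states whenever they are observed, can be illustrated with a plain discrete-HMM forward pass that clamps the filtered distribution at observed time steps; this sketch assumes standard HMM notation and omits the paper's linear autoregressive emissions.

```python
import numpy as np

def partially_observed_forward(pi, A, emis_lik, observed):
    """Forward recursion over K hidden states and T steps.

    pi: (K,) initial distribution; A: (K, K) transition matrix;
    emis_lik: (T, K) likelihood of each observation under each state;
    observed: length-T list, state index if the state is known at t, else None.
    Returns filtered state probabilities, shape (T, K).
    """
    T, K = emis_lik.shape
    alpha = np.zeros((T, K))
    alpha[0] = pi * emis_lik[0]
    for t in range(T):
        if t > 0:
            alpha[t] = (alpha[t - 1] @ A) * emis_lik[t]
        if observed[t] is not None:        # clamp to the observed state
            mask = np.zeros(K)
            mask[observed[t]] = 1.0
            alpha[t] *= mask
        alpha[t] /= alpha[t].sum()          # normalise to filtered probs
    return alpha

# Example: 2 states, 3 steps, the state is known only at t=1.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
lik = np.array([[0.7, 0.3], [0.4, 0.6], [0.5, 0.5]])
print(partially_observed_forward(pi, A, lik, [None, 1, None]))
```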
arXiv Detail & Related papers (2021-02-24T22:12:05Z)
- FiD-Ex: Improving Sequence-to-Sequence Models for Extractive Rationale Generation [19.73842483996047]
We develop FiD-Ex, which addresses shortcomings of seq2seq models by introducing sentence markers to eliminate explanation fabrication.
FiD-Ex significantly improves over prior work in terms of explanation metrics and task accuracy, on multiple tasks from the ERASER explainability benchmark.
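The sentence-marker idea can be shown without the full Fusion-in-Decoder architecture: prefix each passage sentence with an index token so the model can emit marker IDs instead of free-form text. The marker format below is an assumption for illustration.

```python
def add_sentence_markers(sentences):
    """Prefix each sentence with an index marker, e.g. 'SENT3', so an
    extractive rationale can be decoded as markers rather than free text."""
    return " ".join(f"SENT{i} {s}" for i, s in enumerate(sentences))

passage = [
    "The model was trained on 10k examples.",
    "Accuracy improved after fine-tuning.",
    "Ablations show markers prevent fabricated rationales.",
]
print(add_sentence_markers(passage))
# A seq2seq model fine-tuned on such inputs can output "SENT2", and the
# rationale is recovered by looking that marker up in the passage, so it
# cannot fabricate a sentence that does not exist.
```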
arXiv Detail & Related papers (2020-12-31T07:22:15Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
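The summary does not give the alternative score's formula, so the sketch below uses a generic GAN-style anomaly score (pixel reconstruction error blended with a discriminator-feature distance) as a stand-in; the weighting and the module interfaces are assumptions, not the paper's definition.

```python
import torch

def anomaly_score(x, encoder, decoder, disc_features, lam=0.9):
    """Generic GAN-style anomaly score, not the paper's exact formula.

    x: (B, C, H, W) input batch; encoder/decoder: the autoencoder halves;
    disc_features: module returning (B, F) discriminator features.
    Blends pixel-space reconstruction error with the distance between
    discriminator features of the input and its reconstruction.
    """
    with torch.no_grad():
        x_hat = decoder(encoder(x))
        recon = torch.mean(torch.abs(x - x_hat), dim=(1, 2, 3))
        feat = torch.mean(torch.abs(disc_features(x) - disc_features(x_hat)), dim=1)
    return (1 - lam) * recon + lam * feat  # higher = more anomalous
```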
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
- Ambiguity in Sequential Data: Predicting Uncertain Futures with Recurrent Models [110.82452096672182]
We propose an extension of the Multiple Hypothesis Prediction (MHP) model to handle ambiguous predictions with sequential data.
We also introduce a novel metric for ambiguous problems, which is better suited to account for uncertainties.
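MHP models are commonly trained with a relaxed winner-takes-all meta-loss in which the hypothesis closest to the ground truth receives nearly all of the gradient, letting the heads specialise on different futures; the PyTorch sketch below assumes mean-squared error, and the relaxation constant is illustrative.

```python
import torch

def wta_loss(hypotheses, target, eps=0.05):
    """Relaxed winner-takes-all loss for multiple hypothesis prediction.

    hypotheses: (batch, n_hyp, dim) predicted futures;
    target:     (batch, dim) observed future.
    The best hypothesis gets weight (1 - eps); the rest share eps, so every
    head keeps a small gradient (a common MHP relaxation).
    """
    per_hyp = ((hypotheses - target.unsqueeze(1)) ** 2).mean(dim=-1)  # (B, H)
    n_hyp = per_hyp.shape[1]
    weights = torch.full_like(per_hyp, eps / (n_hyp - 1))
    best = per_hyp.argmin(dim=1)
    weights[torch.arange(per_hyp.shape[0]), best] = 1.0 - eps
    return (weights * per_hyp).sum(dim=1).mean()
```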
arXiv Detail & Related papers (2020-03-10T09:15:42Z)