ESCAPE: Countering Systematic Errors from Machine's Blind Spots via
Interactive Visual Analysis
- URL: http://arxiv.org/abs/2303.09657v1
- Date: Thu, 16 Mar 2023 21:29:50 GMT
- Title: ESCAPE: Countering Systematic Errors from Machine's Blind Spots via
Interactive Visual Analysis
- Authors: Yongsu Ahn, Yu-Ru Lin, Panpan Xu, Zeng Dai
- Abstract summary: We propose ESCAPE, a visual analytic system that promotes a human-in-the-loop workflow for countering systematic errors.
By allowing human users to easily inspect spurious associations, the system helps them spontaneously recognize concepts associated with misclassifications.
We also propose two statistical approaches: relative concept association, which better quantifies the associations between a concept and instances, and a debiasing method, which mitigates spurious associations.
- Score: 13.97436974677563
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classification models learn to generalize the associations between data
samples and their target classes. However, researchers have increasingly
observed that machine learning practice easily leads to systematic errors in AI
applications, a phenomenon referred to as AI blindspots. Such blindspots arise
when a model is trained with training samples (e.g., cat/dog classification)
where important patterns (e.g., black cats) are missing or
periphery/undesirable patterns (e.g., dogs with grass background) are
misleading towards a certain class. Even sophisticated techniques cannot
guarantee to capture, reason about, or prevent these spurious associations. In
this work, we propose ESCAPE, a visual analytic system that promotes a
human-in-the-loop workflow for countering systematic errors. By allowing human
users to easily inspect spurious associations, the system helps users
spontaneously recognize concepts associated with misclassifications and evaluate
mitigation strategies that can reduce biased associations. We also propose two
statistical approaches: relative concept association, to better quantify the
associations between a concept and instances, and a debiasing method, to mitigate
spurious associations. We demonstrate the utility of our proposed ESCAPE system
and statistical measures through extensive evaluation including quantitative
experiments, usage scenarios, expert interviews, and controlled user
experiments.
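The abstract does not spell out how relative concept association is computed. Purely as an illustration, a lift-style score that compares a concept's frequency among misclassified instances with its overall frequency captures the same intuition; the sketch below is a hypothetical Python rendering under that assumption, not the paper's actual measure.

```python
def relative_concept_association(instances, concept):
    """Hypothetical lift-style score: how much more often `concept`
    appears among misclassified instances than in the data overall.
    `instances` is a list of dicts with a 'concepts' set and a
    'misclassified' bool. Not the paper's exact definition.
    """
    errors = [x for x in instances if x["misclassified"]]
    if not instances or not errors:
        return 0.0
    p_concept = sum(concept in x["concepts"] for x in instances) / len(instances)
    p_concept_in_errors = sum(concept in x["concepts"] for x in errors) / len(errors)
    # Lift > 1 flags the concept as over-represented among errors,
    # i.e., a candidate spurious association worth inspecting.
    return p_concept_in_errors / p_concept if p_concept > 0 else 0.0
```

Concepts with the highest scores would then be surfaced in the interface for a human to inspect and, if judged spurious, targeted by the debiasing step.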
Related papers
- Regularized Contrastive Partial Multi-view Outlier Detection [76.77036536484114]
We propose a novel method named Regularized Contrastive Partial Multi-view Outlier Detection (RCPMOD).
In this framework, we utilize contrastive learning to learn view-consistent information and distinguish outliers by the degree of consistency.
Experimental results on four benchmark datasets demonstrate that our proposed approach could outperform state-of-the-art competitors.
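The summary only states that outliers are separated by their degree of cross-view consistency. A minimal sketch of that idea, assuming two hypothetical view encoders whose embeddings were trained contrastively to agree on inliers:

```python
import torch
import torch.nn.functional as F

def consistency_outlier_scores(z_view1: torch.Tensor, z_view2: torch.Tensor):
    """Score each instance by how inconsistent its two view embeddings
    are (1 - cosine similarity); higher means more outlier-like.
    z_view1, z_view2: (N, d) embeddings from two view encoders.
    A generic stand-in, not RCPMOD's actual scoring rule.
    """
    z1 = F.normalize(z_view1, dim=1)
    z2 = F.normalize(z_view2, dim=1)
    return 1.0 - (z1 * z2).sum(dim=1)  # (N,) inconsistency scores
```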
arXiv Detail & Related papers (2024-08-02T14:34:27Z)
- Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z)
- When Measures are Unreliable: Imperceptible Adversarial Perturbations toward Top-$k$ Multi-Label Learning [83.8758881342346]
A novel loss function is devised to generate adversarial perturbations that could achieve both visual and measure imperceptibility.
Experiments on large-scale benchmark datasets demonstrate the superiority of our proposed method in attacking the top-$k$ multi-label systems.
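The devised loss is not reproduced here; purely as an assumed sketch, an objective that pushes the true labels out of the top-$k$ while keeping the perturbation small might look like:

```python
import torch

def topk_escape_loss(logits, target_labels, delta, k=5, lam=1.0):
    """Hypothetical surrogate for a top-k multi-label attack: a hinge
    term is positive while any target label's logit still reaches the
    k-th largest logit, and an L2 penalty on the perturbation `delta`
    encourages visual imperceptibility. Not the paper's actual loss.
    logits: (N, L); target_labels: (N, T) long indices; delta: (N, ...).
    """
    kth = logits.topk(k, dim=1).values[:, -1]       # k-th largest logit, (N,)
    tgt = logits.gather(1, target_labels)           # target labels' logits, (N, T)
    escape = torch.clamp(tgt - kth.unsqueeze(1), min=0).sum(dim=1)
    return (escape + lam * delta.flatten(1).norm(dim=1)).mean()
```

Minimizing this over `delta` (e.g., with projected gradient steps) would drive the targeted labels below the top-$k$ cut-off.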
arXiv Detail & Related papers (2023-07-27T13:18:47Z)
- Discover and Cure: Concept-aware Mitigation of Spurious Correlation [14.579651844642616]
Deep neural networks often rely on spurious correlations to make predictions.
We propose an interpretable framework, Discover and Cure (DISC) to tackle the issue.
DISC provides better generalization ability and interpretability than existing approaches.
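The two-stage name suggests the shape of the method; as a hypothetical stand-in for the discovery stage only, one could rank concepts by how strongly their presence correlates with model errors:

```python
import numpy as np

def rank_spurious_concepts(concept_matrix, errors, top_m=5):
    """Rank concepts by |correlation| between presence and model error.
    concept_matrix: (N, C) binary presence of C concepts; errors: (N,)
    binary misclassification flags. A generic proxy for a discovery
    step; a 'cure' step would then rebalance or augment training data
    along the top-ranked concepts. Not DISC's actual algorithm.
    """
    X = concept_matrix - concept_matrix.mean(axis=0)
    e = errors - errors.mean()
    corr = (X * e[:, None]).mean(axis=0) / (
        concept_matrix.std(axis=0) * errors.std() + 1e-12
    )
    return np.argsort(-np.abs(corr))[:top_m]
```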
arXiv Detail & Related papers (2023-05-01T04:19:27Z)
- Fairness in Forecasting of Observations of Linear Dynamical Systems [10.762748665074794]
We introduce two natural notions of fairness in time-series forecasting problems: subgroup fairness and instantaneous fairness.
We show globally convergent methods for optimisation of fairness-constrained learning problems.
Our results on a biased data set motivated by insurance applications and the well-known COMPAS data set demonstrate the efficacy of our methods.
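The two notions are not defined in this summary. On an assumed reading only, instantaneous fairness would equalize expected prediction loss across protected groups at every time step,

$$\mathbb{E}\big[\ell(\hat{y}_t, y_t) \mid g = a\big] \;=\; \mathbb{E}\big[\ell(\hat{y}_t, y_t) \mid g = b\big] \quad \text{for all } t \text{ and groups } a, b,$$

while the subgroup notion would constrain the loss aggregated over the whole forecasting horizon.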
arXiv Detail & Related papers (2022-09-12T14:32:12Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
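The summary does not describe the simulation method itself; a toy linear-SCM sketch of the weaken-an-edge interaction, with all names hypothetical:

```python
import numpy as np

def weaken_edge_and_resimulate(data, cause, effect, coef, factor=0.5):
    """Shrink the linear edge cause -> effect to `factor * coef` and
    recompute the effect column, keeping each row's original residual
    noise. data: dict of column name -> np.ndarray; coef: the fitted
    edge weight. A toy stand-in for D-BIAS's debiased-data simulation,
    not its actual (novel) method.
    """
    residual = data[effect] - coef * data[cause]
    debiased = dict(data)
    debiased[effect] = factor * coef * data[cause] + residual
    return debiased
```

Setting `factor=0.0` corresponds to deleting the edge outright; the audit loop would then re-check fairness metrics on the regenerated data.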
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Hybrid Dynamic Contrast and Probability Distillation for Unsupervised Person Re-Id [109.1730454118532]
Unsupervised person re-identification (Re-Id) has attracted increasing attention due to its practical application in real-world video surveillance systems.
We present the hybrid dynamic cluster contrast and probability distillation algorithm.
It formulates unsupervised Re-Id as a unified local-to-global dynamic contrastive learning and self-supervised probability distillation framework.
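The probability distillation term is not spelled out in the summary; a generic soft-label distillation between cluster-assignment distributions, offered only as an illustrative stand-in:

```python
import torch
import torch.nn.functional as F

def probability_distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened cluster-assignment
    distributions of a student and a (detached) teacher. A standard
    distillation loss used here as a stand-in; the paper's exact
    formulation is not given in the summary.
    """
    p_teacher = F.softmax(teacher_logits / T, dim=1).detach()
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```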
arXiv Detail & Related papers (2021-09-29T02:56:45Z)
- Towards Fair Affective Robotics: Continual Learning for Mitigating Bias in Facial Expression and Action Unit Recognition [5.478764356647437]
We propose Continual Learning (CL) as an effective strategy to enhance fairness in Facial Expression Recognition (FER) systems.
We compare different state-of-the-art bias mitigation approaches with CL-based strategies for fairness on expression recognition and Action Unit (AU) detection tasks.
Our experiments show that CL-based methods, on average, outperform popular bias mitigation techniques.
arXiv Detail & Related papers (2021-03-15T18:36:14Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
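As a loose, assumed simplification of the accumulated-momentum idea (no Metropolis correction, with momentum standing in for HMC's auxiliary variable), an iterative attack that carries momentum across steps and records the resulting chain could look like:

```python
import torch

def momentum_adversarial_chain(model, x, y, loss_fn,
                               eps=8/255, step=2/255, gamma=0.9, n_steps=10):
    """Generate a sequence of adversarial examples by accumulating the
    (normalized) loss gradient as momentum and taking signed steps,
    projected into an L-infinity ball of radius eps around x. A rough
    sketch in the spirit of momentum-based attacks, not HMCAM itself.
    """
    x_adv = x.clone().detach()
    momentum = torch.zeros_like(x)
    chain = []
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        momentum = gamma * momentum + grad / (grad.abs().mean() + 1e-12)
        x_adv = x_adv.detach() + step * momentum.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # project
        chain.append(x_adv)
    return chain
```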
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.