Visual Auditor: Interactive Visualization for Detection and
Summarization of Model Biases
- URL: http://arxiv.org/abs/2206.12540v1
- Date: Sat, 25 Jun 2022 02:48:27 GMT
- Title: Visual Auditor: Interactive Visualization for Detection and
Summarization of Model Biases
- Authors: David Munechika, Zijie J. Wang, Jack Reidy, Josh Rubin, Krishna Gade,
Krishnaram Kenthapadi, Duen Horng Chau
- Abstract summary: As machine learning (ML) systems become increasingly widespread, it is necessary to audit these systems for biases prior to their deployment.
Recent research has developed algorithms for effectively identifying intersectional bias in the form of interpretable, underperforming subsets (or slices) of the data.
We propose Visual Auditor, an interactive visualization tool for auditing and summarizing model biases.
- Score: 18.434430375939755
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning (ML) systems become increasingly widespread, it is
necessary to audit these systems for biases prior to their deployment. Recent
research has developed algorithms for effectively identifying intersectional
bias in the form of interpretable, underperforming subsets (or slices) of the
data. However, these solutions and their insights are limited without a tool
for visually understanding and interacting with the results of these
algorithms. We propose Visual Auditor, an interactive visualization tool for
auditing and summarizing model biases. Visual Auditor assists model validation
by providing an interpretable overview of intersectional bias (bias that is
present when examining populations defined by multiple features), details about
relationships between problematic data slices, and a comparison between
underperforming and overperforming data slices in a model. Our open-source tool
runs directly in both computational notebooks and web browsers, making model
auditing accessible and easily integrated into current ML development
workflows. An observational user study in collaboration with domain experts at
Fiddler AI highlights that our tool can help ML practitioners identify and
understand model biases.
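To make the slice-auditing idea in the abstract concrete, here is a minimal pandas sketch of scanning intersectional slices for underperformance. It is an illustration under stated assumptions (a predictions table with one row per example and a boolean `correct` column), not Visual Auditor's actual implementation:

```python
# Illustrative sketch of intersectional slice auditing (not the Visual
# Auditor API): enumerate slices defined by one or two features and flag
# those whose accuracy falls below the model's overall accuracy.
from itertools import combinations

import pandas as pd


def audit_slices(df: pd.DataFrame, features: list[str],
                 correct_col: str = "correct", min_size: int = 30) -> list[dict]:
    overall = df[correct_col].mean()  # overall model accuracy
    results = []
    for k in (1, 2):  # single-feature slices, then pairwise intersections
        for cols in combinations(features, k):
            stats = df.groupby(list(cols))[correct_col].agg(["mean", "size"])
            for values, row in stats.iterrows():
                if row["size"] < min_size:  # skip tiny, statistically noisy slices
                    continue
                key = values if isinstance(values, tuple) else (values,)
                results.append({"slice": dict(zip(cols, key)),
                                "size": int(row["size"]),
                                "accuracy": float(row["mean"]),
                                "gap": float(row["mean"] - overall)})
    return sorted(results, key=lambda r: r["gap"])  # worst slices first
```

Calling `audit_slices(df, ["sex", "race", "age_bucket"])` on such a table would return the most underperforming intersectional slices first. Visual Auditor's contribution is the interactive layer on top of statistics like these: an overview of how problematic slices relate and overlap, and a comparison of under- against overperforming slices.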
Related papers
- LLM-assisted Explicit and Implicit Multi-interest Learning Framework for Sequential Recommendation [50.98046887582194]
We propose an explicit and implicit multi-interest learning framework to model user interests on two levels: behavior and semantics.
The proposed EIMF framework effectively and efficiently combines small models with LLMs to improve the accuracy of multi-interest modeling.
arXiv Detail & Related papers (2024-11-14T13:00:23Z)
- Matchmaker: Self-Improving Large Language Model Programs for Schema Matching [60.23571456538149]
We propose a compositional language model program for schema matching, comprising candidate generation, refinement, and confidence scoring (a hypothetical skeleton of such a pipeline follows this entry).
Matchmaker self-improves in a zero-shot manner without the need for labeled demonstrations.
Empirically, we demonstrate on real-world medical schema matching benchmarks that Matchmaker outperforms previous ML-based approaches.
arXiv Detail & Related papers (2024-10-31T16:34:03Z)
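As a rough illustration of the compositional structure described above, the skeleton below chains candidate-generation, refinement, and confidence-scoring stages. The string heuristics are placeholders standing in for the LLM calls a system like Matchmaker would make; none of the names or scoring rules come from the paper:

```python
# Hypothetical generate -> refine -> score skeleton; the heuristics below
# are stand-ins for LLM calls, not Matchmaker's actual prompts, logic, or API.
from dataclasses import dataclass


@dataclass
class Match:
    source: str
    target: str
    confidence: float = 0.0


def generate_candidates(source: str, target_schema: list[str]) -> list[Match]:
    tokens = set(source.lower().split("_"))     # placeholder for LLM generation
    return [Match(source, t) for t in target_schema
            if tokens & set(t.lower().split("_"))]


def refine(cands: list[Match]) -> list[Match]:
    seen, out = set(), []                       # placeholder refinement: dedupe
    for c in cands:
        if c.target not in seen:
            seen.add(c.target)
            out.append(c)
    return out


def score(cands: list[Match]) -> list[Match]:
    for c in cands:                             # placeholder confidence: token Jaccard
        s = set(c.source.lower().split("_"))
        t = set(c.target.lower().split("_"))
        c.confidence = len(s & t) / len(s | t)
    return sorted(cands, key=lambda c: c.confidence, reverse=True)


print(score(refine(generate_candidates("patient_dob", ["dob", "date_of_birth", "name"]))))
```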
- On Discriminative Probabilistic Modeling for Self-Supervised Representation Learning [85.75164588939185]
We study the discriminative probabilistic modeling problem on a continuous domain for (multimodal) self-supervised representation learning.
We conduct a generalization error analysis to reveal the limitations of the current InfoNCE-based contrastive loss for self-supervised representation learning (a reference sketch of the InfoNCE objective follows this entry).
arXiv Detail & Related papers (2024-10-11T18:02:46Z)
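For reference, the InfoNCE objective mentioned above is the standard contrastive loss sketched below (a generic formulation, not code from the cited paper): the anchor embedding should score its positive higher than all negatives.

```python
# Standard InfoNCE contrastive loss (generic reference sketch): the
# cross-entropy of identifying the positive among all candidates.
import numpy as np


def info_nce(z: np.ndarray, z_pos: np.ndarray, z_negs: np.ndarray,
             temperature: float = 0.1) -> float:
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    logits = np.array([cos(z, z_pos)] + [cos(z, n) for n in z_negs]) / temperature
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[0]))             # small when the positive dominates
```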
- MatPlotAgent: Method and Evaluation for LLM-Based Agentic Scientific Data Visualization [86.61052121715689]
MatPlotAgent is a model-agnostic framework designed to automate scientific data visualization tasks.
MatPlotBench is a high-quality benchmark consisting of 100 human-verified test cases.
arXiv Detail & Related papers (2024-02-18T04:28:28Z)
- AttributionScanner: A Visual Analytics System for Model Validation with Metadata-Free Slice Finding [29.07617945233152]
Data slice finding is an emerging technique for validating machine learning (ML) models by identifying and analyzing subgroups in a dataset that exhibit poor performance.
This approach faces significant challenges, including the laborious and costly requirement for additional metadata.
We introduce AttributionScanner, an innovative human-in-the-loop Visual Analytics (VA) system, designed for metadata-free data slice finding.
Our system identifies interpretable data slices that involve common model behaviors and visualizes these patterns through an Attribution Mosaic design (a generic sketch of attribution-based slice finding follows this entry).
arXiv Detail & Related papers (2024-01-12T09:17:32Z)
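As a hedged sketch of what metadata-free slice finding can look like in general, the snippet below clusters per-example attribution vectors and reports each cluster's accuracy. Random data stands in for real attributions; this is not AttributionScanner's actual algorithm or its Attribution Mosaic design:

```python
# Generic sketch: cluster per-example attribution vectors, then inspect
# each cluster's accuracy to surface candidate slices without metadata.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
attributions = rng.normal(size=(500, 16))       # stand-in attribution vectors
correct = rng.random(500) > 0.2                 # stand-in model correctness

labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(attributions)
overall = correct.mean()
for c in range(8):
    mask = labels == c
    print(f"slice {c}: n={mask.sum():3d}, "
          f"accuracy={correct[mask].mean():.2f} (overall {overall:.2f})")
```

- Towards Better Modeling with Missing Data: A Contrastive Learning-based Visual Analytics Perspective [7.577040836988683]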
Missing data can pose a challenge for machine learning (ML) modeling.
Current approaches are categorized into feature imputation and label prediction.
This study proposes a Contrastive Learning framework to model observed data with missing values.
arXiv Detail & Related papers (2023-09-18T13:16:24Z)
- VLSlice: Interactive Vision-and-Language Slice Discovery [17.8634551024147]
VLSlice is an interactive system enabling user-guided discovery of coherent representation-level subgroups with consistent visiolinguistic behavior.
In a user study, we show that VLSlice enables users to quickly generate diverse, high-coherency slices, and we release the tool publicly.
arXiv Detail & Related papers (2023-09-13T04:02:38Z)
- Discover, Explanation, Improvement: An Automatic Slice Detection Framework for Natural Language Processing [72.14557106085284]
Slice detection models (SDMs) automatically identify underperforming groups of data points.
This paper proposes a benchmark named Discover, Explain, Improve (DEIM) for classification NLP tasks.
Our evaluation shows that Edisa can accurately select error-prone data points with informative semantic features.
arXiv Detail & Related papers (2022-11-08T19:00:00Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, such as weakening or deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset (a toy illustration follows this entry).
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
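The edge-editing interaction described above can be illustrated with a toy linear structural causal model, where deleting a biased edge means zeroing its coefficient and re-simulating the data. This is a generic sketch of the idea, not D-BIAS's simulation method:

```python
# Toy linear SCM: gender -> salary and experience -> salary. "Deleting"
# the biased edge gender -> salary sets its coefficient to zero before
# re-simulating the dataset. Generic illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)                  # sensitive attribute (0/1)
experience = rng.normal(5, 2, n)                # legitimate cause
beta_gender, beta_exp = 4.0, 1.5                # causal edge strengths

salary = beta_gender * gender + beta_exp * experience + rng.normal(0, 1, n)

beta_gender = 0.0                               # user deletes the biased edge
salary_debiased = beta_gender * gender + beta_exp * experience + rng.normal(0, 1, n)


def gap(s):
    return s[gender == 1].mean() - s[gender == 0].mean()


print(f"salary gap before: {gap(salary):.2f}, after: {gap(salary_debiased):.2f}")
```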
- A Unified Comparison of User Modeling Techniques for Predicting Data Interaction and Detecting Exploration Bias [17.518601254380275]
We compare and rank eight user modeling algorithms based on their performance on a diverse set of four user study datasets.
Based on our findings, we highlight open challenges and new directions for analyzing user interactions and visualization provenance.
arXiv Detail & Related papers (2022-08-09T19:51:10Z)
- AdViCE: Aggregated Visual Counterfactual Explanations for Machine Learning Model Validation [9.996986104171754]
We introduce AdViCE, a visual analytics tool that aims to guide users in black-box model debugging and validation.
The solution rests on two main visual user interface innovations: (1) an interactive visualization that enables the comparison of decisions on user-defined data subsets; (2) an algorithm and visual design to compute and visualize counterfactual explanations (a toy counterfactual search follows this entry).
arXiv Detail & Related papers (2021-09-12T22:52:12Z)
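To illustrate the counterfactual idea in general terms, the toy search below nudges one feature at a time until a model's decision flips. It is a minimal sketch of a counterfactual explanation, not AdViCE's algorithm or visual design:

```python
# Toy counterfactual search: scan small perturbations of each feature
# until the model's decision flips. Generic illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # synthetic labels
model = LogisticRegression().fit(X, y)

x = X[0].copy()
base = model.predict(x.reshape(1, -1))[0]           # original decision
for feature in range(X.shape[1]):                   # nudge one feature at a time
    for delta in np.linspace(-3, 3, 61):
        cf = x.copy()
        cf[feature] += delta
        if model.predict(cf.reshape(1, -1))[0] != base:
            print(f"decision flips if feature {feature} moves by {delta:+.1f}")
            break
    else:
        continue                                    # no flip for this feature
    break                                           # stop at the first counterfactual
```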