AdViCE: Aggregated Visual Counterfactual Explanations for Machine Learning Model Validation
- URL: http://arxiv.org/abs/2109.05629v1
- Date: Sun, 12 Sep 2021 22:52:12 GMT
- Title: AdViCE: Aggregated Visual Counterfactual Explanations for Machine Learning Model Validation
- Authors: Oscar Gomez, Steffen Holter, Jun Yuan, Enrico Bertini
- Abstract summary: We introduce AdViCE, a visual analytics tool that aims to guide users in black-box model debugging and validation.
The solution rests on two main visual user interface innovations: (1) an interactive visualization that enables the comparison of decisions on user-defined data subsets; (2) an algorithm and visual design to compute and visualize counterfactual explanations.
- Score: 9.996986104171754
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Rapid improvements in the performance of machine learning models have pushed
them to the forefront of data-driven decision-making. Meanwhile, the increased
integration of these models into various application domains has further
highlighted the need for greater interpretability and transparency. To identify
problems such as bias, overfitting, and incorrect correlations, data scientists
require tools that explain the mechanisms with which these model decisions are
made. In this paper we introduce AdViCE, a visual analytics tool that aims to
guide users in black-box model debugging and validation. The solution rests on
two main visual user interface innovations: (1) an interactive visualization
design that enables the comparison of decisions on user-defined data subsets;
(2) an algorithm and visual design to compute and visualize counterfactual
explanations - explanations that depict model outcomes when data features are
perturbed from their original values. We provide a demonstration of the tool
through a use case that showcases the capabilities and potential limitations of
the proposed approach.
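
To make the idea of a counterfactual explanation concrete, the sketch below greedily searches for a small feature perturbation that flips a binary classifier's prediction. This is a minimal illustration, not the algorithm used in AdViCE (the abstract does not specify one); the model, step size, and iteration budget are assumptions chosen for the example.

```python
# Illustrative counterfactual search: greedily nudge one feature at a
# time until the black-box model's predicted class flips. This is a
# hypothetical sketch, not AdViCE's actual algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

def greedy_counterfactual(model, x, step=0.1, max_iters=100):
    """Return a perturbed copy of `x` whose predicted class differs
    from the original, plus the per-feature deltas, or None if no
    flip is found within `max_iters` greedy steps."""
    original_class = model.predict(x.reshape(1, -1))[0]
    cf = x.astype(float).copy()
    for _ in range(max_iters):
        best_move, best_prob = None, -np.inf
        for j in range(len(cf)):
            for direction in (step, -step):
                candidate = cf.copy()
                candidate[j] += direction
                # Probability of the opposite class: higher means the
                # candidate is closer to flipping the decision.
                prob = model.predict_proba(
                    candidate.reshape(1, -1))[0][1 - original_class]
                if prob > best_prob:
                    best_prob, best_move = prob, (j, direction)
        j, direction = best_move
        cf[j] += direction
        if model.predict(cf.reshape(1, -1))[0] != original_class:
            return cf, cf - x
    return None

# Usage on synthetic data (assumed setup, purely for demonstration):
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)
result = greedy_counterfactual(model, X[0])
if result is not None:
    cf, deltas = result
    print("feature deltas that flip the prediction:", np.round(deltas, 2))
```

In a tool like AdViCE or ViCE, per-feature deltas from many such searches would be aggregated and visualized, so users can compare how much each feature must change to alter decisions across user-defined data subsets.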
Related papers
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z)
- CogCoM: Train Large Vision-Language Models Diving into Details through Chain of Manipulations [61.21923643289266]
Chain of Manipulations is a mechanism that enables Vision-Language Models to solve problems step-by-step with evidence.
After training, models can solve various visual problems by actively eliciting intrinsic manipulations (e.g., grounding, zooming in) without involving external tools.
Our trained model, CogCoM, achieves state-of-the-art performance across 9 benchmarks from 4 categories.
arXiv Detail & Related papers (2024-02-06T18:43:48Z)
- AttributionScanner: A Visual Analytics System for Model Validation with Metadata-Free Slice Finding [29.07617945233152]
Data slice finding is an emerging technique for validating machine learning (ML) models by identifying and analyzing subgroups in a dataset that exhibit poor performance.
This approach faces significant challenges, including the laborious and costly requirement for additional metadata.
We introduce AttributionScanner, an innovative human-in-the-loop Visual Analytics (VA) system, designed for metadata-free data slice finding.
Our system identifies interpretable data slices that involve common model behaviors and visualizes these patterns through an Attribution Mosaic design.
arXiv Detail & Related papers (2024-01-12T09:17:32Z)
- InterVLS: Interactive Model Understanding and Improvement with Vision-Language Surrogates [18.793275018467163]
Deep learning models are widely used in critical applications, highlighting the need for pre-deployment model understanding and improvement.
Visual concept-based methods, while increasingly used for this purpose, face challenges: (1) most concepts lack interpretability, (2) existing methods require model knowledge that is often unavailable at run time, and (3) no-code methods for improving a model after it has been analyzed are lacking.
We present InterVLS, which facilitates model understanding by discovering text-aligned concepts and measuring their influence with model-agnostic linear surrogates.
arXiv Detail & Related papers (2023-11-06T21:30:59Z)
- Towards Better Modeling with Missing Data: A Contrastive Learning-based Visual Analytics Perspective [7.577040836988683]
Missing data can pose a challenge for machine learning (ML) modeling.
Current approaches are categorized into feature imputation and label prediction.
This study proposes a Contrastive Learning framework to model observed data with missing values.
arXiv Detail & Related papers (2023-09-18T13:16:24Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should take several aspects of the produced adversarial instances into account.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Multivariate Data Explanation by Jumping Emerging Patterns Visualization [78.6363825307044]
We present VAX (multiVariate dAta eXplanation), a new VA method to support the identification and visual interpretation of patterns in multivariate data sets.
Unlike existing approaches, VAX uses the concept of Jumping Emerging Patterns to identify and aggregate several diversified patterns, producing explanations through logical combinations of data variables.
arXiv Detail & Related papers (2021-06-21T13:49:44Z)
- Unified Graph Structured Models for Video Understanding [93.72081456202672]
We propose a message passing graph neural network that explicitly models spatio-temporal relations.
We show how our method is able to more effectively model relationships between relevant entities in the scene.
arXiv Detail & Related papers (2021-03-29T14:37:35Z)
- DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models [36.50754934147469]
We exploit the potential of counterfactual explanations to understand and explore the behavior of machine learning models.
We design DECE, an interactive visualization system that helps understand and explore a model's decisions on individual instances and data subsets.
arXiv Detail & Related papers (2020-08-19T09:44:47Z)
- Learning Predictive Representations for Deformable Objects Using Contrastive Estimation [83.16948429592621]
We propose a new learning framework that jointly optimizes both the visual representation model and the dynamics model.
We show substantial improvements over standard model-based learning techniques across our rope and cloth manipulation suite.
arXiv Detail & Related papers (2020-03-11T17:55:15Z)
- ViCE: Visual Counterfactual Explanations for Machine Learning Models [13.94542147252982]
We present an interactive visual analytics tool, ViCE, that generates counterfactual explanations to contextualize and evaluate model decisions.
Results are effectively displayed in a visual interface where counterfactual explanations are highlighted and interactive methods are provided for users to explore the data and model.
arXiv Detail & Related papers (2020-03-05T04:43:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.