"Why Here and Not There?" -- Diverse Contrasting Explanations of
Dimensionality Reduction
- URL: http://arxiv.org/abs/2206.07391v1
- Date: Wed, 15 Jun 2022 08:54:39 GMT
- Title: "Why Here and Not There?" -- Diverse Contrasting Explanations of
Dimensionality Reduction
- Authors: André Artelt, Alexander Schulz, Barbara Hammer
- Abstract summary: We introduce the concept of contrasting explanations for dimensionality reduction.
We apply a realization of this concept to the specific application of explaining two-dimensional data visualization.
- Score: 75.97774982432976
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dimensionality reduction is a popular preprocessing step and a widely
used tool in data mining. Transparency, which is usually achieved by means of
explanations, is nowadays a widely accepted and crucial requirement of
machine-learning-based systems like classifiers and recommender systems.
However, the transparency of dimensionality reduction and other data mining
tools has not received much attention yet, although it is crucial to understand
their behavior: in particular, practitioners might want to understand why a
specific sample was mapped to a specific location. In order to (locally)
understand the behavior of a given dimensionality reduction method, we
introduce the abstract concept of contrasting explanations for dimensionality
reduction, and apply a realization of this concept to the specific application
of explaining two-dimensional data visualization.
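The abstract does not include code, but the core idea of a contrasting ("why here and not there?") explanation can be sketched for a simple projection with an explicit forward mapping, such as PCA: search for a small, sparse change to the input sample so that its low-dimensional embedding lands at a user-chosen contrast location instead of its original one. The snippet below is a minimal illustration of that idea under those assumptions (Iris data, PCA, a hand-picked target offset and penalty weight), not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

# Fit a simple 2D projection (stand-in for any DR method with a forward mapping).
X = load_iris().data
pca = PCA(n_components=2).fit(X)

x_orig = X[0]                                 # sample to explain
y_orig = pca.transform([x_orig])[0]
y_target = y_orig + np.array([2.0, 0.0])      # contrasting location: "why not over there?"

def objective(x, l1_weight=0.5):
    # Land close to the requested 2D location while changing the sample as little
    # (and as sparsely) as possible; the changed features form the explanation.
    y = pca.transform([x])[0]
    return np.sum((y - y_target) ** 2) + l1_weight * np.sum(np.abs(x - x_orig))

res = minimize(objective, x_orig, method="Nelder-Mead")
delta = res.x - x_orig
print("feature changes needed to move the sample:", np.round(delta, 3))
print("new embedding:", np.round(pca.transform([res.x])[0], 3))
```

The recovered feature changes, i.e. which inputs must move and by how much to reach the contrasting location, play the role of the explanation.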
Related papers
- Distributional Reduction: Unifying Dimensionality Reduction and Clustering with Gromov-Wasserstein [56.62376364594194]
Unsupervised learning aims to capture the underlying structure of potentially large and high-dimensional datasets.
In this work, we revisit these approaches under the lens of optimal transport and exhibit relationships with the Gromov-Wasserstein problem.
This unveils a new general framework, called distributional reduction, that recovers DR and clustering as special cases and allows addressing them jointly within a single optimization problem.
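As a loose illustration of this optimal-transport view, the sketch below uses the POT library to compute a Gromov-Wasserstein coupling between the pairwise distances of a dataset and those of a few fixed low-dimensional prototypes; the coupling simultaneously assigns samples to prototypes (a clustering) relative to a 2D geometry (a reduction). The prototype layout, dataset, and parameters are placeholder assumptions and this is not the paper's joint optimization.

```python
import numpy as np
import ot  # POT: Python Optimal Transport
from sklearn.datasets import make_blobs
from sklearn.metrics import pairwise_distances

# High-dimensional data and a few fixed 2D prototype locations (assumed, for illustration).
X, _ = make_blobs(n_samples=60, n_features=10, centers=3, random_state=0)
Z = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])

C1 = pairwise_distances(X)          # geometry of the data
C2 = pairwise_distances(Z)          # geometry of the embedding prototypes
p = np.full(len(X), 1.0 / len(X))   # uniform weights on samples
q = np.full(len(Z), 1.0 / len(Z))   # uniform weights on prototypes

# Gromov-Wasserstein coupling between the two metric spaces.
T = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun="square_loss")

# A hard assignment of each sample to a prototype acts like a clustering.
labels = T.argmax(axis=1)
print("cluster sizes:", np.bincount(labels))
```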
arXiv Detail & Related papers (2024-02-03T19:00:19Z)
- DimenFix: A novel meta-dimensionality reduction method for feature preservation [64.0476282000118]
We propose a novel meta-method, DimenFix, which can be operated upon any base dimensionality reduction method that involves a gradient-descent-like process.
By allowing users to define the importance of different features, which is then taken into account during dimensionality reduction, DimenFix creates new possibilities to visualize and understand a given dataset.
arXiv Detail & Related papers (2022-11-30T05:35:22Z)
- Local Explanation of Dimensionality Reduction [9.202274047046151]
We introduce LXDR, a technique capable of providing local interpretations of the output of Dimensionality Reduction techniques.
Experiment results and two LXDR use case examples are presented to evaluate its usefulness.
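The summary does not spell out the mechanism, but a common way to obtain such local interpretations is a local surrogate: perturb the instance, map the perturbations through the (possibly black-box) DR method, and fit a linear model whose coefficients indicate how each input feature locally influences the 2D coordinates. The sketch below shows only this generic idea (with PCA as a stand-in DR method and arbitrary perturbation scales), not LXDR's exact procedure.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

X = load_wine().data
dr = PCA(n_components=2).fit(X)          # stand-in for any DR method with .transform
x0 = X[5]                                # instance whose placement we want to explain

# Perturb the instance locally and observe where the perturbations land in 2D.
rng = np.random.default_rng(0)
neighbors = x0 + rng.normal(scale=0.05 * X.std(axis=0), size=(500, X.shape[1]))
embedded = dr.transform(neighbors)

# Local linear surrogate: one coefficient per (embedding dimension, feature) pair.
surrogate = LinearRegression().fit(neighbors, embedded)
influence = surrogate.coef_              # shape (2, n_features)
print("features most influencing embedding dim 1:", np.argsort(-np.abs(influence[0]))[:3])
```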
arXiv Detail & Related papers (2022-04-29T10:56:12Z)
- The Two Dimensions of Worst-case Training and the Integrated Effect for Out-of-domain Generalization [95.34898583368154]
We propose a new, simple yet effective generalization approach for training machine learning models.
We name our method W2D, following the concept of "Worst-case along Two Dimensions".
arXiv Detail & Related papers (2022-04-09T04:14:55Z)
- Explainers in the Wild: Making Surrogate Explainers Robust to Distortions through Perception [77.34726150561087]
We propose a methodology to evaluate the effect of distortions in explanations by embedding perceptual distances.
We generate explanations for images in the ImageNet-C dataset and demonstrate how using perceptual distances in the surrogate explainer creates more coherent explanations for the distorted and reference images.
arXiv Detail & Related papers (2021-02-22T12:38:53Z)
- Efficient computation of contrastive explanations [8.132423340684568]
We study the relation of contrastive and counterfactual explanations.
We propose a 2-phase algorithm for efficiently computing (plausible) pertinent positives of many standard machine learning models.
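For a concrete sense of what such an explanation computes, the toy sketch below finds a contrasting example for a linear classifier by searching for the smallest input change that flips the prediction. The authors' two-phase algorithm exploits model-specific structure (e.g., convex programs) rather than this generic derivative-free optimization, and the dataset, penalty weight, and optimizer here are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)        # standardize so distances are comparable
clf = LogisticRegression(max_iter=1000).fit(X, y)

x0 = X[0]
target = 1 - clf.predict([x0])[0]            # the contrasting ("why not?") class

def objective(x, weight=10.0):
    # Change the input as little as possible while pushing probability mass
    # towards the contrasting class.
    p_target = clf.predict_proba([x])[0][target]
    return np.sum((x - x0) ** 2) + weight * (1.0 - p_target)

res = minimize(objective, x0, method="Powell")
print("prediction flipped:", bool(clf.predict([res.x])[0] == target))
print("most-changed features:", np.argsort(-np.abs(res.x - x0))[:3])
```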
arXiv Detail & Related papers (2020-10-06T11:50:28Z)
- Overcoming the curse of dimensionality with Laplacian regularization in semi-supervised learning [80.20302993614594]
We provide a statistical analysis to overcome drawbacks of Laplacian regularization.
We unveil a large body of spectral filtering methods that exhibit desirable behaviors.
We provide realistic computational guidelines in order to make our method usable with large amounts of data.
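To make the setting concrete, here is a minimal graph-based semi-supervised sketch: build a k-nearest-neighbour graph, form its Laplacian, and solve a Laplacian-regularized least-squares problem so that predictions vary smoothly over the graph. The spectral-filtering refinements analysed in the paper go beyond this plain formulation; the two-moons data, kernel of the graph, k, and regularization strength below are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neighbors import kneighbors_graph

# Two-moons data with only a handful of labelled points.
X, y = make_moons(n_samples=200, noise=0.1, shuffle=False, random_state=0)
labeled = np.zeros(len(X), dtype=bool)
labeled[:5] = True        # a few labelled samples from class 0
labeled[-5:] = True       # a few labelled samples from class 1

# Symmetric kNN adjacency and (unnormalized) graph Laplacian L = D - W.
W = kneighbors_graph(X, n_neighbors=10, mode="connectivity").toarray()
W = np.maximum(W, W.T)
L = np.diag(W.sum(axis=1)) - W

# Laplacian-regularized least squares:
# minimize ||M f - M y||^2 + lam * f^T L f, where M masks out unlabelled points.
lam = 0.1
M = np.diag(labeled.astype(float))
f = np.linalg.solve(M + lam * L, M @ (2 * y - 1))   # labels encoded as +/- 1
pred = (f > 0).astype(int)
print("accuracy on unlabelled points:", (pred[~labeled] == y[~labeled]).mean())
```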
arXiv Detail & Related papers (2020-09-09T14:28:54Z)
- Interpretable Representations in Explainable AI: From Theory to Practice [7.031336702345381]
Interpretable representations are the backbone of many explainers that target black-box predictive systems.
We study properties of interpretable representations that encode presence and absence of human-comprehensible concepts.
arXiv Detail & Related papers (2020-08-16T21:44:03Z)
- Supervised Visualization for Data Exploration [9.742277703732187]
We describe a novel supervised visualization technique based on random forest proximities and diffusion-based dimensionality reduction.
Our approach is robust to noise and parameter tuning, thus making it simple to use while producing reliable visualizations for data exploration.
arXiv Detail & Related papers (2020-06-15T19:10:17Z)
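The recipe in that last entry can be approximated in a few lines: derive a proximity matrix from how often a random forest routes two samples to the same leaf, turn it into a diffusion operator, and use its leading non-trivial eigenvectors as 2D coordinates. This is only a rough sketch of the general idea (random-forest proximities followed by a diffusion-map-style embedding) on a subsample of the digits data, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier

X, y = load_digits(return_X_y=True)
X, y = X[:500], y[:500]                           # subsample to keep the sketch fast

# Random-forest proximities: fraction of trees in which two samples share a leaf.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
leaves = rf.apply(X)                              # (n_samples, n_trees) leaf indices
prox = np.zeros((len(X), len(X)))
for t in range(leaves.shape[1]):
    prox += leaves[:, t][:, None] == leaves[:, t][None, :]
prox /= leaves.shape[1]

# Diffusion-map-style embedding: row-normalize and keep the top non-trivial eigenvectors.
P = prox / prox.sum(axis=1, keepdims=True)
eigvals, eigvecs = np.linalg.eig(P)
order = np.argsort(-eigvals.real)
coords = eigvecs[:, order[1:3]].real              # skip the trivial constant eigenvector
print("2D embedding shape:", coords.shape)
```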
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.