Local Explanation of Dimensionality Reduction
- URL: http://arxiv.org/abs/2204.14012v1
- Date: Fri, 29 Apr 2022 10:56:12 GMT
- Title: Local Explanation of Dimensionality Reduction
- Authors: Avraam Bardos, Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas
- Abstract summary: We introduce LXDR, a technique capable of providing local interpretations of the output of Dimensionality Reduction techniques.
Experimental results and two LXDR use-case examples are presented to evaluate its usefulness.
- Score: 9.202274047046151
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dimensionality reduction (DR) is a popular method for preparing and analyzing high-dimensional data. Reduced data representations are less computationally intensive and easier to manage and visualize, while retaining a significant percentage of the original information. Despite these advantages, reduced representations can be difficult or impossible to interpret in most circumstances, especially when the DR approach provides no further information about which features of the original space led to their construction. This problem is addressed by Interpretable Machine Learning, a subfield of Explainable Artificial Intelligence that tackles the opacity of machine learning models. However, current research on Interpretable Machine Learning has focused on supervised tasks, leaving unsupervised tasks like Dimensionality Reduction largely unexplored. In this paper, we introduce LXDR, a technique capable of providing local interpretations of the output of DR techniques. Experimental results and two LXDR use-case examples are presented to evaluate its usefulness.
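The abstract includes no code, but the general recipe behind local explanations of a DR output is a LIME-style local surrogate: fit a simple model on a neighborhood of the instance and read its coefficients as local feature contributions. The sketch below is illustrative only, assuming scikit-learn, with PCA standing in for an arbitrary DR technique and a plain neighborhood linear regression standing in for LXDR's surrogate; explain_instance and its parameters are hypothetical names, not the authors' API.
```python
# Minimal sketch of a local surrogate for a DR mapping (illustrative, not LXDR itself).
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import NearestNeighbors

X = load_iris().data                  # original high-dimensional data (150 x 4)
reducer = PCA(n_components=2).fit(X)  # any DR technique exposing transform()
Z = reducer.transform(X)              # the reduced representation to be explained

def explain_instance(i, n_neighbors=20):
    """Fit a local linear surrogate of the DR mapping around instance i."""
    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(X)
    _, idx = nn.kneighbors(X[i:i + 1])  # neighborhood in the original space
    surrogate = LinearRegression().fit(X[idx[0]], Z[idx[0]])
    # coef_ has shape (n_components, n_features): the local contribution of
    # each original feature to each reduced dimension near instance i
    return surrogate.coef_

print(explain_instance(0))
```
Each row of the returned coefficient matrix indicates which original features locally drive one reduced dimension near the chosen instance, which is the kind of local interpretation the abstract describes.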
Related papers
- RAZOR: Refining Accuracy by Zeroing Out Redundancies [4.731404257629232]
In the deep learning domain, the utility of additional data is contingent on its informativeness.
We propose RAZOR, a novel instance selection technique designed to extract a significantly smaller yet sufficiently informative subset from a larger set of instances.
Unlike many techniques in the literature, RAZOR is capable of operating in both supervised and unsupervised settings.
arXiv Detail & Related papers (2024-10-18T08:04:31Z)
- DimVis: Interpreting Visual Clusters in Dimensionality Reduction With Explainable Boosting Machine [3.2748787252933442]
DimVis is a tool that employs supervised Explainable Boosting Machine (EBM) models as an interpretation assistant for DR projections.
Our tool facilitates high-dimensional data analysis by providing an interpretation of feature relevance in visual clusters (see the sketch after this list).
arXiv Detail & Related papers (2024-02-10T04:50:36Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- Explainable Attention for Few-shot Learning and Beyond [7.044125601403848]
We introduce a novel framework for achieving explainable hard attention finding, specifically tailored for few-shot learning scenarios.
Our approach employs deep reinforcement learning to implement the concept of hard attention, directly impacting raw input data.
arXiv Detail & Related papers (2023-10-11T18:33:17Z)
- Sequential Action-Induced Invariant Representation for Reinforcement Learning [1.2046159151610263]
Accurately learning task-relevant state representations from high-dimensional observations with visual distractions is a challenging problem in visual reinforcement learning.
We propose a Sequential Action-induced invariant Representation (SAR) method, in which the encoder is optimized by an auxiliary learner to preserve only the components that follow the control signals of sequential actions.
arXiv Detail & Related papers (2023-09-22T05:31:55Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Robust Saliency-Aware Distillation for Few-shot Fine-grained Visual Recognition [57.08108545219043]
Recognizing novel sub-categories with scarce samples is an essential and challenging research topic in computer vision.
Existing literature addresses this challenge by employing local-based representation approaches.
This article proposes a novel model, Robust Saliency-aware Distillation (RSaD), for few-shot fine-grained visual recognition.
arXiv Detail & Related papers (2023-05-12T00:13:17Z)
- "Why Here and Not There?" -- Diverse Contrasting Explanations of Dimensionality Reduction [75.97774982432976]
We introduce the concept of contrasting explanations for dimensionality reduction.
We apply a realization of this concept to the specific application of explaining two-dimensional data visualization.
arXiv Detail & Related papers (2022-06-15T08:54:39Z)
- Overcoming the curse of dimensionality with Laplacian regularization in semi-supervised learning [80.20302993614594]
We provide a statistical analysis to overcome drawbacks of Laplacian regularization.
We unveil a large body of spectral filtering methods that exhibit desirable behaviors.
We provide realistic computational guidelines in order to make our method usable with large amounts of data.
arXiv Detail & Related papers (2020-09-09T14:28:54Z)
- A Survey on Large-scale Machine Learning [67.6997613600942]
Machine learning can provide deep insights into data, allowing machines to make high-quality predictions.
Most sophisticated machine learning approaches suffer from huge time costs when operating on large-scale data.
Large-scale Machine Learning aims to learn patterns from big data efficiently, with comparable performance.
arXiv Detail & Related papers (2020-08-10T06:07:52Z)
- The Dilemma Between Data Transformations and Adversarial Robustness for Time Series Application Systems [1.2056495277232115]
Adversarial examples, or nearly indistinguishable inputs created by an attacker, significantly reduce machine learning accuracy.
This work explores how data transformations may impact an adversary's ability to create effective adversarial samples on a recurrent neural network.
A data transformation technique reduces the vulnerability to adversarial examples only if it approximates the dataset's intrinsic dimension.
arXiv Detail & Related papers (2020-06-18T22:43:37Z)
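As a companion to the DimVis entry above, here is a rough sketch of the cluster-explanation idea its summary describes: train a supervised Explainable Boosting Machine to separate a visual cluster in the projection from the remaining points, then read off feature relevance. This assumes the interpret package; the KMeans step stands in for an interactively selected cluster, and all names here are illustrative assumptions, not the DimVis implementation.
```python
# Sketch of explaining a visual cluster in a DR projection with an EBM (assumed workflow).
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data
Z = PCA(n_components=2).fit_transform(X)        # the 2-D projection under inspection
# Stand-in for an interactively selected visual cluster:
cluster = KMeans(n_clusters=3, n_init=10).fit_predict(Z)
y = (cluster == 0).astype(int)                  # inside the chosen cluster vs. the rest

ebm = ExplainableBoostingClassifier().fit(X, y)  # supervised surrogate on original features
overall = ebm.explain_global().data()            # overall term importances
for name, score in zip(overall["names"], overall["scores"]):
    print(f"{name}: {score:.3f}")
```
The printed scores indicate which original features best distinguish the selected cluster, i.e., the feature relevance the DimVis summary refers to.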
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.