DSEG-LIME: Improving Image Explanation by Hierarchical Data-Driven Segmentation
- URL: http://arxiv.org/abs/2403.07733v3
- Date: Tue, 08 Oct 2024 07:26:22 GMT
- Title: DSEG-LIME: Improving Image Explanation by Hierarchical Data-Driven Segmentation
- Authors: Patrick Knab, Sascha Marton, Christian Bartelt
- Abstract summary: LIME (Local Interpretable Model-agnostic Explanations) is a well-known XAI framework for image analysis.
We introduce DSEG-LIME (Data-Driven Segmentation LIME), featuring a data-driven segmentation for human-recognized feature generation.
We benchmark DSEG-LIME on pre-trained models with images from the ImageNet dataset.
- Score: 2.355460994057843
- License:
- Abstract: Explainable Artificial Intelligence is critical in unraveling decision-making processes in complex machine learning models. LIME (Local Interpretable Model-agnostic Explanations) is a well-known XAI framework for image analysis. It utilizes image segmentation to create features to identify relevant areas for classification. Consequently, poor segmentation can compromise the consistency of the explanation and undermine the importance of the segments, affecting the overall interpretability. Addressing these challenges, we introduce DSEG-LIME (Data-Driven Segmentation LIME), featuring: i) a data-driven segmentation for human-recognized feature generation, and ii) a hierarchical segmentation procedure through composition. We benchmark DSEG-LIME on pre-trained models with images from the ImageNet dataset - scenarios without domain-specific knowledge. The analysis includes a quantitative evaluation using established XAI metrics, complemented by a qualitative assessment through a user study. Our findings demonstrate that DSEG outperforms in most of the XAI metrics and enhances the alignment of explanations with human-recognized concepts, significantly improving interpretability. The code is available under: https://github.com/patrick-knab/DSEG-LIME
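To make concrete how LIME's segmentation step can be swapped for a data-driven one, here is a minimal sketch using the `lime` package's `segmentation_fn` hook. The classifier and the SLIC-based segmenter below are placeholders so the snippet runs; DSEG-LIME itself derives hierarchical segments from a foundation segmentation model (see the linked repository), not SLIC.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import slic

def classifier_fn(batch: np.ndarray) -> np.ndarray:
    """Placeholder classifier: returns (N, num_classes) probabilities.
    In practice this wraps the pre-trained ImageNet model being explained."""
    logits = np.random.rand(len(batch), 1000)
    return logits / logits.sum(axis=1, keepdims=True)

def data_driven_segments(image: np.ndarray) -> np.ndarray:
    """Stand-in for data-driven segmentation: returns an integer label map (H, W).
    DSEG-LIME composes segments from a foundation segmentation model
    hierarchically; SLIC is used here only so the sketch runs."""
    return slic(image, n_segments=25, compactness=10, start_label=1)

image = np.random.rand(224, 224, 3)  # dummy input image

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    classifier_fn,
    top_labels=1,
    num_samples=200,
    segmentation_fn=data_driven_segments,  # LIME's hook for custom segments
)
# Highlight the segments that contribute most to the top predicted label
_, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
```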
Related papers
- Underwater SONAR Image Classification and Analysis using LIME-based Explainable Artificial Intelligence [0.0]
This paper explores the application of the eXplainable Artificial Intelligence (XAI) tool LIME to interpret underwater image classification results.
An extensive analysis of transfer learning techniques for image classification using benchmark Convolutional Neural Network (CNN) architectures is carried out.
XAI techniques present the results in a more human-interpretable way, thus boosting confidence and reliability.
arXiv Detail & Related papers (2024-08-23T04:54:18Z) - SegXAL: Explainable Active Learning for Semantic Segmentation in Driving Scene Scenarios [1.2172320168050466]
We propose SegXAL, a novel Explainable Active Learning (XAL)-based semantic segmentation model.
SegXAL can (i) effectively utilize the unlabeled data, (ii) facilitate the "Human-in-the-loop" paradigm, and (iii) augment the model decisions in an interpretable way.
In particular, we investigate the application of the SegXAL model for semantic segmentation in driving scene scenarios.
arXiv Detail & Related papers (2024-08-08T14:19:11Z) - Extending CAM-based XAI methods for Remote Sensing Imagery Segmentation [7.735470452949379]
We introduce a new XAI evaluation methodology and metric based on "Entropy" to measure the model uncertainty.
We show that using Entropy to monitor the model uncertainty in segmenting the pixels within the target class is more suitable.
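As a rough sketch of the kind of entropy-based uncertainty measure described above (the exact metric is defined in that paper, so the details here are an assumption), per-pixel predictive entropy can be averaged over the region the model assigns to the target class:

```python
import numpy as np

def mean_entropy_in_target_class(probs: np.ndarray, target_class: int,
                                 eps: float = 1e-12) -> float:
    """Average per-pixel predictive entropy over pixels predicted as `target_class`.

    probs: (H, W, C) per-pixel softmax probabilities from a segmentation model.
    Higher values indicate greater model uncertainty inside the target-class region.
    """
    # Per-pixel entropy: H(x) = -sum_c p_c(x) * log p_c(x)
    entropy = -np.sum(probs * np.log(probs + eps), axis=-1)   # (H, W)
    # Restrict to the region the model assigns to the target class
    mask = probs.argmax(axis=-1) == target_class
    return float(entropy[mask].mean()) if mask.any() else float("nan")
```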
arXiv Detail & Related papers (2023-10-03T07:01:23Z) - Trainable Noise Model as an XAI evaluation method: application on Sobol for remote sensing image segmentation [0.5735035463793009]
This paper adapts the gradient-free Sobol XAI method for semantic segmentation.
A benchmark analysis is conducted to evaluate and compare the performance of three XAI methods.
arXiv Detail & Related papers (2023-10-03T06:51:48Z) - LISA: Reasoning Segmentation via Large Language Model [68.24075852136761]
We propose a new segmentation task -- reasoning segmentation.
The task is designed to output a segmentation mask given a complex and implicit query text.
We present LISA: Large Language Instructed Segmentation Assistant, which inherits the language generation capabilities of multimodal Large Language Models.
arXiv Detail & Related papers (2023-08-01T17:50:17Z) - Extracting Semantic Knowledge from GANs with Unsupervised Learning [65.32631025780631]
Generative Adversarial Networks (GANs) encode semantics in feature maps in a linearly separable form.
We propose a novel clustering algorithm, named KLiSH, which leverages this linear separability to cluster GANs' features.
KLiSH succeeds in extracting fine-grained semantics of GANs trained on datasets of various objects.
arXiv Detail & Related papers (2022-11-30T03:18:16Z) - Distilling Ensemble of Explanations for Weakly-Supervised Pre-Training of Image Segmentation Models [54.49581189337848]
We propose a method to enable the end-to-end pre-training for image segmentation models based on classification datasets.
The proposed method leverages a weighted segmentation learning procedure to pre-train the segmentation network en masse.
Experiment results show that, with ImageNet accompanied by PSSL as the source dataset, the proposed end-to-end pre-training strategy successfully boosts the performance of various segmentation models.
arXiv Detail & Related papers (2022-07-04T13:02:32Z) - GLIME: A new graphical methodology for interpretable model-agnostic explanations [0.0]
This paper contributes to the development of a novel graphical explainability tool for black box models.
The proposed XAI methodology, termed gLIME, provides graphical model-agnostic explanations either at the global scale (for the entire dataset) or the local scale (for specific data points).
arXiv Detail & Related papers (2021-07-21T08:06:40Z) - Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation [49.90178055521207]
This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation.
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths.
In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes.
arXiv Detail & Related papers (2020-12-09T12:40:13Z) - Pairwise Relation Learning for Semi-supervised Gland Segmentation [90.45303394358493]
We propose a pairwise relation-based semi-supervised (PRS2) model for gland segmentation on histology images.
This model consists of a segmentation network (S-Net) and a pairwise relation network (PR-Net).
We evaluate our model against five recent methods on the GlaS dataset and three recent methods on the CRAG dataset.
arXiv Detail & Related papers (2020-08-06T15:02:38Z) - Gradient-Induced Co-Saliency Detection [81.54194063218216]
Co-saliency detection (Co-SOD) aims to segment the common salient foreground in a group of relevant images.
In this paper, inspired by human behavior, we propose a gradient-induced co-saliency detection method.
arXiv Detail & Related papers (2020-04-28T08:40:55Z)