Beyond Pixels: Enhancing LIME with Hierarchical Features and Segmentation Foundation Models
- URL: http://arxiv.org/abs/2403.07733v4
- Date: Mon, 03 Feb 2025 10:44:34 GMT
- Title: Beyond Pixels: Enhancing LIME with Hierarchical Features and Segmentation Foundation Models
- Authors: Patrick Knab, Sascha Marton, Christian Bartelt
- Abstract summary: LIME is a popular XAI framework for unraveling decision-making processes in vision machine-learning models.
We introduce the DSEG-LIME (Data-Driven Segmentation LIME) framework, featuring a data-driven segmentation for human-recognized feature generation.
Our findings demonstrate that DSEG yields the best results on several XAI metrics on pre-trained ImageNet models.
- Score: 2.355460994057843
- Abstract: LIME (Local Interpretable Model-agnostic Explanations) is a popular XAI framework for unraveling decision-making processes in vision machine-learning models. The technique utilizes image segmentation methods to identify fixed regions for calculating feature importance scores as explanations. Poor segmentation can therefore weaken the explanation and reduce the importance of segments, ultimately affecting the overall clarity of interpretation. To address these challenges, we introduce the DSEG-LIME (Data-Driven Segmentation LIME) framework, featuring: i) data-driven segmentation for human-recognized feature generation through foundation model integration, and ii) user-steered granularity in the hierarchical segmentation procedure through composition. Our findings demonstrate that DSEG yields the best results on several XAI metrics on pre-trained ImageNet models and improves the alignment of explanations with human-recognized concepts. The code is available at: https://github.com/patrick-knab/DSEG-LIME
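LIME's core loop — perturb superpixels, query the model, fit a linear surrogate whose weights become per-segment importance scores — is where the segmentation mask matters: it is the plug-in point at which DSEG-LIME substitutes foundation-model segments for classical superpixels. The sketch below is a minimal illustration of that loop, not the paper's implementation; `predict_fn`, the grey-out perturbation, and the least-squares surrogate are simplifying assumptions.

```python
import numpy as np

def lime_explain(image, predict_fn, segments, n_samples=200, seed=0):
    """Minimal LIME-style sketch: perturb segments, fit a linear surrogate.

    `segments` is an integer mask (same H x W as `image`) assigning each
    pixel to a superpixel -- the slot where DSEG-LIME would plug in a
    foundation-model segmentation instead of e.g. quickshift.
    """
    rng = np.random.default_rng(seed)
    seg_ids = np.unique(segments)
    k = len(seg_ids)
    # Binary vector per sample: which segments stay "on".
    z = rng.integers(0, 2, size=(n_samples, k))
    z[0] = 1  # keep one unperturbed copy of the image
    preds = np.empty(n_samples)
    for i in range(n_samples):
        perturbed = image.copy()
        for j, sid in enumerate(seg_ids):
            if z[i, j] == 0:
                perturbed[segments == sid] = 0.0  # grey out the segment
        preds[i] = predict_fn(perturbed)
    # Least-squares surrogate: one importance weight per segment + intercept.
    X = np.hstack([z, np.ones((n_samples, 1))])
    w, *_ = np.linalg.lstsq(X, preds, rcond=None)
    return dict(zip(seg_ids.tolist(), w[:k]))
```

On a toy 4x4 image split into quadrants, with a "model" that only looks at the top-left quadrant, the surrogate assigns essentially all importance to that segment — which is exactly why a segmentation that fails to isolate the relevant object degrades the explanation.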
Related papers
- Freestyle Sketch-in-the-Loop Image Segmentation [116.1810651297801]
We introduce a "sketch-in-the-loop" image segmentation framework, enabling the segmentation of visual concepts partially, completely, or in groupings.
This framework capitalises on the synergy between sketch-based image retrieval models and large-scale pre-trained models.
Our purpose-made augmentation strategy enhances the versatility of our sketch-guided mask generation, allowing segmentation at multiple levels.
arXiv Detail & Related papers (2025-01-27T13:07:51Z) - SegXAL: Explainable Active Learning for Semantic Segmentation in Driving Scene Scenarios [1.2172320168050466]
We propose SegXAL, a novel Explainable Active Learning (XAL)-based semantic segmentation model.
SegXAL can (i) effectively utilize the unlabeled data, (ii) facilitate the "Human-in-the-loop" paradigm, and (iii) augment the model decisions in an interpretable way.
In particular, we investigate the application of the SegXAL model for semantic segmentation in driving scene scenarios.
arXiv Detail & Related papers (2024-08-08T14:19:11Z) - Appearance-Based Refinement for Object-Centric Motion Segmentation [85.2426540999329]
We introduce an appearance-based refinement method that leverages temporal consistency in video streams to correct inaccurate flow-based proposals.
Our approach involves a sequence-level selection mechanism that identifies accurate flow-predicted masks as exemplars.
Its performance is evaluated on multiple video segmentation benchmarks, including DAVIS, YouTube, SegTrackv2, and FBMS-59.
arXiv Detail & Related papers (2023-12-18T18:59:51Z) - Extending CAM-based XAI methods for Remote Sensing Imagery Segmentation [7.735470452949379]
We introduce a new XAI evaluation methodology and metric based on "Entropy" to measure the model uncertainty.
We show that using Entropy to monitor the model uncertainty in segmenting the pixels within the target class is more suitable.
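An entropy-based uncertainty measure of this kind is straightforward to compute from per-pixel class probabilities; the sketch below is a generic illustration (function name and array shapes are assumptions, not the paper's code):

```python
import numpy as np

def pixel_entropy(probs):
    """Per-pixel Shannon entropy of predicted class probabilities.

    probs: array of shape (H, W, C), summing to 1 over the class axis C.
    Higher entropy means higher model uncertainty at that pixel.
    """
    p = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return -(p * np.log(p)).sum(axis=-1)
```

A pixel with a uniform distribution over C classes attains the maximum entropy log(C), while a confident one-hot prediction is close to zero, so averaging this map over the pixels of the target class gives a scalar uncertainty monitor.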
arXiv Detail & Related papers (2023-10-03T07:01:23Z) - Exploring Open-Vocabulary Semantic Segmentation without Human Labels [76.15862573035565]
We present ZeroSeg, a novel method that leverages existing pretrained vision-language (VL) models to train semantic segmentation models without human labels.
ZeroSeg distills the visual concepts learned by VL models into a set of segment tokens, each summarizing a localized region of the target image.
Our approach achieves state-of-the-art performance when compared to other zero-shot segmentation methods under the same training data.
arXiv Detail & Related papers (2023-06-01T08:47:06Z) - Edge-aware Plug-and-play Scheme for Semantic Segmentation [4.297988192695948]
The proposed edge-aware scheme can be seamlessly integrated into any state-of-the-art (SOTA) model with zero modification.
Experimental results confirm this plug-and-play property.
arXiv Detail & Related papers (2023-03-18T02:17:37Z) - Distilling Ensemble of Explanations for Weakly-Supervised Pre-Training
of Image Segmentation Models [54.49581189337848]
We propose a method to enable the end-to-end pre-training for image segmentation models based on classification datasets.
The proposed method leverages a weighted segmentation learning procedure to pre-train the segmentation network en masse.
Experiment results show that, with ImageNet accompanied by PSSL as the source dataset, the proposed end-to-end pre-training strategy successfully boosts the performance of various segmentation models.
arXiv Detail & Related papers (2022-07-04T13:02:32Z) - GLIME: A new graphical methodology for interpretable model-agnostic
explanations [0.0]
This paper contributes to the development of a novel graphical explainability tool for black box models.
The proposed XAI methodology, termed gLIME, provides graphical model-agnostic explanations either at the global scale (for the entire dataset) or the local scale (for specific data points).
arXiv Detail & Related papers (2021-07-21T08:06:40Z) - Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation [49.90178055521207]
This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation.
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths.
In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes.
arXiv Detail & Related papers (2020-12-09T12:40:13Z) - Unsupervised Learning Consensus Model for Dynamic Texture Videos
Segmentation [12.462608802359936]
We present an effective unsupervised learning consensus model (ULCM) for the segmentation of dynamic texture videos.
In the proposed model, the set of values of the requantized local binary patterns (LBP) histogram around the pixel to be classified are used as features.
Experiments conducted on the challenging SynthDB dataset show that ULCM is significantly faster, simpler to implement, and has few parameters.
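As a rough illustration of the LBP features mentioned above (the exact neighbourhood, ordering, and requantization used in ULCM may differ), a basic 8-neighbour LBP code for a single pixel can be computed as:

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour local binary pattern code for the centre of a 3x3 patch.

    Each neighbour whose value is >= the centre contributes one bit; the
    resulting 0-255 codes are typically histogrammed over a window around
    the pixel to form its feature vector.
    """
    centre = patch[1, 1]
    # Clockwise from the top-left neighbour.
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(1 << i for i, n in enumerate(neighbours) if n >= centre)
```

Histogramming these codes over a local window, then requantizing the histogram, yields the per-pixel feature set the summary describes.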
arXiv Detail & Related papers (2020-06-29T16:40:59Z) - Spatial Pyramid Based Graph Reasoning for Semantic Segmentation [67.47159595239798]
We apply graph convolution into the semantic segmentation task and propose an improved Laplacian.
The graph reasoning is directly performed in the original feature space organized as a spatial pyramid.
We achieve comparable performance with advantages in computational and memory overhead.
arXiv Detail & Related papers (2020-03-23T12:28:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.