ProtoMask: Segmentation-Guided Prototype Learning
- URL: http://arxiv.org/abs/2510.00683v1
- Date: Wed, 01 Oct 2025 09:07:24 GMT
- Title: ProtoMask: Segmentation-Guided Prototype Learning
- Authors: Steffen Meinert, Philipp Schlinge, Nils Strodthoff, Martin Atzmueller
- Abstract summary: We study the use of prominent image segmentation foundation models to improve the truthfulness of the mapping between embedding and input space. To perceive the information of a prototypical image, we use the bounding box from each generated segmentation mask to crop the image. We conduct experiments on three popular fine-grained classification datasets with a wide set of metrics.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: XAI has gained considerable importance in recent years. Methods based on prototypical case-based reasoning have shown a promising improvement in explainability. However, these methods typically rely on additional post-hoc saliency techniques to explain the semantics of learned prototypes. Multiple critiques have been raised about the reliability and quality of such techniques. For this reason, we study the use of prominent image segmentation foundation models to improve the truthfulness of the mapping between embedding and input space. We aim to restrict the computation area of the saliency map to a predefined semantic image patch to reduce the uncertainty of such visualizations. To perceive the information of an entire image, we use the bounding box from each generated segmentation mask to crop the image. Each mask results in an individual input in our novel model architecture named ProtoMask. We conduct experiments on three popular fine-grained classification datasets with a wide set of metrics, providing a detailed overview on explainability characteristics. The comparison with other popular models demonstrates competitive performance and unique explainability features of our model. https://github.com/uos-sis/quanproto
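The mask-to-crop step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `mask_to_crop` helper, the synthetic image, and the mask are all made up for demonstration; in ProtoMask the masks would come from a segmentation foundation model.

```python
import numpy as np

def mask_to_crop(image, mask):
    """Crop an image to the bounding box of a binary segmentation mask."""
    ys, xs = np.nonzero(mask)          # pixel coordinates covered by the mask
    y0, y1 = ys.min(), ys.max() + 1    # row bounds (inclusive-exclusive)
    x0, x1 = xs.min(), xs.max() + 1    # column bounds (inclusive-exclusive)
    return image[y0:y1, x0:x1]

# Synthetic example: a 10x10 single-channel image with a mask
# covering rows 2-4 and columns 3-6.
image = np.arange(100).reshape(10, 10)
mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 3:7] = True

crop = mask_to_crop(image, mask)
print(crop.shape)  # (3, 4)
```

Each such crop becomes a separate network input, so the saliency map computed for a prototype is confined to one semantic image patch rather than the whole image.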
Related papers
- Interpretable Image Classification via Non-parametric Part Prototype Learning [14.390730075612248]
Classifying images with an interpretable decision-making process is a long-standing problem in computer vision. In recent years, Prototypical Part Networks have gained traction as an approach to self-explainable neural networks. We present a framework for part-based interpretable image classification that learns a set of semantically distinctive object parts for each class.
arXiv Detail & Related papers (2025-03-13T10:46:53Z)
- Correlation Weighted Prototype-based Self-Supervised One-Shot Segmentation of Medical Images [12.365801596593936]
Medical image segmentation is one of the domains where sufficient annotated data is not available.
We propose a prototype-based self-supervised one-way one-shot learning framework using pseudo-labels generated from superpixels.
We show that the proposed simple but potent framework performs at par with the state-of-the-art methods.
arXiv Detail & Related papers (2024-08-12T15:38:51Z)
- SSA-Seg: Semantic and Spatial Adaptive Pixel-level Classifier for Semantic Segmentation [11.176993272867396]
In this paper, we propose a novel Semantic and Spatial Adaptive pixel-level classifier (SSA-Seg) to address the challenges of semantic segmentation.
Specifically, we employ the coarse masks obtained from the fixed prototypes as a guide to adjust the fixed prototype towards the center of the semantic and spatial domains in the test image.
Results show that the proposed SSA-Seg significantly improves the segmentation performance of the baseline models with only a minimal increase in computational cost.
arXiv Detail & Related papers (2024-05-10T15:14:23Z)
- SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z)
- Explore In-Context Segmentation via Latent Diffusion Models [132.26274147026854]
In-context segmentation aims to segment objects using given reference images. Most existing approaches adopt metric learning or masked image modeling to build the correlation between visual prompts and input image queries. This work approaches the problem from a fresh perspective, unlocking the capability of the latent diffusion model for in-context segmentation.
arXiv Detail & Related papers (2024-03-14T17:52:31Z)
- Generalized Relevance Learning Grassmann Quantization [0.0]
A popular way to model image sets is subspaces, which form a manifold called the Grassmann manifold.
We extend the application of Generalized Relevance Learning Vector Quantization to deal with the Grassmann manifold.
We apply it to several recognition tasks including handwritten digit recognition, face recognition, activity recognition, and object recognition.
arXiv Detail & Related papers (2024-03-14T08:53:01Z)
- Delving into Shape-aware Zero-shot Semantic Segmentation [18.51025849474123]
We present shape-aware zero-shot semantic segmentation.
Inspired by classical spectral methods, we propose to leverage the eigenvectors of Laplacian matrices constructed with self-supervised pixel-wise features.
Our method sets new state-of-the-art performance for zero-shot semantic segmentation on both Pascal and COCO.
arXiv Detail & Related papers (2023-04-17T17:59:46Z)
- Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method, which uses masked images as counterfactual samples that help improve the robustness of the fine-tuning model.
arXiv Detail & Related papers (2023-03-06T11:51:28Z)
- Sanity checks and improvements for patch visualisation in prototype-based image classification [0.0]
We perform an in-depth analysis of the visualisation methods implemented in two popular self-explaining models for visual classification based on prototypes.
We first show that such methods do not correctly identify the regions of interest inside of the images, and therefore do not reflect the model behaviour.
We discuss the implications of our findings for other prototype-based models sharing the same visualisation method.
arXiv Detail & Related papers (2023-01-20T15:13:04Z)
- Rethinking Semantic Segmentation: A Prototype View [126.59244185849838]
We present a nonparametric semantic segmentation model based on non-learnable prototypes.
Our framework yields compelling results over several datasets.
We expect this work will provoke a rethink of the current de facto semantic segmentation model design.
arXiv Detail & Related papers (2022-03-28T21:15:32Z)
- CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z)
- Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.