Smooth-edged Perturbations Improve Perturbation-based Image Explanations
- URL: http://arxiv.org/abs/2409.04116v1
- Date: Fri, 6 Sep 2024 08:33:26 GMT
- Title: Smooth-edged Perturbations Improve Perturbation-based Image Explanations
- Authors: Gustav Grund Pihlgren, Kary Främling
- Abstract summary: Perturbation-based post-hoc image explanation methods are commonly used to explain image prediction models.
Due to the intractability of perturbing each pixel individually, images are typically attributed to larger segments.
This work tests many combinations of mask sampling, segmentation techniques, smoothing, and attribution calculation.
- Score: 1.1663475941322277
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Perturbation-based post-hoc image explanation methods are commonly used to explain image prediction models by perturbing parts of the input to measure how those parts affect the output. Due to the intractability of perturbing each pixel individually, images are typically attributed to larger segments. The Randomized Input Sampling for Explanations (RISE) method solved this issue by using smooth perturbation masks. While this method has proven effective and popular, it has not been investigated which parts of the method are responsible for its success. This work tests many combinations of mask sampling, segmentation techniques, smoothing, and attribution calculation. The results show that the RISE-style pixel attribution is beneficial to all evaluated methods. Furthermore, it is shown that attribution calculation is the least impactful parameter. The implementation of this work is available online: https://github.com/guspih/post-hoc-image-perturbation.
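As a rough illustration of the RISE-style attribution discussed in the abstract, the sketch below samples coarse binary masks, upsamples them bilinearly so the perturbation edges are smooth, and averages the masks weighted by the model's score on each masked image. The function name, the `predict` interface, and the default parameters are illustrative assumptions; the authors' actual implementation is in the repository linked above.
```python
import numpy as np
from scipy.ndimage import zoom  # bilinear-style upsampling of the coarse masks

def rise_style_attribution(image, predict, n_masks=500, grid=7, p_keep=0.5, rng=None):
    """Rough sketch of RISE-style smooth-mask attribution (illustrative, not the released code).

    image:   HxWxC float array in [0, 1]
    predict: callable mapping a batch of images to the target-class score per image (assumed interface)
    Returns an HxW saliency map where higher values mark pixels whose presence
    correlates with higher scores for the target class.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    saliency = np.zeros((h, w), dtype=np.float64)

    for _ in range(n_masks):
        # Coarse binary mask: each grid cell is kept with probability p_keep.
        coarse = (rng.random((grid, grid)) < p_keep).astype(np.float64)
        # Smooth upsampling yields soft, smooth-edged perturbations instead of
        # hard segment boundaries (the ingredient studied in the paper).
        mask = zoom(coarse, (h / grid, w / grid), order=1)[:h, :w]
        score = predict((image * mask[..., None])[None])[0]
        saliency += score * mask  # weight each mask by the score it produced

    return saliency / n_masks
```
A full RISE implementation additionally shifts each upsampled mask by a random offset and normalizes by the expected mask coverage; the sketch omits both for brevity.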
Related papers
- Random Walk on Pixel Manifolds for Anomaly Segmentation of Complex Driving Scenes [1.3581810800092389]
We propose a novel method called Random Walk on Pixel Manifolds (RWPM).
RWPM utilizes random walks to reveal the intrinsic relationships among pixels to refine the pixel embeddings.
Our experiments show that RWPM consistently improves the performance of existing anomaly segmentation methods.
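The exact RWPM formulation is in the paper above; purely as a generic illustration of refining pixel embeddings with a random walk over an affinity matrix, here is a minimal sketch (the update rule, mixing weight, and temperature are assumptions, not the published algorithm):
```python
import numpy as np

def random_walk_refine(embeddings, n_steps=10, alpha=0.9, tau=0.1):
    """Generic random-walk smoothing of pixel embeddings (illustrative only).

    embeddings: (N, D) array, one D-dimensional embedding per pixel.
    A row-stochastic affinity matrix acts as the walk's transition matrix;
    each step mixes every embedding with those of similar pixels.
    In practice N = H*W can be large, so a real system would restrict the
    walk to local neighborhoods or a subsample of pixels.
    """
    # Pairwise similarities -> transition probabilities (softmax over rows).
    sim = embeddings @ embeddings.T / tau
    sim -= sim.max(axis=1, keepdims=True)          # numerical stability
    p = np.exp(sim)
    p /= p.sum(axis=1, keepdims=True)

    refined = embeddings.copy()
    for _ in range(n_steps):
        # One walk step: diffuse embeddings along pixel affinities,
        # keeping a (1 - alpha) share of the original signal.
        refined = alpha * (p @ refined) + (1.0 - alpha) * embeddings
    return refined
```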
arXiv Detail & Related papers (2024-04-27T17:16:45Z) - Learning to Rank Patches for Unbiased Image Redundancy Reduction [80.93989115541966]
Images suffer from heavy spatial redundancy because pixels in neighboring regions are spatially correlated.
Existing approaches strive to overcome this limitation by reducing less meaningful image regions.
We propose a self-supervised framework for image redundancy reduction called Learning to Rank Patches.
arXiv Detail & Related papers (2024-03-31T13:12:41Z) - DPPMask: Masked Image Modeling with Determinantal Point Processes [49.65141962357528]
Masked Image Modeling (MIM), which aims to reconstruct randomly masked images, has achieved impressive representation-learning performance.
We show that uniformly random masking widely used in previous works unavoidably loses some key objects and changes original semantic information.
To address this issue, we augment MIM with a new masking strategy namely the DPPMask.
Our method is simple yet effective and requires no extra learnable parameters when implemented within various frameworks.
arXiv Detail & Related papers (2023-03-13T13:40:39Z) - Learning to Annotate Part Segmentation with Gradient Matching [58.100715754135685]
This paper focuses on tackling semi-supervised part segmentation tasks by generating high-quality images with a pre-trained GAN.
In particular, we formulate the annotator learning as a learning-to-learn problem.
We show that our method can learn annotators from a broad range of labelled images including real images, generated images, and even analytically rendered images.
arXiv Detail & Related papers (2022-11-06T01:29:22Z) - What can we learn about a generated image corrupting its latent representation? [57.1841740328509]
We investigate the hypothesis that we can predict image quality based on its latent representation in the GAN's bottleneck.
We achieve this by corrupting the latent representation with noise and generating multiple outputs.
arXiv Detail & Related papers (2022-10-12T14:40:32Z) - Interpretations Steered Network Pruning via Amortized Inferred Saliency Maps [85.49020931411825]
Compressing Convolutional Neural Networks (CNNs) is crucial to deploying these models on edge devices with limited resources.
We propose to address the channel pruning problem from a novel perspective by leveraging the interpretations of a model to steer the pruning process.
We tackle this challenge by introducing a selector model that predicts real-time smooth saliency masks for pruned models.
arXiv Detail & Related papers (2022-09-07T01:12:11Z) - Few-shot semantic segmentation via mask aggregation [5.886986014593717]
Few-shot semantic segmentation aims to recognize novel classes with only a few labelled examples.
Previous works have typically regarded it as a pixel-wise classification problem.
We introduce a mask-based classification method for addressing this problem.
arXiv Detail & Related papers (2022-02-15T07:13:09Z) - Contrastive Unpaired Translation using Focal Loss for Patch Classification [0.0]
Contrastive Unpaired Translation is a new method for image-to-image translation.
We show that using focal loss in place of cross-entropy loss within the PatchNCE loss can improve the model's performance.
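Focal loss itself is standard: FL(p_t) = -(1 - p_t)^gamma * log(p_t), which down-weights easy examples relative to plain cross-entropy. A minimal sketch of computing it over patch-level logits follows; the PatchNCE wiring is omitted and the default gamma is illustrative.
```python
import numpy as np

def focal_loss(logits, targets, gamma=2.0, eps=1e-8):
    """Multi-class focal loss, FL(p_t) = -(1 - p_t)^gamma * log(p_t).

    logits:  (N, C) unnormalized scores, e.g. one row per patch.
    targets: (N,) integer class indices (for a PatchNCE-style loss, the index
             of the positive patch among the candidates).
    With gamma = 0 this reduces to ordinary cross-entropy.
    """
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    p_t = probs[np.arange(len(targets)), targets]          # probability of the true class
    return np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t + eps))
```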
arXiv Detail & Related papers (2021-09-25T20:22:33Z) - Just Noticeable Difference for Machine Perception and Generation of Regularized Adversarial Images with Minimal Perturbation [8.920717493647121]
We introduce a measure for machine perception inspired by the concept of Just Noticeable Difference (JND) of human perception.
We suggest an adversarial image generation algorithm, which iteratively distorts an image with additive noise until the machine learning model detects the change by outputting a false label.
We evaluate the adversarial images generated by our algorithm both qualitatively and quantitatively on CIFAR10, ImageNet, and MS COCO datasets.
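A rough sketch of the iterative procedure described above, assuming a simple Gaussian noise schedule and a plain label-flip stopping test (neither is claimed to match the paper's exact algorithm):
```python
import numpy as np

def additive_noise_until_flip(image, predict_label, step=1.0 / 255, max_iters=1000, rng=None):
    """Add small random noise each iteration until the predicted label changes.

    image:         HxWxC float array in [0, 1]
    predict_label: callable returning the predicted class for a single image (assumed interface)
    Returns (distorted_image, iterations) once the label flips, or (None, max_iters)
    if it never does within the budget.
    """
    rng = np.random.default_rng(rng)
    original_label = predict_label(image)
    distorted = image.copy()

    for i in range(1, max_iters + 1):
        # Accumulate a small additive perturbation; clipping keeps a valid image.
        distorted = np.clip(distorted + step * rng.standard_normal(image.shape), 0.0, 1.0)
        if predict_label(distorted) != original_label:
            return distorted, i  # the model now "notices" the change
    return None, max_iters
```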
arXiv Detail & Related papers (2021-02-16T11:01:55Z) - High-Resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling [122.06593036862611]
Existing image inpainting methods often produce artifacts when dealing with large holes in real applications.
We propose an iterative inpainting method with a feedback mechanism.
Experiments show that our method significantly outperforms existing methods in both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2020-05-24T13:23:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.