Hierarchical Dynamic Masks for Visual Explanation of Neural Networks
- URL: http://arxiv.org/abs/2301.04970v1
- Date: Thu, 12 Jan 2023 12:24:49 GMT
- Title: Hierarchical Dynamic Masks for Visual Explanation of Neural Networks
- Authors: Yitao Peng, Longzhen Yang, Yihang Liu, Lianghua He
- Abstract summary: Saliency methods, which generate visual explanatory maps representing the importance of image pixels for model classification, are a popular technique for explaining neural network decisions.
We propose hierarchical dynamic masks (HDM), a novel explanatory-map generation method, to enhance the granularity and comprehensiveness of saliency maps.
The proposed method significantly outperformed previous approaches in recognition and localization on natural and medical datasets.
- Score: 5.333582981327497
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Saliency methods, which generate visual explanatory maps representing the
importance of image pixels for model classification, are a popular technique for
explaining neural network decisions. This paper proposes hierarchical dynamic masks
(HDM), a novel explanatory-map generation method, to enhance the granularity and
comprehensiveness of saliency maps. First, we introduce dynamic masks (DM), in which
multiple small benchmark mask vectors roughly learn the critical information in the
image through an optimization procedure. The benchmark mask vectors then guide the
learning of larger auxiliary mask vectors so that their superimposed mask can
accurately capture fine-grained pixel-importance information and reduce sensitivity
to adversarial perturbations. In addition, we construct the HDM by concatenating DM
modules, which find and fuse, in a learning-based way, the regions of the masked
image that still contribute to the network's classification decision. Because HDM
forces each DM to analyze importance in a different area, the fused saliency map is
more comprehensive. The proposed method significantly outperformed previous
approaches in recognition and localization on natural and medical datasets.
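The abstract describes a two-stage mask optimization (a small benchmark mask guiding a larger auxiliary mask) and a chain of DM modules that each explain what earlier modules left unexplained. The PyTorch sketch below illustrates that idea under simplifying assumptions: the mask sizes, the loss terms (class score plus sparsity plus a guidance term), the number of levels, and the max-based fusion are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of the dynamic-mask (DM) and hierarchical (HDM) ideas from the
# abstract. All hyperparameters and loss terms here are assumptions for
# illustration, not the paper's actual implementation.
import torch
import torch.nn.functional as F

def dynamic_mask(model, image, target_class, base_size=8, aux_size=32,
                 steps=200, lr=0.05, sparsity=0.01):
    """Learn a small 'benchmark' mask, then a larger 'auxiliary' mask guided by
    it; return their superimposed full-resolution saliency map."""
    _, _, H, W = image.shape

    # Stage 1: coarse benchmark mask (low resolution -> robust, coarse regions).
    bench = torch.zeros(1, 1, base_size, base_size, requires_grad=True)
    opt = torch.optim.Adam([bench], lr=lr)
    for _ in range(steps):
        m = torch.sigmoid(bench)
        m_up = F.interpolate(m, size=(H, W), mode="bilinear", align_corners=False)
        score = F.softmax(model(image * m_up), dim=1)[0, target_class]
        # Keep the class score high while keeping the mask small (sparse).
        loss = -score + sparsity * m_up.mean()
        opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: finer auxiliary mask, regularized toward the benchmark mask.
    aux = torch.zeros(1, 1, aux_size, aux_size, requires_grad=True)
    opt = torch.optim.Adam([aux], lr=lr)
    bench_up = F.interpolate(torch.sigmoid(bench).detach(),
                             size=(aux_size, aux_size),
                             mode="bilinear", align_corners=False)
    for _ in range(steps):
        m = torch.sigmoid(aux)
        m_up = F.interpolate(m, size=(H, W), mode="bilinear", align_corners=False)
        score = F.softmax(model(image * m_up), dim=1)[0, target_class]
        loss = -score + sparsity * m_up.mean() + F.mse_loss(m, bench_up)
        opt.zero_grad(); loss.backward(); opt.step()

    # Superimpose coarse and fine masks into one saliency map.
    coarse = F.interpolate(torch.sigmoid(bench).detach(), size=(H, W),
                           mode="bilinear", align_corners=False)
    fine = F.interpolate(torch.sigmoid(aux).detach(), size=(H, W),
                         mode="bilinear", align_corners=False)
    return (coarse + fine) / 2

def hierarchical_dynamic_mask(model, image, target_class, levels=3):
    """Chain DM modules: each level explains what remains after suppressing the
    regions found so far; per-level maps are fused by taking the element-wise max."""
    remaining = image.clone()
    fused = torch.zeros_like(image[:, :1])
    for _ in range(levels):
        saliency = dynamic_mask(model, remaining, target_class)
        fused = torch.maximum(fused, saliency)
        # Suppress already-explained regions so the next DM looks elsewhere.
        remaining = remaining * (1.0 - saliency)
    return fused
```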
Related papers
- MaskInversion: Localized Embeddings via Optimization of Explainability Maps [49.50785637749757]
MaskInversion generates a context-aware embedding for a query image region specified by a mask at test time.
It can be used for a broad range of tasks, including open-vocabulary class retrieval, referring expression comprehension, as well as for localized captioning and image generation.
arXiv Detail & Related papers (2024-07-29T14:21:07Z)
- ColorMAE: Exploring data-independent masking strategies in Masked AutoEncoders [53.3185750528969]
Masked AutoEncoders (MAE) have emerged as a robust self-supervised framework.
We introduce a data-independent method, termed ColorMAE, which generates different binary mask patterns by filtering random noise.
We demonstrate our strategy's superiority in downstream tasks compared to random masking.
arXiv Detail & Related papers (2024-07-17T22:04:00Z)
- AnatoMask: Enhancing Medical Image Segmentation with Reconstruction-guided Self-masking [5.844539603252746]
Masked image modeling (MIM) has shown effectiveness by reconstructing randomly masked images to learn detailed representations.
We propose AnatoMask, a novel MIM method that leverages reconstruction loss to dynamically identify and mask out anatomically significant regions.
arXiv Detail & Related papers (2024-07-09T00:15:52Z)
- DynaMask: Dynamic Mask Selection for Instance Segmentation [21.50329070835023]
We develop a Mask Switch Module (MSM) with negligible computational cost to select the most suitable mask resolution for each instance.
The proposed method, namely DynaMask, brings consistent and noticeable performance improvements over other state-of-the-art methods at a moderate computational overhead.
arXiv Detail & Related papers (2023-03-14T13:01:25Z)
- Improving Masked Autoencoders by Learning Where to Mask [65.89510231743692]
Masked image modeling is a promising self-supervised learning method for visual data.
We present AutoMAE, a framework that uses Gumbel-Softmax to interlink an adversarially-trained mask generator and a mask-guided image modeling process.
In our experiments, AutoMAE is shown to provide effective pretraining models on standard self-supervised benchmarks and downstream tasks.
arXiv Detail & Related papers (2023-03-12T05:28:55Z)
- MPS-AMS: Masked Patches Selection and Adaptive Masking Strategy Based Self-Supervised Medical Image Segmentation [46.76171191827165]
We propose MPS-AMS, a self-supervised medical image segmentation method based on masked patch selection and an adaptive masking strategy.
Our proposed method greatly outperforms the state-of-the-art self-supervised baselines.
arXiv Detail & Related papers (2023-02-27T11:57:06Z)
- Shape-Aware Masking for Inpainting in Medical Imaging [49.61617087640379]
Inpainting has been proposed as a successful deep learning technique for unsupervised medical image model discovery.
We introduce a method for generating shape-aware masks for inpainting, which aims at learning the statistical shape prior.
We propose an unsupervised guided masking approach based on an off-the-shelf inpainting model and a superpixel over-segmentation algorithm.
arXiv Detail & Related papers (2022-07-12T18:35:17Z)
- Layered Depth Refinement with Mask Guidance [61.10654666344419]
We formulate a novel problem of mask-guided depth refinement that utilizes a generic mask to refine the depth prediction of SIDE models.
Our framework performs layered refinement and inpainting/outpainting, decomposing the depth map into two separate layers signified by the mask and the inverse mask.
We empirically show that our method is robust to different types of masks and initial depth predictions, accurately refining depth values in inner and outer mask boundary regions.
arXiv Detail & Related papers (2022-06-07T06:42:44Z)
- Adversarial Masking for Self-Supervised Learning [81.25999058340997]
ADIOS, a masked image modeling (MIM) framework for self-supervised learning, is proposed.
It simultaneously learns a masking function and an image encoder using an adversarial objective.
It consistently improves on state-of-the-art self-supervised learning (SSL) methods on a variety of tasks and datasets.
arXiv Detail & Related papers (2022-01-31T10:23:23Z)