Attribution Mask: Filtering Out Irrelevant Features By Recursively
Focusing Attention on Inputs of DNNs
- URL: http://arxiv.org/abs/2102.07332v1
- Date: Mon, 15 Feb 2021 04:12:04 GMT
- Title: Attribution Mask: Filtering Out Irrelevant Features By Recursively
Focusing Attention on Inputs of DNNs
- Authors: Jae-Hong Lee, Joon-Hyuk Chang
- Abstract summary: Attribution methods calculate attributions that visually explain the predictions of deep neural networks (DNNs) by highlighting important parts of the input features.
In this study, we use attributions to filter out irrelevant parts of the input features, and then verify the effectiveness of this approach by measuring the classification accuracy of a pre-trained DNN.
- Score: 13.960152426268769
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Attribution methods calculate attributions that visually explain the
predictions of deep neural networks (DNNs) by highlighting important parts of
the input features. In particular, gradient-based attribution (GBA) methods are
widely used because they can be easily implemented through automatic
differentiation. In this study, we use attributions to filter out
irrelevant parts of the input features and then verify the effectiveness of
this approach by measuring the classification accuracy of a pre-trained DNN.
This is achieved by calculating and applying an \textit{attribution mask} to
the input features and subsequently introducing the masked features to the DNN,
for which the mask is designed to recursively focus attention on the parts of
the input related to the target label. The accuracy is enhanced under a certain
condition, i.e., \textit{no implicit bias}, which can be derived based on our
theoretical insight into compressing the DNN into a single-layer neural
network. We also provide Gradient\,*\,Sign-of-Input (GxSI) to obtain the
attribution mask that further improves the accuracy. As an example, on CIFAR-10
that is modified using the attribution mask obtained from GxSI, we achieve the
accuracy ranging from 99.8\% to 99.9\% without additional training.
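The recursive masking idea from the abstract can be sketched in a few lines. This is a toy illustration, not the paper's implementation: a single-layer linear score stands in for the DNN (so the input gradient is just the weight vector), and the function names and `keep_ratio` parameter are illustrative assumptions.

```python
import numpy as np

def gxsi_attribution(w, x):
    # For a linear score y = w @ x, the input gradient is w itself,
    # so Gradient * Sign-of-Input (GxSI) reduces to w * sign(x).
    return w * np.sign(x)

def attribution_mask(attr, keep_ratio=0.5):
    # Binary mask keeping the top-`keep_ratio` fraction of features
    # by attribution value.
    k = max(1, int(round(len(attr) * keep_ratio)))
    thresh = np.sort(attr)[::-1][k - 1]
    return (attr >= thresh).astype(float)

def recursive_masking(w, x, steps=3, keep_ratio=0.5):
    # Recursively focus attention: mask the input, recompute the
    # attributions on the masked input, and repeat.
    masked = x.copy()
    for _ in range(steps):
        mask = attribution_mask(gxsi_attribution(w, masked), keep_ratio)
        masked = masked * mask
    return masked
```

In the paper the gradient comes from automatic differentiation through the full network; here the linear model only makes the mask-then-recompute loop concrete.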
Related papers
- SMOOT: Saliency Guided Mask Optimized Online Training [3.024318849346373]
Saliency-Guided Training (SGT) methods highlight the features that are prominent for the model's output during training.
SGT makes the model's final result more interpretable by masking input partially.
We propose a novel method to determine the optimal number of masked images based on input, accuracy, and model loss during the training.
arXiv Detail & Related papers (2023-10-01T19:41:49Z)
- Towards Improved Input Masking for Convolutional Neural Networks [66.99060157800403]
We propose a new masking method for CNNs we call layer masking.
We show that our method is able to eliminate or minimize the influence of the mask shape or color on the output of the model.
We also demonstrate how the shape of the mask may leak information about the class, thus affecting estimates of model reliance on class-relevant features.
arXiv Detail & Related papers (2022-11-26T19:31:49Z)
- Batch Normalization Tells You Which Filter is Important [49.903610684578716]
We propose a simple yet effective filter pruning method by evaluating the importance of each filter based on the BN parameters of pre-trained CNNs.
The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method can achieve outstanding performance.
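The summary above can be made concrete with a minimal sketch: score each filter by the magnitude of its batch-normalization scale parameter gamma and prune the lowest-scoring ones. The function names and the pruning ratio are illustrative assumptions, not the paper's API.

```python
import numpy as np

def bn_filter_importance(gamma):
    # A filter whose BN scale |gamma| is near zero contributes little
    # to the next layer's activations, so |gamma| serves as a cheap
    # importance score read straight off a pre-trained network.
    return np.abs(np.asarray(gamma, dtype=float))

def prune_filters(gamma, prune_ratio=0.5):
    # Return the (sorted) indices of filters kept after removing the
    # lowest-importance fraction.
    scores = bn_filter_importance(gamma)
    n_keep = len(scores) - int(len(scores) * prune_ratio)
    keep = np.argsort(scores)[::-1][:n_keep]
    return np.sort(keep)
```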
arXiv Detail & Related papers (2021-12-02T12:04:59Z)
- Fairness via Representation Neutralization [60.90373932844308]
We propose a new mitigation technique, namely, Representation Neutralization for Fairness (RNF).
RNF achieves fairness by debiasing only the task-specific classification head of DNN models.
Experimental results over several benchmark datasets demonstrate that our RNF framework effectively reduces discrimination in DNN models.
arXiv Detail & Related papers (2021-06-23T22:26:29Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU, making it compute-efficient and deployable without requiring specialized accelerators.
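The monitor-activations-and-score-likelihood idea can be sketched as follows. DAAIN itself uses a normalizing flow; here a diagonal Gaussian stands in as the simplest possible density estimator, and the class name and threshold logic are illustrative assumptions.

```python
import numpy as np

class ActivationDensityMonitor:
    # Fits a diagonal Gaussian to activations recorded on clean data;
    # inputs with unusually low log-likelihood are flagged as OOD or
    # adversarial. (A stand-in for DAAIN's normalizing-flow estimator.)
    def fit(self, acts):
        self.mu = acts.mean(axis=0)
        self.var = acts.var(axis=0) + 1e-6  # avoid division by zero
        return self

    def log_likelihood(self, a):
        # Diagonal-Gaussian log-density of one activation vector.
        return -0.5 * np.sum((a - self.mu) ** 2 / self.var
                             + np.log(2 * np.pi * self.var))
```

In practice one would record activations from a fixed layer of the monitored network, fit on clean data, and pick a likelihood threshold on a validation set.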
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- Implementing a foveal-pit inspired filter in a Spiking Convolutional Neural Network: a preliminary study [0.0]
We have presented a Spiking Convolutional Neural Network (SCNN) that incorporates retinal foveal-pit inspired Difference of Gaussian filters and rank-order encoding.
The model is trained using a variant of the backpropagation algorithm adapted to work with spiking neurons, as implemented in the Nengo library.
The network has achieved up to 90% accuracy, where loss is calculated using the cross-entropy function.
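The foveal-pit inspired Difference of Gaussians (DoG) filter mentioned above is straightforward to construct: a narrow center Gaussian minus a wide surround Gaussian, a classic model of retinal center-surround receptive fields. The kernel sizes and sigmas below are illustrative, not the paper's settings.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # 2-D Gaussian kernel, normalized to sum to 1.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def dog_kernel(size, sigma_center, sigma_surround):
    # Difference of Gaussians: excitatory center minus inhibitory
    # surround; the result sums to ~0, so it responds to contrast
    # rather than uniform brightness.
    return gaussian_kernel(size, sigma_center) - gaussian_kernel(size, sigma_surround)
```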
arXiv Detail & Related papers (2021-05-29T15:28:30Z)
- Face Attributes as Cues for Deep Face Recognition Understanding [4.132205118175555]
We use hidden layers to predict face attributes using a variable selection technique.
Gender, eyeglasses and hat usage can be predicted with over 96% accuracy even when only a single neural output is used to predict each attribute.
Our experiments show that, inside DCNNs optimized for face identification, there exist latent neurons encoding face attributes almost as accurately as DCNNs optimized for these attributes.
arXiv Detail & Related papers (2021-05-14T19:54:24Z)
- Image Inpainting by End-to-End Cascaded Refinement with Mask Awareness [66.55719330810547]
Inpainting arbitrary missing regions is challenging because learning valid features for various masked regions is nontrivial.
We propose a novel mask-aware inpainting solution that learns multi-scale features for missing regions in the encoding phase.
Our framework is validated both quantitatively and qualitatively via extensive experiments on three public datasets.
arXiv Detail & Related papers (2021-04-28T13:17:47Z)
- Boundary-preserving Mask R-CNN [38.15409855290749]
We propose a conceptually simple yet effective Boundary-preserving Mask R-CNN (BMask R-CNN) to leverage object boundary information to improve mask localization accuracy.
BMask R-CNN contains a boundary-preserving mask head in which object boundary and mask are mutually learned via feature fusion blocks.
Without bells and whistles, BMask R-CNN outperforms Mask R-CNN by a considerable margin on the COCO dataset.
arXiv Detail & Related papers (2020-07-17T11:54:02Z)
- Deep Feature Consistent Variational Autoencoder [46.25741696270528]
We present a novel method for constructing a Variational Autoencoder (VAE).
Instead of using pixel-by-pixel loss, we enforce deep feature consistency between the input and the output of a VAE.
We also show that our method can produce latent vectors that can capture the semantic information of face expressions.
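The deep-feature-consistency loss described above compares a fixed network's features of the input and the reconstruction rather than raw pixels. In the paper the feature extractor is a pre-trained perceptual network; below, a fixed random projection with ReLU stands in so the sketch is self-contained, and all names are illustrative.

```python
import numpy as np

def make_feature_extractor(in_dim, feat_dim, seed=0):
    # A frozen random projection + ReLU stands in for the pre-trained
    # perceptual network whose features define the loss.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((feat_dim, in_dim))
    return lambda x: np.maximum(W @ x, 0.0)

def feature_consistency_loss(phi, x, x_hat):
    # Mean squared distance in feature space between the input and the
    # VAE reconstruction, replacing the usual pixel-by-pixel loss.
    return float(np.mean((phi(x) - phi(x_hat)) ** 2))
```

During training this loss term would be added to the VAE's KL term in place of the pixel reconstruction loss.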
arXiv Detail & Related papers (2016-10-02T15:48:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.