Targeted Background Removal Creates Interpretable Feature Visualizations
- URL: http://arxiv.org/abs/2306.13178v1
- Date: Thu, 22 Jun 2023 19:39:06 GMT
- Title: Targeted Background Removal Creates Interpretable Feature Visualizations
- Authors: Ian E. Nielsen, Erik Grundeland, Joseph Snedeker, Ghulam Rasool, Ravi P. Ramachandran
- Abstract summary: We argue that by using background removal techniques as a form of robust training, a network is forced to learn more human recognizable features.
Four different training methods were used to verify this hypothesis.
The feature visualization results show that the background removed images reveal a significant improvement over the baseline model.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feature visualization is used to visualize learned features for black box
machine learning models. Our approach explores an altered training process to
improve interpretability of the visualizations. We argue that by using
background removal techniques as a form of robust training, a network is forced
to learn more human recognizable features, namely, by focusing on the main
object of interest without any distractions from the background. Four different
training methods were used to verify this hypothesis. The first used unmodified
pictures. The second used a black background. The third utilized Gaussian noise
as the background. The fourth approach employed a mix of background removed
images and unmodified images. The feature visualization results show that the
background removed images reveal a significant improvement over the baseline
model. These new results displayed easily recognizable features from their
respective classes, unlike the model trained on unmodified data.
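The four training conditions described in the abstract can be sketched as a simple augmentation step. This is a minimal illustration, not the authors' code: the function names, the binary foreground-mask input, and the Gaussian-noise parameters are assumptions.

```python
import numpy as np

def apply_background(image, mask, mode="unmodified", rng=None):
    """Replace the background of `image` (H, W, C, floats in [0, 1])
    according to one of the training conditions, where `mask` (H, W)
    is 1 on the foreground object and 0 on the background."""
    rng = rng or np.random.default_rng(0)
    m = mask[..., None].astype(image.dtype)   # broadcast mask over channels
    if mode == "unmodified":
        return image                          # condition 1: original picture
    if mode == "black":
        return image * m                      # condition 2: black background
    if mode == "noise":
        # condition 3: Gaussian noise fills the background (parameters assumed)
        noise = rng.normal(0.5, 0.15, image.shape).clip(0.0, 1.0)
        return image * m + noise * (1.0 - m)
    raise ValueError(f"unknown mode: {mode}")

def mixed_batch(images, masks, p=0.5, rng=None):
    """Condition 4: a mix of background-removed and unmodified images,
    choosing per image with probability `p` (an assumed mixing rule)."""
    rng = rng or np.random.default_rng(0)
    return [apply_background(img, msk, "black" if rng.random() < p else "unmodified", rng)
            for img, msk in zip(images, masks)]
```

A training pipeline would apply one of these modes to each image (paired with its segmentation mask) before feeding the batch to the network.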
Related papers
- Data Attribution for Text-to-Image Models by Unlearning Synthesized Images [71.23012718682634]
The goal of data attribution for text-to-image models is to identify the training images that most influence the generation of a new image.
We propose a new approach that efficiently identifies highly-influential images.
arXiv Detail & Related papers (2024-06-13T17:59:44Z)
- Supervised Deep Learning for Content-Aware Image Retargeting with Fourier Convolutions [11.031841470875571]
Image retargeting aims to alter the size of an image with attention to its contents.
Labeled datasets are unavailable for training deep learning models on image retargeting tasks.
Regular convolutional neural networks cannot generate images of different sizes in inference time.
arXiv Detail & Related papers (2023-06-12T19:17:44Z)
- Masked Image Training for Generalizable Deep Image Denoising [53.03126421917465]
We present a novel approach to enhance the generalization performance of denoising networks.
Our method involves masking random pixels of the input image and reconstructing the missing information during training.
Our approach exhibits better generalization ability than other deep learning models and is directly applicable to real-world scenarios.
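The masking step in this summary can be sketched roughly as follows; the function name `mask_pixels`, the mask ratio, and the zero-fill choice are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def mask_pixels(image, mask_ratio=0.8, rng=None):
    """Randomly zero out a fraction of the pixels of `image` (H, W, C).
    A denoising network would then be trained to reconstruct the
    missing values, encouraging reliance on broader image context.
    Returns the masked image and the boolean keep-map."""
    rng = rng or np.random.default_rng(0)
    keep = rng.random(image.shape[:2]) >= mask_ratio  # True where pixels survive
    return image * keep[..., None], keep
```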
arXiv Detail & Related papers (2023-03-23T09:33:44Z)
- CLAD: A Contrastive Learning based Approach for Background Debiasing [43.0296255565593]
We introduce a contrastive learning-based approach to mitigate the background bias in CNNs.
We achieve state-of-the-art results on the Background Challenge dataset, outperforming the previous benchmark with a margin of 4.1%.
arXiv Detail & Related papers (2022-10-06T08:33:23Z)
- On Background Bias in Deep Metric Learning [5.368313160283353]
We analyze the influence of the image background on Deep Metric Learning models.
We show that replacing the background of images during training with random background images alleviates this issue.
arXiv Detail & Related papers (2022-10-04T13:57:39Z)
- Rectifying the Shortcut Learning of Background: Shared Object Concentration for Few-Shot Image Recognition [101.59989523028264]
Few-Shot image classification aims to utilize pretrained knowledge learned from a large-scale dataset to tackle a series of downstream classification tasks.
We propose COSOC, a novel Few-Shot Learning framework, to automatically figure out foreground objects at both the pretraining and evaluation stages.
arXiv Detail & Related papers (2021-07-16T07:46:41Z)
- Image Restoration by Deep Projected GSURE [115.57142046076164]
Ill-posed inverse problems appear in many image processing applications, such as deblurring and super-resolution.
We propose a new image restoration framework that is based on minimizing a loss function that includes a "projected version" of the Generalized Stein Unbiased Risk Estimator (GSURE) and parameterization of the latent image by a CNN.
arXiv Detail & Related papers (2021-02-04T08:52:46Z)
- Saliency-driven Class Impressions for Feature Visualization of Deep Neural Networks [55.11806035788036]
It is advantageous to visualize the features considered to be essential for classification.
Existing visualization methods develop high confidence images consisting of both background and foreground features.
In this work, we propose a saliency-driven approach to visualize discriminative features that are considered most important for a given task.
arXiv Detail & Related papers (2020-07-31T06:11:06Z)
- Cross-Identity Motion Transfer for Arbitrary Objects through Pose-Attentive Video Reassembling [40.20163225821707]
Given a source image and a driving video, our networks animate the subject in the source images according to the motion in the driving video.
In our attention mechanism, dense similarities between the learned keypoints in the source and the driving images are computed.
To reduce the training-testing discrepancy of the self-supervised learning, a novel cross-identity training scheme is additionally introduced.
arXiv Detail & Related papers (2020-07-17T07:21:12Z)
- Noise or Signal: The Role of Image Backgrounds in Object Recognition [93.55720207356603]
We create a toolkit for disentangling foreground and background signal on ImageNet images.
We find that (a) models can achieve non-trivial accuracy by relying on the background alone, (b) models often misclassify images even in the presence of correctly classified foregrounds.
arXiv Detail & Related papers (2020-06-17T16:54:43Z)
- Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.