Adversarially robust segmentation models learn perceptually-aligned
gradients
- URL: http://arxiv.org/abs/2204.01099v1
- Date: Sun, 3 Apr 2022 16:04:52 GMT
- Title: Adversarially robust segmentation models learn perceptually-aligned
gradients
- Authors: Pedro Sandoval-Segura
- Abstract summary: We show that adversarially-trained semantic segmentation networks can be used to perform image inpainting and generation.
We argue that perceptually-aligned gradients promote a better understanding of a neural network's learned representations and aid in making neural networks more interpretable.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The effects of adversarial training on semantic segmentation networks have not
been thoroughly explored. While previous work has shown that
adversarially-trained image classifiers can be used to perform image synthesis,
we have yet to understand how best to leverage an adversarially-trained
segmentation network to do the same. Using a simple optimizer, we demonstrate
that adversarially-trained semantic segmentation networks can be used to
perform image inpainting and generation. Our experiments demonstrate that
adversarially-trained segmentation networks are more robust and indeed exhibit
perceptually-aligned gradients which help in producing plausible image
inpaintings. We seek to place additional weight behind the hypothesis that
adversarially robust models exhibit gradients that are more
perceptually-aligned with human vision. Through image synthesis, we argue that
perceptually-aligned gradients promote a better understanding of a neural
network's learned representations and aid in making neural networks more
interpretable.
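The "simple optimizer" the abstract describes can be sketched as projected gradient ascent on the input pixels: the missing region is repeatedly nudged along the gradient of a model score with respect to the input. The sketch below is a minimal, hypothetical stand-in — `score_grad` plays the role of the input gradient of a robust segmentation network's target-class logit, which here is replaced by a toy differentiable objective so the loop is self-contained.

```python
import numpy as np

def inpaint_by_gradient_ascent(image, mask, score_grad, steps=100, lr=0.1):
    """Fill masked pixels by ascending the gradient of a model score.

    image:      (H, W) array; values in the masked region are arbitrary.
    mask:       (H, W) boolean array, True where pixels are missing.
    score_grad: callable returning d(score)/d(image). For a robust
                segmentation network this would be the input gradient of
                the target-class logit (hypothetical stand-in here).
    """
    x = image.copy()
    for _ in range(steps):
        g = score_grad(x)
        x[mask] += lr * g[mask]      # update only the missing pixels
        x = np.clip(x, 0.0, 1.0)     # keep pixels in a valid range
    return x

# Toy "perceptually-aligned gradient": pulls pixels toward a smooth
# horizontal ramp, standing in for a robust network's input gradient.
target = np.linspace(0.0, 1.0, 8).reshape(1, 8).repeat(8, axis=0)
grad_fn = lambda x: target - x

img = np.zeros((8, 8))
img[:, :4] = target[:, :4]           # left half of the image is known
mask = np.zeros((8, 8), dtype=bool)
mask[:, 4:] = True                   # right half is missing

filled = inpaint_by_gradient_ascent(img, mask, grad_fn)
```

With a real adversarially-trained network, `score_grad` would come from automatic differentiation through the model; the claim of the paper is that, for robust models, those gradients point toward perceptually plausible image content, so the same loop yields coherent inpaintings.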
Related papers
- Interpret the Predictions of Deep Networks via Re-Label Distillation [22.2326257454041]
In this work, we propose a re-label distillation approach to learn a direct map from the input to the prediction in a self-supervised manner.
The synthetic images can be annotated into one of two classes by identifying whether their labels shift.
In this manner, these re-labeled synthetic images can well describe the local classification mechanism of the deep network.
arXiv Detail & Related papers (2024-09-20T00:46:22Z)
- Improving Network Interpretability via Explanation Consistency Evaluation [56.14036428778861]
We propose a framework that acquires more explainable activation heatmaps and simultaneously increases the model performance.
Specifically, our framework introduces a new metric, i.e., explanation consistency, to reweight the training samples adaptively in model learning.
Our framework then promotes the model learning by paying closer attention to those training samples with a high difference in explanations.
arXiv Detail & Related papers (2024-08-08T17:20:08Z)
- Deep Semantic Statistics Matching (D2SM) Denoising Network [70.01091467628068]
We introduce the Deep Semantic Statistics Matching (D2SM) Denoising Network.
It exploits the semantic features of pretrained classification networks and implicitly matches the probabilistic distribution of clear images at the semantic feature space.
By learning to preserve the semantic distribution of denoised images, we empirically find our method significantly improves the denoising capabilities of networks.
arXiv Detail & Related papers (2022-07-19T14:35:42Z)
- Self-supervised Contrastive Learning for Cross-domain Hyperspectral Image Representation [26.610588734000316]
This paper introduces a self-supervised learning framework suitable for hyperspectral images that are inherently challenging to annotate.
The proposed framework architecture leverages a cross-domain CNN, allowing for learning representations from different hyperspectral images.
The experimental results demonstrate the advantage of the proposed self-supervised representation over models trained from scratch or other transfer learning methods.
arXiv Detail & Related papers (2022-02-08T16:16:45Z)
- Counterfactual Generative Networks [59.080843365828756]
We propose to decompose the image generation process into independent causal mechanisms that we train without direct supervision.
By exploiting appropriate inductive biases, these mechanisms disentangle object shape, object texture, and background.
We show that the counterfactual images can improve out-of-distribution robustness with a marginal drop in performance on the original classification task.
arXiv Detail & Related papers (2020-06-24T06:48:38Z)
- Disentangle Perceptual Learning through Online Contrastive Learning [16.534353501066203]
Pursuing realistic results according to human visual perception is the central concern in the image transformation tasks.
In this paper, we argue that, among the feature representations from the pre-trained classification network, only limited dimensions are related to human visual perception.
Under such an assumption, we try to disentangle the perception-relevant dimensions from the representation through our proposed online contrastive learning.
arXiv Detail & Related papers (2020-04-27T20:43:20Z)
- A Disentangling Invertible Interpretation Network for Explaining Latent Representations [19.398202091883366]
We formulate interpretation as a translation of hidden representations onto semantic concepts that are comprehensible to the user.
The proposed invertible interpretation network can be transparently applied on top of existing architectures.
We present an efficient approach to define semantic concepts by only sketching two images and also an unsupervised strategy.
arXiv Detail & Related papers (2020-04-01T09:31:10Z)
- Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes [51.31334977346847]
We train networks to form coarse impressions based on the information in higher bit planes, and use the lower bit planes only to refine their prediction.
We demonstrate that, by imposing consistency on the representations learned across differently quantized images, the adversarial robustness of networks improves significantly.
arXiv Detail & Related papers (2020-03-27T21:45:41Z)
- Deep CG2Real: Synthetic-to-Real Translation via Image Disentanglement [78.58603635621591]
Training an unpaired synthetic-to-real translation network in image space is severely under-constrained.
We propose a semi-supervised approach that operates on the disentangled shading and albedo layers of the image.
Our two-stage pipeline first learns to predict accurate shading in a supervised fashion using physically-based renderings as targets.
arXiv Detail & Related papers (2020-02-19T10:32:03Z)
- Weakly-Supervised Semantic Segmentation by Iterative Affinity Learning [86.45526827323954]
Weakly-supervised semantic segmentation is a challenging task as no pixel-wise label information is provided for training.
We propose an iterative algorithm to learn such pairwise relations.
We show that the proposed algorithm performs favorably against the state-of-the-art methods.
arXiv Detail & Related papers (2020-02-19T10:32:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.