Semantically Adversarial Learnable Filters
- URL: http://arxiv.org/abs/2008.06069v3
- Date: Tue, 5 Apr 2022 21:03:21 GMT
- Title: Semantically Adversarial Learnable Filters
- Authors: Ali Shahin Shamsabadi, Changjae Oh, Andrea Cavallaro
- Abstract summary: The proposed framework combines a structure loss and a semantic adversarial loss in a multi-task objective function to train a fully convolutional neural network.
The structure loss helps generate perturbations whose type and magnitude are defined by a target image processing filter.
The semantic adversarial loss considers groups of (semantic) labels to craft perturbations that prevent the filtered image from being classified with a label in the same group.
- Score: 53.3223426679514
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an adversarial framework to craft perturbations that mislead
classifiers by accounting for the image content and the semantics of the
labels. The proposed framework combines a structure loss and a semantic
adversarial loss in a multi-task objective function to train a fully
convolutional neural network. The structure loss helps generate perturbations
whose type and magnitude are defined by a target image processing filter. The
semantic adversarial loss considers groups of (semantic) labels to craft
perturbations that prevent the filtered image from being classified with a
label in the same group. We validate our framework with three different target
filters, namely detail enhancement, log transformation and gamma correction
filters; and evaluate the adversarially filtered images against three
classifiers, ResNet50, ResNet18 and AlexNet, pre-trained on ImageNet. We show
that the proposed framework generates filtered images with a high success rate,
robustness, and transferability to unseen classifiers. We also discuss
objective and subjective evaluations of the adversarial perturbations.
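Two of the target filters named above, log transformation and gamma correction, are standard point operations. A minimal numpy sketch of these filters, with a plain MSE term as an illustrative stand-in for the paper's structure loss (the function names and the MSE choice are assumptions, not the paper's implementation):

```python
import numpy as np

def gamma_correction(img, gamma=0.7):
    """Point-wise gamma correction: out = img ** gamma, for img in [0, 1]."""
    return np.clip(img, 0.0, 1.0) ** gamma

def log_transform(img, c=1.0):
    """Log transformation: out = c * log(1 + img), rescaled back to [0, 1]."""
    out = c * np.log1p(np.clip(img, 0.0, 1.0))
    return out / np.log1p(1.0)  # log1p(1) is the maximum for inputs in [0, 1]

def structure_loss(perturbed, original, target_filter):
    """MSE between the adversarial image and the target-filtered original,
    standing in for the paper's structure loss (illustrative only)."""
    return float(np.mean((perturbed - target_filter(original)) ** 2))

rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))   # toy RGB image in [0, 1]
adv = gamma_correction(img)   # a "perturbation" that exactly matches the filter
print(structure_loss(adv, img, gamma_correction))  # 0.0: structure term vanishes
```

In the paper this term is combined with the semantic adversarial loss in a multi-task objective; here only the filter-matching side is shown.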
Related papers
- Towards Image Semantics and Syntax Sequence Learning [8.033697392628424]
We introduce the concept of "image grammar", consisting of "image semantics" and "image syntax".
We propose a weakly supervised two-stage approach to learn the image grammar relative to a class of visual objects/scenes.
Our framework is trained to reason over patch semantics and detect faulty syntax.
arXiv Detail & Related papers (2024-01-31T00:16:02Z)
- Dual Structure-Aware Image Filterings for Semi-supervised Medical Image Segmentation [11.663088388838073]
We propose novel dual structure-aware image filterings (DSAIF) as the image-level variations for semi-supervised medical image segmentation.
Motivated by connected filtering that simplifies image via filtering in structure-aware tree-based image representation, we resort to the dual contrast invariant Max-tree and Min-tree representation.
Applying the proposed DSAIF to mutually supervised networks decreases the consensus of their erroneous predictions on unlabeled images.
arXiv Detail & Related papers (2023-12-12T13:44:53Z)
- Counterfactual Image Generation for adversarially robust and interpretable Classifiers [1.3859669037499769]
We propose a unified framework leveraging image-to-image translation Generative Adversarial Networks (GANs) to produce counterfactual samples.
This is achieved by combining the classifier and discriminator into a single model that attributes real images to their respective classes and flags generated images as "fake".
We show how the model exhibits improved robustness to adversarial attacks, and we show how the discriminator's "fakeness" value serves as an uncertainty measure of the predictions.
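Folding the discriminator into the classifier, as described above, amounts to a (K+1)-way output head whose extra class marks generated images. A hedged numpy sketch of how such a head yields both a class prediction and a "fakeness" uncertainty score (the logit values and function names are illustrative, not the paper's architecture):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify_with_fakeness(logits):
    """logits: (..., K + 1), where the last entry is the 'fake' class.
    Returns the predicted real class and a 'fakeness' uncertainty score."""
    probs = softmax(logits)
    fakeness = probs[..., -1]                 # discriminator-style score
    real_class = probs[..., :-1].argmax(axis=-1)
    return real_class, fakeness

# A confident real prediction vs. a suspected generated/counterfactual input
real_logits = np.array([4.0, 0.5, 0.1, -2.0])  # K = 3 classes + 'fake'
fake_logits = np.array([0.2, 0.3, 0.1, 3.5])
print(classify_with_fakeness(real_logits))
print(classify_with_fakeness(fake_logits))
```

The first input is assigned class 0 with near-zero fakeness, while the second receives a high fakeness score, mirroring the use of the discriminator output as an uncertainty measure.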
arXiv Detail & Related papers (2023-10-01T18:50:29Z)
- Fine-grained Recognition with Learnable Semantic Data Augmentation [68.48892326854494]
Fine-grained image recognition is a longstanding computer vision challenge.
We propose diversifying the training data at the feature-level to alleviate the discriminative region loss problem.
Our method significantly improves the generalization performance on several popular classification networks.
arXiv Detail & Related papers (2023-09-01T11:15:50Z)
- Collaborative Group: Composed Image Retrieval via Consensus Learning from Noisy Annotations [67.92679668612858]
We propose the Consensus Network (Css-Net), inspired by the psychological concept that groups outperform individuals.
Css-Net comprises two core components: (1) a consensus module with four diverse compositors, each generating distinct image-text embeddings; and (2) a Kullback-Leibler divergence loss that encourages learning of inter-compositor interactions.
On benchmark datasets, particularly FashionIQ, Css-Net demonstrates marked improvements. Notably, it achieves significant recall gains, with a 2.77% increase in R@10 and a 6.67% boost in R@50, underscoring its effectiveness.
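The Kullback-Leibler term in Css-Net pushes the compositors toward agreement. A minimal sketch of such a consensus penalty over hypothetical compositor output distributions (a simplified illustration, not Css-Net's actual loss):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions (entries sum to 1)."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def consensus_loss(dists):
    """Sum of pairwise KL divergences across compositor outputs,
    penalizing compositors whose distributions disagree."""
    total = 0.0
    for i, p in enumerate(dists):
        for j, q in enumerate(dists):
            if i != j:
                total += kl_divergence(p, q)
    return total

agree = [np.array([0.7, 0.2, 0.1])] * 2
disagree = [np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.2, 0.7])]
print(consensus_loss(agree))     # 0.0: identical compositors, no penalty
print(consensus_loss(disagree))  # > 0: disagreement is penalized
```

Minimizing this term drives the four compositors toward a shared prediction, which is the consensus-learning idea the summary describes.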
arXiv Detail & Related papers (2023-06-03T11:50:44Z)
- Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation [16.109860499330562]
We introduce an uncertainty-based approach for the detection of adversarial attacks in semantic segmentation.
We demonstrate the ability of our approach to detect perturbed images across multiple types of adversarial attacks.
arXiv Detail & Related papers (2023-05-22T08:36:35Z)
- Probabilistic Warp Consistency for Weakly-Supervised Semantic Correspondences [118.6018141306409]
We propose Probabilistic Warp Consistency, a weakly-supervised learning objective for semantic matching.
We first construct an image triplet by applying a known warp to one of the images in a pair depicting different instances of the same object class.
Our objective also brings substantial improvements in the strongly-supervised regime, when combined with keypoint annotations.
arXiv Detail & Related papers (2022-03-08T18:55:11Z)
- Unsharp Mask Guided Filtering [53.14430987860308]
The goal of this paper is guided image filtering, which emphasizes the importance of structure transfer during filtering.
We propose a new and simplified formulation of the guided filter inspired by unsharp masking.
Our formulation enjoys a filtering prior to a low-pass filter and enables explicit structure transfer by estimating a single coefficient.
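Classic unsharp masking, which the guided-filter formulation above builds on, adds a scaled high-pass residual back to the image. A minimal numpy sketch, where the box blur and the `amount` coefficient are illustrative stand-ins for the paper's guided low-pass filter and its single estimated coefficient:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box low-pass filter with edge padding (2-D images)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img, amount=1.0, k=3):
    """Unsharp masking: add the high-pass residual (img - low_pass)
    back to the image, scaled by a single coefficient `amount`."""
    return img + amount * (img - box_blur(img, k))

flat = np.full((5, 5), 0.5)
print(np.allclose(unsharp_mask(flat), flat))  # True: no structure, nothing sharpened
```

Only where the image departs from its low-pass version does the residual contribute, which is the explicit structure-transfer behavior the summary refers to.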
arXiv Detail & Related papers (2021-06-02T19:15:34Z)
- Convolutional Neural Networks from Image Markers [62.997667081978825]
Feature Learning from Image Markers (FLIM) was recently proposed to estimate convolutional filters, with no backpropagation, from strokes drawn by a user on very few images.
This paper extends FLIM for fully connected layers and demonstrates it on different image classification problems.
The results show that FLIM-based convolutional neural networks can outperform the same architecture trained from scratch by backpropagation.
arXiv Detail & Related papers (2020-12-15T22:58:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.