Contextual Fusion For Adversarial Robustness
- URL: http://arxiv.org/abs/2011.09526v1
- Date: Wed, 18 Nov 2020 20:13:23 GMT
- Title: Contextual Fusion For Adversarial Robustness
- Authors: Aiswarya Akumalla, Seth Haney, Maksim Bazhenov
- Abstract summary: Deep neural networks are usually designed to process one particular information stream and are susceptible to various types of adversarial perturbations.
We developed a fusion model using a combination of background and foreground features extracted in parallel from Places-CNN and Imagenet-CNN.
For gradient-based attacks, our results show that fusion allows for significant improvements in classification without decreasing performance on unperturbed data.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mammalian brains handle complex reasoning tasks in a gestalt manner by
integrating information from regions of the brain that are specialised to
individual sensory modalities. This allows for improved robustness and better
generalisation ability. In contrast, deep neural networks are usually designed
to process one particular information stream and are susceptible to various types
of adversarial perturbations. While many methods exist for detecting and
defending against adversarial attacks, they do not generalise across a range of
attacks and negatively affect performance on clean, unperturbed data. We
developed a fusion model using a combination of background and foreground
features extracted in parallel from Places-CNN and Imagenet-CNN. We tested the
benefits of the fusion approach for preserving adversarial robustness against
human-perceivable (e.g., Gaussian blur) and network-perceivable (e.g.,
gradient-based) attacks on the CIFAR-10 and MS COCO data sets. For gradient-based
attacks, our results show that fusion allows for significant improvements in
classification without decreasing performance on unperturbed data and without
the need for adversarial retraining. Our fused model also revealed improvements
for Gaussian blur-type perturbations. The increase in performance from the
fusion approach depended on the variability of the image contexts; larger
increases were seen for classes of images with larger differences in their
contexts. We also demonstrate the effect of regularization to bias the
classifier decision in the presence of a known adversary. We propose that this
biologically inspired approach of integrating information across multiple
modalities provides a new way to improve adversarial robustness that can be
complementary to current state-of-the-art approaches.
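A minimal sketch of the late-fusion idea described in the abstract is given below. It assumes two ResNet-18 backbones standing in for Imagenet-CNN and Places-CNN and a single linear fusion head over the concatenated foreground and background features; the backbone choice, layer sizes, and fusion head are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models


class ContextualFusionClassifier(nn.Module):
    """Late-fusion sketch: foreground (object) and background (context)
    features are extracted in parallel and concatenated before a small
    fusion classifier. Architecture details are assumptions."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Foreground stream: object-centric features from an ImageNet-trained backbone.
        self.foreground = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        self.foreground.fc = nn.Identity()  # expose the 512-d pooled features
        # Background stream: scene/context features. A Places365-trained backbone
        # would be loaded here; an ImageNet backbone stands in for this sketch.
        self.background = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        self.background.fc = nn.Identity()
        # Fusion head over the concatenated feature streams.
        self.fusion_head = nn.Linear(512 + 512, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fg = self.foreground(x)              # (B, 512) foreground features
        bg = self.background(x)              # (B, 512) background features
        fused = torch.cat([fg, bg], dim=1)   # (B, 1024) fused representation
        return self.fusion_head(fused)


if __name__ == "__main__":
    model = ContextualFusionClassifier(num_classes=10).eval()
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 10])
```

One plausible training recipe, consistent with the abstract's claim that no adversarial retraining is required, is to keep both pre-trained streams fixed and fit only the fusion head on the target data set; this recipe is an assumption rather than a detail stated in the abstract.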
Related papers
- MOREL: Enhancing Adversarial Robustness through Multi-Objective Representation Learning [1.534667887016089]
Deep neural networks (DNNs) are vulnerable to slight adversarial perturbations.
We show that strong feature representation learning during training can significantly enhance the original model's robustness.
We propose MOREL, a multi-objective feature representation learning approach that encourages classification models to produce similar features for inputs within the same class despite perturbations.
arXiv Detail & Related papers (2024-10-02T16:05:03Z) - Contextual fusion enhances robustness to image blurring [3.5953590176048458]
Mammalian brains handle complex reasoning by integrating information across brain regions specialized for particular sensory modalities.
We developed a fusion model combining background and foreground features from CNNs trained on Imagenet and Places365.
We tested its robustness to human-perceivable perturbations on MS COCO.
arXiv Detail & Related papers (2024-06-07T17:50:18Z) - PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant
Semantic Segmentation [50.556961575275345]
We propose a perception-aware fusion framework to promote segmentation robustness in adversarial scenes.
We show that our scheme substantially enhances the robustness, with gains of 15.3% mIoU, compared with advanced competitors.
arXiv Detail & Related papers (2023-08-08T01:55:44Z) - Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z) - Disentangled Contrastive Collaborative Filtering [36.400303346450514]
Graph contrastive learning (GCL) has exhibited powerful performance in addressing the supervision label shortage issue.
We propose a Disentangled Contrastive Collaborative Filtering framework (DCCF) to realize intent disentanglement with self-supervised augmentation.
Our DCCF is able to not only distill finer-grained latent factors from the entangled self-supervision signals but also alleviate the augmentation-induced noise.
arXiv Detail & Related papers (2023-05-04T11:53:38Z) - Boosting Adversarial Transferability via Fusing Logits of Top-1
Decomposed Feature [36.78292952798531]
We propose a Singular Value Decomposition (SVD)-based feature-level attack method.
Our approach is inspired by the discovery that eigenvectors associated with the larger singular values of the middle-layer features exhibit superior generalization and attention properties (a minimal sketch of this decomposition step appears after this list).
arXiv Detail & Related papers (2023-05-02T12:27:44Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z) - Guided Interpolation for Adversarial Training [73.91493448651306]
As training progresses, the training data becomes less and less attackable, undermining the robustness enhancement.
We propose the guided interpolation framework (GIF), which employs the previous epoch's meta information to guide the data's adversarial variants.
Compared with the vanilla mixup, the GIF can provide a higher ratio of attackable data, which is beneficial to the robustness enhancement.
arXiv Detail & Related papers (2021-02-15T03:55:08Z) - Proactive Pseudo-Intervention: Causally Informed Contrastive Learning
For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z) - Stylized Adversarial Defense [105.88250594033053]
Adversarial training creates perturbation patterns and includes them in the training set to robustify the model.
We propose to exploit additional information from the feature space to craft stronger adversaries.
Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses.
arXiv Detail & Related papers (2020-07-29T08:38:10Z)
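The SVD-based attack summarized above is described only at a high level; the following hedged sketch (referenced in that entry) shows one way a batch of middle-layer feature maps can be reduced to the top-1 (largest singular value) component it mentions. Which layer is decomposed and how the resulting logits are fused into the attack loss are assumptions not specified in the summary.

```python
import torch


def top1_decomposed_feature(feat: torch.Tensor) -> torch.Tensor:
    """Keep only the rank-1 component (largest singular value) of each
    middle-layer feature map. feat: (B, C, H, W). This is a sketch of the
    decomposition step described in the abstract, not the paper's code."""
    b, c, h, w = feat.shape
    flat = feat.reshape(b, c, h * w)                         # (B, C, HW)
    u, s, vh = torch.linalg.svd(flat, full_matrices=False)   # batched SVD
    # Rank-1 reconstruction: sigma_1 * u_1 * v_1^T for each sample.
    rank1 = s[:, :1, None] * (u[:, :, :1] @ vh[:, :1, :])
    return rank1.reshape(b, c, h, w)
```

In a transferability setting, the logits obtained by forwarding this rank-1 feature through the remaining layers would presumably be combined with the original logits to form the attack objective; the exact weighting is not given in the summary above.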