PEARL: Preprocessing Enhanced Adversarial Robust Learning of Image
Deraining for Semantic Segmentation
- URL: http://arxiv.org/abs/2305.15709v1
- Date: Thu, 25 May 2023 04:44:17 GMT
- Title: PEARL: Preprocessing Enhanced Adversarial Robust Learning of Image
Deraining for Semantic Segmentation
- Authors: Xianghao Jiao, Yaohua Liu, Jiaxin Gao, Xinyuan Chu, Risheng Liu, Xin
Fan
- Abstract summary: We present the first attempt to improve the robustness of semantic segmentation tasks by simultaneously handling different types of degradation factors.
Our approach effectively handles both rain streaks and adversarial perturbation by transferring the robustness of the segmentation model to the image derain model.
As opposed to the commonly used Negative Adversarial Attack (NAA), we design the Auxiliary Mirror Attack (AMA) to introduce positive information prior to the training of the PEARL framework.
- Score: 42.911517493220664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In light of the significant progress made in the development and application
of semantic segmentation tasks, there has been increasing attention towards
improving the robustness of segmentation models against natural degradation
factors (e.g., rain streaks) or artificial attack factors (e.g., adversarial
attacks). However, most existing methods are designed to address a single
degradation factor and are tailored to specific application scenarios. In this
work, we present the first attempt to improve the robustness of semantic
segmentation tasks by simultaneously handling different types of degradation
factors. Specifically, we introduce the Preprocessing Enhanced Adversarial
Robust Learning (PEARL) framework based on the analysis of our proposed Naive
Adversarial Training (NAT) framework. Our approach effectively handles both
rain streaks and adversarial perturbation by transferring the robustness of the
segmentation model to the image derain model. Furthermore, as opposed to the
commonly used Negative Adversarial Attack (NAA), we design the Auxiliary Mirror
Attack (AMA) to introduce positive information prior to the training of the
PEARL framework, which improves defense capability and segmentation
performance. Our extensive experiments and ablation studies based on different
derain methods and segmentation models have demonstrated the significant
performance improvement of PEARL with AMA in defense against various
adversarial attacks and rain streaks while maintaining high generalization
performance across different datasets.
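To make the idea concrete, here is a minimal PyTorch sketch of what one PEARL-style training step could look like: a loss-ascent PGD attack stands in for the NAA, its loss-descent mirror stands in for the AMA, and the deraining network is updated so its output matches the AMA-enhanced clean image while remaining segmentable. All module names, losses, and hyper-parameters below are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of a PEARL-style training step (assumptions, not the
# authors' code): NAA = loss-ascent PGD on the rainy input, AMA = the
# mirrored loss-descent attack injecting a "positive" prior, and the
# deraining net learns to absorb both rain streaks and adversarial noise.
import torch
import torch.nn.functional as F

def pgd(seg_net, x, y, eps=8/255, alpha=2/255, steps=5, ascend=True):
    """PGD in the NAA (ascend) or AMA 'mirror' (descend) direction."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(seg_net(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        step = alpha * grad.sign() if ascend else -alpha * grad.sign()
        delta = (delta + step).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def pearl_step(derain_net, seg_net, opt, rainy, clean, labels):
    seg_net.eval()  # assumed frozen: its parameters are not in `opt`
    x_adv = pgd(seg_net, rainy, labels, ascend=True)   # NAA: worst-case rainy input
    x_ama = pgd(seg_net, clean, labels, ascend=False)  # AMA: loss-decreasing prior
    restored = derain_net(x_adv)
    # Restoration target is the AMA-enhanced clean image; the task loss keeps
    # the restored image segmentable, transferring robustness to derain_net.
    loss = F.mse_loss(restored, x_ama) \
         + F.cross_entropy(seg_net(restored), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```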
Related papers
- Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarially attacking various downstream models fine-tuned from the segment anything model (SAM).
To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z)
- Robust VAEs via Generating Process of Noise Augmented Data [9.366139389037489]
This paper introduces a novel framework that enhances robustness by regularizing the latent space divergence between original and noise-augmented data.
Our empirical evaluations demonstrate that this approach, termed Robust Augmented Variational Auto-ENcoder (RAVEN), yields superior performance in resisting adversarial inputs.
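As a rough illustration of such a latent-space regularizer, the sketch below assumes a Gaussian encoder that returns a mean and log-variance; the noise model and the closed-form KL term are our assumptions, not necessarily the paper's exact formulation.

```python
# Sketch of a RAVEN-style latent-divergence regularizer (assumptions:
# Gaussian encoder returning (mu, logvar); additive Gaussian input noise).
import torch

def latent_divergence(encoder, x, noise_std=0.1):
    mu, logvar = encoder(x)
    mu_n, logvar_n = encoder(x + noise_std * torch.randn_like(x))
    # Closed-form KL( N(mu_n, var_n) || N(mu, var) ) for diagonal Gaussians;
    # adding this term to the ELBO pulls noisy latents toward clean ones.
    var, var_n = logvar.exp(), logvar_n.exp()
    kl = 0.5 * (logvar - logvar_n + (var_n + (mu_n - mu).pow(2)) / var - 1.0)
    return kl.sum(dim=1).mean()
```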
arXiv Detail & Related papers (2024-07-26T09:55:34Z)
- Ensemble Adversarial Defense via Integration of Multiple Dispersed Low Curvature Models [7.8245455684263545]
In this work, we aim to enhance ensemble diversity by reducing attack transferability.
We identify second-order gradients, which depict the loss curvature, as a key factor in adversarial robustness.
We introduce a novel regularizer for training multiple, more diverse low-curvature network models.
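One plausible way to realize such a penalty, in the spirit of finite-difference curvature regularizers (e.g., CURE), is sketched below; the actual regularizer in the paper may differ.

```python
# Sketch of a curvature penalty via finite differences (an assumption in
# the spirit of the summary; not necessarily the paper's exact regularizer).
import torch
import torch.nn.functional as F

def curvature_penalty(model, x, y, h=1e-2):
    x = x.clone().requires_grad_(True)
    g = torch.autograd.grad(F.cross_entropy(model(x), y), x, create_graph=True)[0]
    # Probe the loss surface one small step along the normalized gradient;
    # a large change in the gradient indicates high curvature.
    d = g.detach()
    d = d / (d.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1))) + 1e-12)
    x2 = (x.detach() + h * d).requires_grad_(True)
    g2 = torch.autograd.grad(F.cross_entropy(model(x2), y), x2, create_graph=True)[0]
    return ((g2 - g) / h).flatten(1).norm(dim=1).mean()
```

In training, this term would be added to each ensemble member's task loss, e.g. `loss = ce + gamma * curvature_penalty(model, x, y)`.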
arXiv Detail & Related papers (2024-03-25T03:44:36Z)
- HC-Ref: Hierarchical Constrained Refinement for Robust Adversarial Training of GNNs [7.635985143883581]
Adversarial training, which has been shown to be one of the most effective defense mechanisms against adversarial attacks in computer vision, holds great promise for enhancing the robustness of GNNs.
We propose a hierarchical constraint refinement framework (HC-Ref) that enhances the anti-perturbation capabilities of GNNs and downstream classifiers separately.
arXiv Detail & Related papers (2023-12-08T07:32:56Z)
- Introducing Foundation Models as Surrogate Models: Advancing Towards More Practical Adversarial Attacks [15.882687207499373]
No-box adversarial attacks are becoming more practical and challenging for AI systems.
This paper recasts adversarial attack as a downstream task by introducing foundational models as surrogate models.
arXiv Detail & Related papers (2023-07-13T08:10:48Z)
- Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning [80.21709045433096]
A standard method in adversarial robustness assumes a framework for defending against adversarial samples crafted by minimally perturbing a clean sample.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves both invariant and sensitivity defense.
arXiv Detail & Related papers (2022-11-04T13:54:02Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
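As a toy illustration of "learning the optimizer", the sketch below replaces PGD's fixed sign step with an update predicted by a coordinate-wise LSTM; the architecture and the omitted meta-training loop are assumptions, not MAMA's actual design.

```python
# Sketch of a learned attack optimizer in the spirit of MAMA: an RNN maps
# the current gradient to an update direction instead of a fixed sign()
# step. Architecture and hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RNNOptimizer(nn.Module):
    def __init__(self, hidden=20):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden)   # applied coordinate-wise
        self.out = nn.Linear(hidden, 1)

    def forward(self, grad, state=None):
        flat = grad.reshape(-1, 1)           # each coordinate is one row
        h, c = self.cell(flat, state)
        return self.out(h).reshape(grad.shape), (h, c)

def learned_attack(optimizer_net, model, x, y, eps=8/255, steps=10):
    delta, state = torch.zeros_like(x, requires_grad=True), None
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        update, state = optimizer_net(grad, state)     # learned step
        delta = (delta + update).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).detach()  # meta-training of optimizer_net omitted
```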
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Pre-training also Transfers Non-Robustness [20.226917627173126]
In spite of its recognized contribution to generalization, pre-training also transfers the non-robustness from the pre-trained model into the fine-tuned model.
Results validate the effectiveness of the proposed approach in alleviating non-robustness while preserving generalization.
arXiv Detail & Related papers (2021-06-21T11:16:13Z)
- Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training [106.34722726264522]
A range of adversarial defense techniques have been proposed to mitigate the interference of adversarial noise.
Pre-processing methods may suffer from the robustness degradation effect.
A potential cause of this negative effect is that adversarial training examples are static and independent of the pre-processing model.
We propose a method called Joint Adversarial Training based Pre-processing (JATP) defense.
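The remedy suggested by that observation is to regenerate adversarial examples against the current composition of pre-processor and classifier at every step, so they track the evolving pre-processing model. A hedged sketch, with placeholder modules and losses:

```python
# Sketch of joint adversarial training for a pre-processing defense
# (inspired by the JATP summary; module names and losses are placeholders).
import torch
import torch.nn.functional as F

def joint_at_step(preproc, clf, opt, x, y, eps=8/255, alpha=2/255, steps=5):
    # Attack the *current* preproc+clf pipeline, not a static example set,
    # so the adversarial examples co-evolve with the pre-processing model.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(clf(preproc(x + delta)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps) \
                    .detach().requires_grad_(True)
    x_adv = (x + delta).detach()
    # Update the pre-processor to restore the input and keep it classifiable.
    out = preproc(x_adv)
    loss = F.mse_loss(out, x) + F.cross_entropy(clf(out), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```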
arXiv Detail & Related papers (2021-06-10T01:45:32Z)
- Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation [79.42338812621874]
Adversarial training is promising for improving the robustness of deep neural networks against adversarial perturbations.
We formulate a general adversarial training procedure that can perform decently on both adversarial and clean samples.
We propose a dynamic divide-and-conquer adversarial training (DDC-AT) strategy to enhance the defense effect.
arXiv Detail & Related papers (2020-03-14T05:06:49Z)