Pareto Adversarial Robustness: Balancing Spatial Robustness and Sensitivity-based Robustness
- URL: http://arxiv.org/abs/2111.01996v3
- Date: Sun, 23 Jun 2024 19:26:55 GMT
- Title: Pareto Adversarial Robustness: Balancing Spatial Robustness and Sensitivity-based Robustness
- Authors: Ke Sun, Mingjie Li, Zhouchen Lin
- Abstract summary: We design strategies to achieve universal adversarial robustness.
To the best of our knowledge, we are the first to consider universal adversarial robustness via multi-objective optimization.
- Score: 53.4380239739108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial robustness, which primarily comprises sensitivity-based robustness and spatial robustness, plays an integral part in achieving robust generalization. In this paper, we endeavor to design strategies to achieve universal adversarial robustness. To this end, we first investigate the relatively less-explored realm of spatial robustness. We then integrate existing spatial robustness methods by incorporating both local and global spatial vulnerability into a unified spatial attack and adversarial training approach. Furthermore, we present a comprehensive relationship between natural accuracy, sensitivity-based robustness, and spatial robustness, supported by strong evidence from the perspective of robust representation. Crucially, to reconcile the mutual impacts of the various robustness components within one unified framework, we incorporate the Pareto criterion into the adversarial robustness analysis, yielding a novel strategy called Pareto Adversarial Training for achieving universal robustness. The resulting Pareto front, which delineates the set of optimal solutions, provides an optimal balance between natural accuracy and the various forms of adversarial robustness, shedding light on future solutions for universal robustness. To the best of our knowledge, we are the first to consider universal adversarial robustness via multi-objective optimization.
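The Pareto criterion in the abstract selects trade-offs that no other solution dominates on every objective. As a rough, framework-agnostic sketch (the function and the candidate scores below are hypothetical illustrations, not taken from the paper), a non-dominated filter over models scored on natural accuracy and two robustness objectives looks like:

```python
import numpy as np

def pareto_front(scores: np.ndarray) -> np.ndarray:
    """Return indices of non-dominated rows; every column is a score
    to maximize (e.g. natural accuracy, sensitivity-based robustness,
    spatial robustness)."""
    n = scores.shape[0]
    keep = []
    for i in range(n):
        dominated = False
        for j in range(n):
            # j dominates i if it is at least as good everywhere
            # and strictly better somewhere.
            if j != i and np.all(scores[j] >= scores[i]) and np.any(scores[j] > scores[i]):
                dominated = True
                break
        if not dominated:
            keep.append(i)
    return np.array(keep)

# Hypothetical (accuracy, sensitivity-robustness, spatial-robustness) triples.
models = np.array([
    [0.95, 0.40, 0.30],
    [0.88, 0.55, 0.50],
    [0.85, 0.50, 0.45],  # dominated by the second row
    [0.80, 0.60, 0.55],
])
front = pareto_front(models)  # rows 0, 1, 3 survive
```

Pareto Adversarial Training itself optimizes the objectives jointly during training; the filter above only illustrates how a Pareto front is extracted from already-evaluated candidates.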
Related papers
- Does simple trump complex? Comparing strategies for adversarial robustness in DNNs [3.6130723421895947]
Deep Neural Networks (DNNs) have shown substantial success in various applications but remain vulnerable to adversarial attacks. This study aims to identify and isolate the components of two different adversarial training techniques that contribute most to increased adversarial robustness.
arXiv Detail & Related papers (2025-08-25T13:33:38Z) - Adversarial Robustness for Unified Multi-Modal Encoders via Efficient Calibration [12.763688592842717]
We present the first comprehensive study of adversarial vulnerability in unified multi-modal encoders. Non-visual inputs, such as audio and point clouds, are especially fragile. Our method improves adversarial robustness by up to 47.3 percent at epsilon = 4/255.
arXiv Detail & Related papers (2025-05-17T08:26:04Z) - Benchmarking the Spatial Robustness of DNNs via Natural and Adversarial Localized Corruptions [49.546479320670464]
This paper introduces specialized metrics for benchmarking the spatial robustness of segmentation models.
We propose region-aware multi-attack adversarial analysis, a method that enables a deeper understanding of model robustness.
The results reveal that models respond to these two types of threats differently.
arXiv Detail & Related papers (2025-04-02T11:37:39Z) - Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume [17.198794644483026]
We propose a new metric termed adversarial hypervolume, assessing the robustness of deep learning models comprehensively over a range of perturbation intensities.
We adopt a novel training algorithm that enhances adversarial robustness uniformly across various perturbation intensities.
This research contributes a new measure of robustness and establishes a benchmark for assessing the resilience of current and future defensive models against adversarial threats.
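Aggregating robustness over a range of perturbation intensities, rather than at a single budget, can be illustrated with a simple area-under-curve computation (a simplification of the paper's hypervolume metric; the accuracy values below are made up):

```python
import numpy as np

def robustness_auc(epsilons, robust_acc):
    """Trapezoidal area under the robust-accuracy-vs-epsilon curve,
    normalized by the epsilon range so that 1.0 means fully robust
    at every tested intensity."""
    eps = np.asarray(epsilons, dtype=float)
    acc = np.asarray(robust_acc, dtype=float)
    area = np.sum((acc[1:] + acc[:-1]) / 2.0 * np.diff(eps))
    return float(area / (eps[-1] - eps[0]))

# Hypothetical robust accuracies at increasing L-infinity budgets.
eps = [0.0, 2 / 255, 4 / 255, 8 / 255]
acc = [0.94, 0.80, 0.62, 0.35]
score = robustness_auc(eps, acc)  # ~0.64 for these values
```

A model that only looks strong at one fixed epsilon can score poorly under such an aggregate, which is the motivation for evaluating across intensities.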
arXiv Detail & Related papers (2024-03-08T07:03:18Z) - On the Robustness of Aspect-based Sentiment Analysis: Rethinking Model, Data, and Training [109.9218185711916]
Aspect-based sentiment analysis (ABSA) aims at automatically inferring the specific sentiment polarities toward certain aspects of products or services behind social media texts or reviews.
We propose to enhance the ABSA robustness by systematically rethinking the bottlenecks from all possible angles, including model, data, and training.
arXiv Detail & Related papers (2023-04-19T11:07:43Z) - How many perturbations break this model? Evaluating robustness beyond adversarial accuracy [28.934863462633636]
We introduce adversarial sparsity, which quantifies how difficult it is to find a successful perturbation given both an input point and a constraint on the direction of the perturbation.
We show that sparsity provides valuable insight into neural networks in multiple ways.
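The direction-constrained notion behind adversarial sparsity can be illustrated on a toy linear classifier (my own toy setup, not the paper's estimator): sample random unit directions and measure what fraction of fixed-norm steps along them flips the decision.

```python
import numpy as np

def flip_fraction(w, b, x, budget, n_dirs=2000, seed=0):
    """Fraction of random unit directions d for which moving x by
    budget * d crosses the hyperplane w.x + b = 0 (a decision flip).
    A lower fraction means successful perturbations are 'sparser'."""
    rng = np.random.default_rng(seed)
    d = rng.normal(size=(n_dirs, len(x)))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    before = np.sign(w @ x + b)
    after = np.sign((x + budget * d) @ w + b)
    return float(np.mean(after != before))

w = np.array([1.0, 0.0])
x = np.array([0.5, 0.0])  # distance 0.5 from the boundary x1 = 0
frac_small = flip_fraction(w, 0.0, x, budget=0.4)  # budget < distance: 0.0
frac_large = flip_fraction(w, 0.0, x, budget=2.0)  # many directions flip
```

Two points equally far from the boundary can differ sharply in how many directions reach it, which is the extra information sparsity adds over plain adversarial accuracy.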
arXiv Detail & Related papers (2022-07-08T21:25:17Z) - Robust Single Image Dehazing Based on Consistent and Contrast-Assisted Reconstruction [95.5735805072852]
We propose a novel density-variational learning framework to improve the robustness of the image dehazing model.
Specifically, the dehazing network is optimized under the consistency-regularized framework.
Our method significantly surpasses the state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-29T08:11:04Z) - Balancing Robustness and Sensitivity using Feature Contrastive Learning [95.86909855412601]
Methods that promote robustness can hurt the model's sensitivity to rare or underrepresented patterns.
We propose Feature Contrastive Learning (FCL) that encourages a model to be more sensitive to the features that have higher contextual utility.
arXiv Detail & Related papers (2021-05-19T20:53:02Z) - Adversarial Robustness under Long-Tailed Distribution [93.50792075460336]
Adversarial robustness has attracted extensive studies recently by revealing the vulnerability and intrinsic characteristics of deep networks.
In this work we investigate the adversarial vulnerability as well as defense under long-tailed distributions.
We propose a clean yet effective framework, RoBal, which consists of two dedicated modules: a scale-invariant classifier and data re-balancing.
arXiv Detail & Related papers (2021-04-06T17:53:08Z) - Improving Ensemble Robustness by Collaboratively Promoting and Demoting Adversarial Robustness [19.8818435601131]
Ensemble-based adversarial training is a principled approach to achieve robustness against adversarial attacks.
We propose in this work a simple yet effective strategy to collaborate among committee models of an ensemble model.
Our proposed framework provides the flexibility to reduce the adversarial transferability as well as to promote the diversity of ensemble members.
arXiv Detail & Related papers (2020-09-21T04:54:38Z) - A general framework for defining and optimizing robustness [74.67016173858497]
We propose a rigorous and flexible framework for defining different types of robustness properties for classifiers.
Our concept is based on postulates that robustness of a classifier should be considered as a property that is independent of accuracy.
We develop a very general robustness framework that is applicable to any type of classification model.
arXiv Detail & Related papers (2020-06-19T13:24:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.