Curvature Dynamic Black-box Attack: revisiting adversarial robustness via dynamic curvature estimation
- URL: http://arxiv.org/abs/2505.19194v2
- Date: Wed, 30 Jul 2025 17:06:45 GMT
- Title: Curvature Dynamic Black-box Attack: revisiting adversarial robustness via dynamic curvature estimation
- Authors: Peiran Sun
- Abstract summary: Curvature-based approaches have attracted attention because it is assumed that high curvature may give rise to a rough decision boundary. We propose a new query-efficient method, dynamic curvature estimation, to estimate the decision boundary curvature in a black-box setting.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks reveal the vulnerability of deep learning models. Over roughly a decade, countless attack and defense methods have been proposed, leading to robustified classifiers and a better understanding of models. Among these methods, curvature-based approaches have attracted attention because it is assumed that high curvature may give rise to a rough decision boundary. However, the most commonly used \textit{curvature} is the curvature of the loss function, scores, or other quantities internal to the model, as opposed to the decision boundary curvature, since the former can be formed relatively easily using second-order derivatives. In this paper, we propose a new query-efficient method, dynamic curvature estimation (DCE), to estimate the decision boundary curvature in a black-box setting. Our approach is based on CGBA, a black-box adversarial attack. By performing DCE on a wide range of classifiers, we discovered, statistically, a connection between decision boundary curvature and adversarial robustness. We also propose a new attack method, the curvature dynamic black-box attack (CDBA), with improved performance using the dynamically estimated curvature.
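The abstract does not spell out DCE's procedure, so the following is only a toy sketch of the general idea behind query-based boundary curvature estimation: locate nearby decision-boundary points with hard-label bisection, then fit a circle through them. The stand-in classifier and all function names are hypothetical, not the paper's method.

```python
import numpy as np

def classifier(x):
    # Stand-in black-box classifier: hard label 1 inside a circle of radius 2.
    return int(np.linalg.norm(x) < 2.0)

def bisect_to_boundary(x_in, x_out, queries=40):
    # Hard-label binary search along the segment between a point inside the
    # class region and one outside, homing in on the decision boundary.
    for _ in range(queries):
        mid = 0.5 * (x_in + x_out)
        if classifier(mid):
            x_in = mid
        else:
            x_out = mid
    return 0.5 * (x_in + x_out)

def curvature_from_three_points(p1, p2, p3):
    # Curvature of the circle through three planar points: k = 4*area/(a*b*c).
    a = np.linalg.norm(p2 - p3)
    b = np.linalg.norm(p1 - p3)
    c = np.linalg.norm(p1 - p2)
    u, v = p2 - p1, p3 - p1
    area = 0.5 * abs(u[0] * v[1] - u[1] * v[0])
    return 4.0 * area / (a * b * c)

origin = np.zeros(2)
# Probe three nearby directions and locate a boundary point along each.
boundary_pts = [bisect_to_boundary(origin,
                                   4.0 * np.array([np.cos(t), np.sin(t)]))
                for t in (0.0, 0.2, 0.4)]
k = curvature_from_three_points(*boundary_pts)
print(k)  # ≈ 0.5, the true curvature (1/R) of the radius-2 boundary
```

Only hard-label queries are used, which is what makes this flavor of estimation viable in the black-box setting the paper targets.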
Related papers
- On the Benefits of Accelerated Optimization in Robust and Private Estimation [2.209921757303168]
We study the advantages of accelerated gradient methods, specifically based on the Frank-Wolfe method and projected descent.
For the Frank-Wolfe method, our technique is based on a tailored iteration learning rate and a uniform lower bound on the gradient of the $\ell$-norm over the constraint set.
For accelerating projected descent, we use the popular variant based on Nesterov's momentum.
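Nesterov-accelerated projected descent, the variant mentioned above, can be sketched on a simple constrained least-squares problem. The problem instance and step-size choice are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    # Euclidean projection onto the l2 ball of the given radius.
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def accelerated_projected_descent(A, b, steps=200):
    # Projected gradient descent with Nesterov momentum for
    #   min ||Ax - b||^2   s.t.   ||x||_2 <= 1.
    L = 2.0 * np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x_prev = x = np.zeros(A.shape[1])
    for k in range(1, steps + 1):
        y = x + (k - 1) / (k + 2) * (x - x_prev)  # momentum extrapolation
        grad = 2.0 * A.T @ (A @ y - b)
        x_prev, x = x, project_l2_ball(y - grad / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x = accelerated_projected_descent(A, b)
print(np.linalg.norm(x))  # <= 1: every iterate stays feasible
```

The extrapolation point `y` is what distinguishes the accelerated variant from plain projected descent; the projection keeps each iterate in the constraint set.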
arXiv Detail & Related papers (2025-06-03T16:26:30Z)
- Hard-Label Black-Box Attacks on 3D Point Clouds [66.52447238776482]
We introduce a novel 3D attack method based on a new spectrum-aware decision boundary algorithm to generate high-quality adversarial samples.
Experiments demonstrate that our attack competitively outperforms existing white/black-box attackers in terms of attack performance and adversary quality.
arXiv Detail & Related papers (2024-11-30T09:05:02Z)
- Revisiting Edge Perturbation for Graph Neural Network in Graph Data Augmentation and Attack [58.440711902319855]
Edge perturbation is a method to modify graph structures.
It can be categorized into two veins based on its effects on the performance of graph neural networks (GNNs).
We propose a unified formulation and establish a clear boundary between two categories of edge perturbation methods.
arXiv Detail & Related papers (2024-03-10T15:50:04Z)
- CGBA: Curvature-aware Geometric Black-box Attack [39.63633212337113]
Decision-based black-box attacks often necessitate a large number of queries to craft an adversarial example.
We propose a novel query-efficient curvature-aware geometric decision-based black-box attack (CGBA).
We develop a new query-efficient variant, CGBA-H, that is adapted for the targeted attack.
arXiv Detail & Related papers (2023-08-06T17:18:04Z)
- Dynamic ensemble selection based on Deep Neural Network Uncertainty Estimation for Adversarial Robustness [7.158144011836533]
This work explores dynamic attributes at the model level through dynamic ensemble selection technology.
In the training phase, a Dirichlet distribution is applied as the prior of the sub-models' predictive distributions, and a diversity constraint in parameter space is introduced.
In the test phase, sub-models are dynamically selected for the final prediction based on the rank of their uncertainty values.
arXiv Detail & Related papers (2023-08-01T07:41:41Z)
- Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness [59.948529997062586]
It is unclear whether existing robust training methods effectively increase the margin for each vulnerable point during training.
We propose a continuous-time framework for quantifying the relative speed of the decision boundary with respect to each individual point.
We propose Dynamics-aware Robust Training (DyART), which encourages the decision boundary to engage in movement that prioritizes increasing smaller margins.
arXiv Detail & Related papers (2023-02-06T18:54:58Z)
- Safe Screening for Sparse Conditional Random Fields [13.563686294946745]
We propose a novel safe dynamic screening method to identify and remove irrelevant features during the training process.
Our method is also the first screening method for sparse CRFs and, more broadly, for structured prediction models.
Experimental results on both synthetic and real-world datasets demonstrate that the speedup gained by our method is significant.
arXiv Detail & Related papers (2021-11-27T18:38:57Z)
- Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks [36.24260738965947]
We propose a novel geometry-based approach called Tangent Attack (TA).
Tangent Attack identifies an optimal tangent point of a virtual hemisphere located on the decision boundary to reduce the distortion of the attack.
Experiments on the ImageNet and CIFAR-10 datasets demonstrate that our approach needs only a small number of queries to achieve low-magnitude distortion.
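TA's hemisphere construction lives in high dimensions; as a hedged illustration of the underlying geometry only, here is the classical 2D computation of the tangent points from an external point to a circle (the function and its setup are mine, not the paper's algorithm).

```python
import numpy as np

def tangent_points_2d(center, r, p):
    # Tangent points from an external point p to a circle (center, r) in 2D.
    d = p - center
    dist = np.linalg.norm(d)
    assert dist > r, "p must lie outside the circle"
    # In the right triangle (center, tangent point, p), the angle at the
    # center between center->p and center->tangent-point is arccos(r/dist).
    alpha = np.arccos(r / dist)
    base = np.arctan2(d[1], d[0])
    return [center + r * np.array([np.cos(base + s * alpha),
                                   np.sin(base + s * alpha)])
            for s in (+1.0, -1.0)]

# Unit circle at the origin, external point at (2, 0).
p = np.array([2.0, 0.0])
t1, t2 = tangent_points_2d(np.zeros(2), 1.0, p)
```

At a tangent point the line to `p` is perpendicular to the radius, which is the defining property the attack exploits when picking the point of minimum distortion.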
arXiv Detail & Related papers (2021-11-15T01:51:37Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- Gaussian MRF Covariance Modeling for Efficient Black-Box Adversarial Attacks [86.88061841975482]
We study the problem of generating adversarial examples in a black-box setting, where we only have access to a zeroth order oracle.
We use this setting to find fast one-step adversarial attacks, akin to a black-box version of the Fast Gradient Sign Method (FGSM).
We show that the method uses fewer queries and achieves higher attack success rates than the current state of the art.
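The paper's contribution is a query-efficient black-box analogue; as background, the white-box FGSM baseline it references can be sketched on a toy logistic victim model (the model and numbers are assumptions for illustration).

```python
import numpy as np

def fgsm(x, grad_wrt_x, eps=0.1):
    # Fast Gradient Sign Method: a single step of size eps along the sign
    # of the loss gradient with respect to the input.
    return x + eps * np.sign(grad_wrt_x)

# Toy logistic victim: p = sigmoid(w @ x), true label 1.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
p = 1.0 / (1.0 + np.exp(-w @ x))
loss = -np.log(p)             # cross-entropy loss for label 1
grad = (p - 1.0) * w          # analytic gradient of that loss w.r.t. x
x_adv = fgsm(x, grad, eps=0.1)
p_adv = 1.0 / (1.0 + np.exp(-w @ x_adv))
print(p_adv < p)  # True: one signed step lowers the true-class probability
```

In the black-box setting of the paper the gradient is not available, so the sign information has to be estimated from zeroth-order queries instead.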
arXiv Detail & Related papers (2020-10-08T18:36:51Z)
- Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability [100.91186458516941]
We consider the blackbox transfer-based targeted adversarial attack threat model in the realm of deep neural network (DNN) image classifiers.
We design a flexible attack framework that allows for multi-layer perturbations and demonstrates state-of-the-art targeted transfer performance.
We analyze why the proposed methods outperform existing attack strategies and show an extension of the method in the case when limited queries to the blackbox model are allowed.
arXiv Detail & Related papers (2020-04-29T16:00:13Z)
- Curvature Regularized Surface Reconstruction from Point Cloud [4.389913383268497]
We propose a variational functional and fast algorithms to reconstruct an implicit surface from point cloud data with a curvature constraint.
The proposed method shows robustness against noise, and recovers concave features and sharp corners better than models without the curvature constraint.
arXiv Detail & Related papers (2020-01-22T05:34:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences of its use.