A Model-Based Derivative-Free Approach to Black-Box Adversarial
Examples: BOBYQA
- URL: http://arxiv.org/abs/2002.10349v1
- Date: Mon, 24 Feb 2020 16:23:09 GMT
- Title: A Model-Based Derivative-Free Approach to Black-Box Adversarial
Examples: BOBYQA
- Authors: Giuseppe Ughi, Vinayak Abrol, Jared Tanner
- Abstract summary: We show that model-based derivative-free optimisation algorithms can generate adversarial targeted misclassification of deep networks using fewer network queries than non-model-based methods.
We show that the number of network queries is less impacted by making the task more challenging, either through reducing the allowed $\ell^{\infty}$ perturbation energy or training the network with defences against adversarial misclassification.
- Score: 10.667476482648834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We demonstrate that model-based derivative-free optimisation algorithms can
generate adversarial targeted misclassification of deep networks using fewer
network queries than non-model-based methods. Specifically, we consider the
black-box setting, and show that the number of network queries is less
impacted by making the task more challenging either through reducing the
allowed $\ell^{\infty}$ perturbation energy or training the network with
defences against adversarial misclassification. We illustrate this by
contrasting the BOBYQA algorithm with the state-of-the-art model-free
adversarial targeted misclassification approaches based on genetic,
combinatorial, and direct-search algorithms. We observe that for high
$\ell^{\infty}$ energy perturbations on networks, the aforementioned simpler
model-free methods require the fewest queries. In contrast, the proposed BOBYQA
based method achieves state-of-the-art results when the perturbation energy
decreases, or if the network is trained against adversarial perturbations.
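To make the query-efficiency claim concrete, below is a minimal sketch of a targeted $\ell^{\infty}$-bounded attack driven by the open-source Py-BOBYQA solver. The `model_logits(x)` query function, the margin loss, the random pixel sub-domain, and all parameter values are illustrative assumptions (a simplification of the paper's block-wise scheme), not the authors' implementation.

```python
# Minimal sketch of a BOBYQA-driven targeted black-box attack, assuming the
# open-source Py-BOBYQA package (pip install Py-BOBYQA) and a hypothetical
# `model_logits(x)` function returning class logits from one forward query.
import numpy as np
import pybobyqa


def targeted_attack(x, target, model_logits, eps=0.1, n_pix=50,
                    max_queries=3000, seed=0):
    # Perturb only a random sub-domain of `n_pix` pixels so that BOBYQA's
    # interpolation model stays low-dimensional (the paper works over pixel
    # sub-domains; the random choice here is our simplification).
    rng = np.random.default_rng(seed)
    idx = rng.choice(x.size, size=n_pix, replace=False)

    def objfun(delta):
        x_adv = x.ravel().copy()
        x_adv[idx] = np.clip(x_adv[idx] + delta, 0.0, 1.0)
        logits = model_logits(x_adv.reshape(x.shape))  # one network query
        # Margin loss: negative once the target logit dominates all others.
        return float(np.max(np.delete(logits, target)) - logits[target])

    # The allowed l_inf perturbation energy becomes a simple box constraint.
    lower, upper = -eps * np.ones(n_pix), eps * np.ones(n_pix)
    soln = pybobyqa.solve(objfun, np.zeros(n_pix), bounds=(lower, upper),
                          rhobeg=eps / 2.0, maxfun=max_queries)

    x_adv = x.ravel().copy()
    x_adv[idx] = np.clip(x_adv[idx] + soln.x, 0.0, 1.0)
    return x_adv.reshape(x.shape)
```

Each objective evaluation costs exactly one forward query, so `maxfun` directly caps the query budget on which the paper's comparisons are based.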
Related papers
- A network-constrain Weibull AFT model for biomarkers discovery [0.0]
AFTNet is a network-constrained survival analysis method based on the Weibull accelerated failure time (AFT) model.
We present an efficient iterative computational algorithm based on the proximal gradient descent method.
arXiv Detail & Related papers (2024-02-28T11:12:53Z)
- Chaos Theory and Adversarial Robustness [0.0]
This paper uses ideas from Chaos Theory to explain, analyze, and quantify the degree to which neural networks are susceptible to or robust against adversarial attacks.
We present a new metric, the "susceptibility ratio," given by $\hat{\Psi}(h, \theta)$, which captures how greatly a model's output will be changed by perturbations to a given input.
arXiv Detail & Related papers (2022-10-20T03:39:44Z)
- Robustness Certificates for Implicit Neural Networks: A Mixed Monotone Contractive Approach [60.67748036747221]
Implicit neural networks offer competitive performance and reduced memory consumption.
However, they can remain brittle with respect to adversarial input perturbations.
This paper proposes a theoretical and computational framework for robustness verification of implicit neural networks.
arXiv Detail & Related papers (2021-12-10T03:08:55Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- Revisit Geophysical Imaging in A New View of Physics-informed Generative Adversarial Learning [2.12121796606941]
Full waveform inversion (FWI) produces high-resolution subsurface models.
FWI with a least-squares loss function suffers from drawbacks such as the local-minima problem.
Recent works relying on partial differential equations and neural networks show promising performance for two-dimensional FWI.
We propose an unsupervised learning paradigm that integrates the wave equation with a discriminator network to accurately estimate physically consistent models.
arXiv Detail & Related papers (2021-09-23T15:54:40Z)
- Online Adversarial Attacks [57.448101834579624]
We formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases.
We first rigorously analyze a deterministic variant of the online threat model.
We then propose a simple yet practical algorithm that yields a provably better competitive ratio for $k=2$ than the current best single-threshold algorithm.
arXiv Detail & Related papers (2021-03-02T20:36:04Z)
- An Empirical Study of Derivative-Free-Optimization Algorithms for Targeted Black-Box Attacks in Deep Neural Networks [8.368543987898732]
This paper considers four pre-existing state-of-the-art DFO-based algorithms along with the introduction of a new algorithm built on BOBYQA.
We compare these algorithms in a variety of settings according to the fraction of images that they successfully misclassify.
Experiments disclose how the likelihood of finding an adversarial example depends on both the algorithm used and the setting of the attack.
arXiv Detail & Related papers (2020-12-03T13:32:20Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Quantitative and qualitative analyses on several natural image datasets and practical systems confirm the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
- Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability [100.91186458516941]
We consider the blackbox transfer-based targeted adversarial attack threat model in the realm of deep neural network (DNN) image classifiers.
We design a flexible attack framework that allows for multi-layer perturbations and demonstrates state-of-the-art targeted transfer performance.
We analyze why the proposed methods outperform existing attack strategies and show an extension of the method to the case where limited queries to the blackbox model are allowed.
arXiv Detail & Related papers (2020-04-29T16:00:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.