An Empirical Study of Derivative-Free-Optimization Algorithms for
Targeted Black-Box Attacks in Deep Neural Networks
- URL: http://arxiv.org/abs/2012.01901v1
- Date: Thu, 3 Dec 2020 13:32:20 GMT
- Title: An Empirical Study of Derivative-Free-Optimization Algorithms for
Targeted Black-Box Attacks in Deep Neural Networks
- Authors: Giuseppe Ughi, Vinayak Abrol, Jared Tanner
- Abstract summary: This paper considers four pre-existing state-of-the-art DFO-based algorithms along with the introduction of a new algorithm built on BOBYQA.
We compare these algorithms in a variety of settings according to the fraction of images that they successfully misclassify.
The experiments reveal how the likelihood of finding an adversarial example depends on both the algorithm used and the setting of the attack.
- Score: 8.368543987898732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We perform a comprehensive study on the performance of derivative-free
optimization (DFO) algorithms for the generation of targeted black-box
adversarial attacks on Deep Neural Network (DNN) classifiers assuming the
perturbation energy is bounded by an $\ell_\infty$ constraint and the number of
queries to the network is limited. This paper considers four pre-existing
state-of-the-art DFO-based algorithms along with the introduction of a new
algorithm built on BOBYQA, a model-based DFO method. We compare these
algorithms in a variety of settings according to the fraction of images that
they successfully misclassify given a maximum number of queries to the DNN.
The experiments reveal how the likelihood of finding an adversarial example
depends on both the algorithm used and the setting of the attack: algorithms
that limit the search for adversarial examples to the vertices of the $\ell_\infty$
constraint work particularly well against networks without structural defenses, while the
presented BOBYQA-based algorithm performs better for especially small perturbation
energies. This variance in performance highlights the importance of new
algorithms being compared to the state-of-the-art in a variety of settings, and
the effectiveness of adversarial defenses being tested using as wide a range of
algorithms as possible.
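To make the attack setting concrete, here is a minimal sketch of a targeted black-box attack that searches only over the vertices of the $\ell_\infty$ ball, in the spirit of the vertex-restricted algorithms discussed above. The logistic "network", block size, and budget are illustrative stand-ins, not the models or algorithms evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the black-box DNN: returns class logits only.
W = rng.normal(size=(10, 784))
def query_logits(x):
    return W @ x  # in practice, each call is one query to the network

def targeted_vertex_attack(x0, target, eps=0.05, max_queries=2000, block=32):
    """Random search restricted to vertices of the l_inf ball: every pixel
    perturbation is exactly +eps or -eps. Flip the signs of a random block
    of coordinates and keep the flip if the targeted margin loss drops."""
    signs = rng.choice([-1.0, 1.0], size=x0.shape)
    def loss(s):
        z = query_logits(np.clip(x0 + eps * s, 0.0, 1.0))
        return np.max(np.delete(z, target)) - z[target]  # <= 0 means success
    best, queries = loss(signs), 1
    while queries < max_queries and best > 0:
        idx = rng.choice(x0.size, size=block, replace=False)
        cand = signs.copy()
        cand[idx] *= -1.0
        cand_loss = loss(cand)
        queries += 1
        if cand_loss < best:
            signs, best = cand, cand_loss
    return np.clip(x0 + eps * signs, 0.0, 1.0), best <= 0, queries

x_adv, success, used = targeted_vertex_attack(rng.uniform(size=784), target=3)
```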
Related papers
- Indexed Minimum Empirical Divergence-Based Algorithms for Linear Bandits [55.938644481736446]
Indexed Minimum Empirical Divergence (IMED) is a highly effective approach to the multi-armed bandit problem.
It has been observed to empirically outperform UCB-based algorithms and Thompson Sampling.
We present novel linear versions of the IMED algorithm, which we call the family of LinIMED algorithms.
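For context, the classical (non-linear) IMED rule can be written in a few lines; the sketch below follows the standard index for Bernoulli bandits (play the arm minimizing the index) and is an illustration of the underlying idea, not the LinIMED algorithms of the paper.

```python
import numpy as np

def bernoulli_kl(p, q, eps=1e-12):
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def imed_pick(pulls, rewards):
    """IMED: play argmin_a N_a * KL(mu_a, mu_max) + log(N_a)."""
    mu = rewards / pulls
    index = pulls * bernoulli_kl(mu, mu.max()) + np.log(pulls)
    return int(np.argmin(index))

rng = np.random.default_rng(1)
true_means = np.array([0.3, 0.5, 0.7])  # hypothetical arm means
pulls = np.ones(3)                      # initialize with one pull per arm
rewards = rng.binomial(1, true_means).astype(float)
for _ in range(5000):
    a = imed_pick(pulls, rewards)
    rewards[a] += rng.binomial(1, true_means[a])
    pulls[a] += 1
```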
arXiv Detail & Related papers (2024-05-24T04:11:58Z)
- Unfolded proximal neural networks for robust image Gaussian denoising [7.018591019975253]
We propose a unified framework to build PNNs for the Gaussian denoising task, based on both the dual-FB and the primal-dual Chambolle-Pock algorithms.
We also show that accelerated versions of these algorithms enable skip connections in the associated NN layers.
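As a rough illustration of the unrolling idea (not the paper's dual-FB or Chambolle-Pock PNN construction), the sketch below unrolls a plain forward-backward (proximal-gradient) iteration for a simple denoising objective; in a trained PNN, the per-layer step size and regularization weight would be learned.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_fb_denoiser(y, n_layers=10, step=0.5, lam=0.1):
    """Each 'layer' is one forward-backward iteration for
    min_x 0.5*||x - y||^2 + lam*||x||_1: a gradient step on the data
    term followed by the l1 proximal map (soft-thresholding)."""
    x = y.copy()
    for _ in range(n_layers):
        x = soft_threshold(x - step * (x - y), step * lam)
    return x

rng = np.random.default_rng(2)
clean = np.zeros(64); clean[::8] = 1.0        # sparse toy signal
noisy = clean + 0.1 * rng.normal(size=64)     # additive Gaussian noise
denoised = unrolled_fb_denoiser(noisy)
```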
arXiv Detail & Related papers (2023-08-06T15:32:16Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP) for optimization.
Unlike FF, our framework directly outputs a label distribution at each cascaded block, so no additional negative samples need to be generated.
Each block can be trained independently, so the framework is easily deployed on parallel acceleration systems.
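A schematic of the block-local training idea, assuming linear blocks, SGD, and toy sizes as placeholders (the paper uses convolutional blocks):

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """One cascaded block: a feature body plus its own label head."""
    def __init__(self, d_in, d_hid, n_cls):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.head = nn.Linear(d_hid, n_cls)
    def forward(self, x):
        h = self.body(x)
        return h, self.head(h)          # features for next block + logits

torch.manual_seed(0)
blocks = [Block(16, 32, 3), Block(32, 32, 3)]
opts = [torch.optim.SGD(b.parameters(), lr=0.1) for b in blocks]
ce = nn.CrossEntropyLoss()
x, y = torch.randn(64, 16), torch.randint(0, 3, (64,))

feats = x
for block, opt in zip(blocks, opts):    # each block trains on its own loss
    h, logits = block(feats)
    loss = ce(logits, y)
    opt.zero_grad(); loss.backward(); opt.step()
    feats = h.detach()                  # detach: no gradient crosses blocks
```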
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select the presumably best algorithm from a fixed set of candidates.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
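To illustrate the censoring issue, here is a purely toy sketch: a plain UCB-style bandit that only ever observes runtimes capped at a cutoff. The runtime distributions, cutoff, and the vanilla index rule are assumptions for illustration, not the paper's adapted algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T, cutoff = 3, 2000, 1.0            # algorithms, rounds, censoring cap
true_rate = np.array([1.0, 2.0, 4.0])  # hypothetical runtime rates
n = np.zeros(K); s = np.zeros(K)       # pulls and summed censored losses

for t in range(1, T + 1):
    if t <= K:
        a = t - 1                      # play each algorithm once
    else:
        mean = s / n
        bonus = cutoff * np.sqrt(2 * np.log(t) / n)  # losses in [0, cutoff]
        a = int(np.argmin(mean - bonus))             # optimistic selection
    runtime = rng.exponential(1 / true_rate[a])
    s[a] += min(runtime, cutoff)       # only the censored runtime is seen
    n[a] += 1
```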
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Identifying Co-Adaptation of Algorithmic and Implementational Innovations in Deep Reinforcement Learning: A Taxonomy and Case Study of Inference-based Algorithms [15.338931971492288]
We focus on a series of inference-based actor-critic algorithms to decouple their algorithmic innovations and implementation decisions.
We identify substantial performance drops whenever implementation details are mismatched for algorithmic choices.
Results show which implementation details are co-adapted and co-evolved with algorithms.
arXiv Detail & Related papers (2021-03-31T17:55:20Z)
- Benchmarking Simulation-Based Inference [5.3898004059026325]
Recent advances in probabilistic modelling have led to a large number of simulation-based inference algorithms which do not require numerical evaluation of likelihoods.
We provide a benchmark with inference tasks and suitable performance metrics, with an initial selection of algorithms.
We found that the choice of performance metric is critical, that even state-of-the-art algorithms have substantial room for improvement, and that sequential estimation improves sample efficiency.
arXiv Detail & Related papers (2021-01-12T18:31:22Z)
- A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
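A much-simplified sketch of the black-box setting: perturb a single point within an $\ell_\infty$ ball until a fixed, already-fitted clustering assigns it to a different cluster. The data, radius, and random-search loop are illustrative assumptions; the paper's actual attack crafts poisoning perturbations far more carefully.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.4, (50, 2)), rng.normal(1.5, 0.4, (50, 2))])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

def flip_assignment(x, eps=0.5, trials=500):
    """Random black-box search within an l_inf ball of radius eps for a
    perturbation that changes the cluster km.predict assigns to x."""
    base = km.predict(x[None])[0]      # only cluster labels are observed
    for _ in range(trials):
        x_adv = x + rng.uniform(-eps, eps, size=x.shape)
        if km.predict(x_adv[None])[0] != base:
            return x_adv, True
    return x, False

x_adv, success = flip_assignment(X[0])
```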
arXiv Detail & Related papers (2020-09-09T18:19:31Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed, utilizing the Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and a low false-alarm rate.
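A minimal sketch of BO-driven hyperparameter tuning for an anomaly detector, assuming scikit-optimize's gp_minimize and an IsolationForest on synthetic data in place of the paper's framework and the ISCX 2012 dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score
from skopt import gp_minimize

# Synthetic stand-in for an intrusion-detection dataset (not ISCX 2012).
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.95],
                           random_state=0)  # y=1 marks the rare anomalies

def objective(params):
    n_estimators, contamination = params
    model = IsolationForest(n_estimators=int(n_estimators),
                            contamination=contamination,
                            random_state=0).fit(X)
    scores = -model.score_samples(X)   # higher = more anomalous
    return -roc_auc_score(y, scores)   # gp_minimize minimizes

result = gp_minimize(objective, [(50, 300), (0.01, 0.2)],
                     n_calls=20, random_state=0)
```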
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
- A Model-Based Derivative-Free Approach to Black-Box Adversarial Examples: BOBYQA [10.667476482648834]
We show that model-based derivative free optimisation algorithms can generate adversarial targeted misclassification of deep networks using fewer network queries than non-model-based methods.
We show that the number of network queries is less impacted by making the task more challenging, either through reducing the allowed $\ell_\infty$ perturbation energy or training the network with defences against adversarial misclassification.
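The model-based approach can be sketched with the open-source Py-BOBYQA solver; the two-class logistic "network" below is a stand-in for the DNN, and all hyperparameters are illustrative, not the paper's attack.

```python
import numpy as np
import pybobyqa  # Py-BOBYQA: model-based derivative-free solver

rng = np.random.default_rng(4)
W = rng.normal(size=(10, 64))          # stand-in logits for a black-box DNN
x0, target, eps = rng.uniform(size=64), 7, 0.1

def attack_loss(delta):
    """Targeted margin loss; each evaluation costs one network query."""
    z = W @ np.clip(x0 + delta, 0.0, 1.0)
    return float(np.max(np.delete(z, target)) - z[target])

# BOBYQA builds local quadratic models of the loss from queries alone and
# enforces the l_inf constraint via simple bounds on the perturbation.
lower, upper = -eps * np.ones(64), eps * np.ones(64)
soln = pybobyqa.solve(attack_loss, np.zeros(64), bounds=(lower, upper),
                      maxfun=300, seek_global_minimum=True)
x_adv = np.clip(x0 + soln.x, 0.0, 1.0)
```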
arXiv Detail & Related papers (2020-02-24T16:23:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.