Online Adversarial Attacks
- URL: http://arxiv.org/abs/2103.02014v1
- Date: Tue, 2 Mar 2021 20:36:04 GMT
- Title: Online Adversarial Attacks
- Authors: Andjela Mladenovic, Avishek Joey Bose, Hugo Berard, William L.
Hamilton, Simon Lacoste-Julien, Pascal Vincent, Gauthier Gidel
- Abstract summary: We formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases.
We first rigorously analyze a deterministic variant of the online threat model.
We then propose \algoname, a simple yet practical algorithm yielding a provably better competitive ratio for $k=2$ than the current best single-threshold algorithm.
- Score: 57.448101834579624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks expose important vulnerabilities of deep learning models,
yet little attention has been paid to settings where data arrives as a stream.
In this paper, we formalize the online adversarial attack problem, emphasizing
two key elements found in real-world use-cases: attackers must operate under
partial knowledge of the target model, and the decisions made by the attacker
are irrevocable since they operate on a transient data stream. We first
rigorously analyze a deterministic variant of the online threat model by
drawing parallels to the well-studied $k$-secretary problem and propose
\algoname, a simple yet practical algorithm yielding a provably better
competitive ratio for $k=2$ than the current best single-threshold algorithm.
We also introduce the stochastic $k$-secretary problem -- effectively reducing
online black-box attacks to a $k$-secretary problem under noise -- and prove
theoretical bounds on the competitive ratios of any online algorithm
adapted to this setting. Finally, we complement our theoretical results by
conducting a systematic suite of experiments on MNIST and CIFAR-10 with both
vanilla and robust classifiers, revealing that, by leveraging online secretary
algorithms, like \algoname, we can get an online attack success rate close to
the one achieved by the optimal offline solution.
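To make the online threat model concrete, the sketch below implements the classical single-threshold baseline for $k$-secretary-style selection that the paper compares against: observe an initial fraction of the stream without acting, fix a threshold from that sample, then irrevocably attack any later item that beats it, up to the budget $k$. This is a minimal illustration of the baseline, not the paper's \algoname; the stream of attack values and its length are hypothetical.
```python
import math
import random

def single_threshold_online_selection(values, k):
    """Classical single-threshold baseline for k-secretary-style selection.

    Observe the first ~n/e items without selecting, set the threshold to the
    best value seen during that sampling phase, then select (i.e., spend an
    attack budget on) any later item exceeding the threshold, up to k picks.
    """
    n = len(values)
    sample_size = max(1, int(n / math.e))
    threshold = max(values[:sample_size])

    picked = []
    for t in range(sample_size, n):
        if len(picked) == k:
            break
        if values[t] > threshold:
            picked.append(t)  # irrevocable decision: attack item t now
    return picked

# Hypothetical stream of "attack values" (e.g., an estimate, under partial
# knowledge of the target model, of how promising each incoming input is
# to attack); higher means a better item to spend budget on.
stream = [random.random() for _ in range(100)]
print(single_threshold_online_selection(stream, k=2))
```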
Related papers
- Discriminative Adversarial Unlearning [40.30974185546541]
We introduce a novel machine unlearning framework founded upon the established principles of the min-max optimization paradigm.
We capitalize on the capabilities of strong Membership Inference Attacks (MIA) to facilitate the unlearning of specific samples from a trained model.
Our proposed algorithm closely approximates the ideal benchmark of retraining from scratch for both random sample forgetting and class-wise forgetting schemes.
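One generic way to write a min-max unlearning objective of this kind is shown below; it is only an illustration of the paradigm, and the paper's actual losses, attacker parameterization, and weighting may differ. The model parameters $\theta$ trade off utility on the retained data against the advantage of the strongest membership-inference attacker $\phi$ on the forget set:
$$\min_{\theta}\Big[\,\mathcal{L}_{\text{retain}}(\theta)\;+\;\lambda\,\max_{\phi}\,\mathcal{A}_{\text{MIA}}\big(\phi;\,\theta, D_{\text{forget}}\big)\Big]$$
where $\lambda$ is a hypothetical trade-off weight.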
arXiv Detail & Related papers (2024-02-10T03:04:57Z)
- Wasserstein distributional robustness of neural networks [9.79503506460041]
Deep neural networks are known to be vulnerable to adversarial attacks (AA).
For an image recognition task, this means that a small perturbation of the original can result in the image being misclassified.
We re-cast the problem using techniques of Wasserstein distributionally robust optimization (DRO) and obtain novel contributions.
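For reference, the generic Wasserstein DRO training objective replaces the expected loss under the data distribution $P$ with the worst case over a Wasserstein ball of radius $\delta$ around it (the paper's exact formulation and sensitivity analysis may differ):
$$\inf_{\theta}\;\sup_{Q:\,W(Q,P)\le\delta}\;\mathbb{E}_{(x,y)\sim Q}\big[\ell(f_\theta(x), y)\big]$$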
arXiv Detail & Related papers (2023-06-16T13:41:24Z)
- Adversarial Attacks on Online Learning to Rank with Click Feedback [18.614785011987756]
Online learning to rank (OLTR) is a sequential decision-making problem where a learning agent selects an ordered list of items and receives feedback through user clicks.
This paper studies attack strategies against multiple variants of OLTR.
We propose a general attack strategy against any algorithm under the general click model.
arXiv Detail & Related papers (2023-05-26T16:28:26Z)
- Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA).
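As a minimal illustration of the primitive such attacks rely on (not the paper's optimization procedure for choosing which bits to flip), the sketch below flips a single bit of an 8-bit quantized weight; the weight values are made up:
```python
import numpy as np

def flip_bit(weights: np.ndarray, index: int, bit: int) -> np.ndarray:
    """Return a copy of an int8 weight array with one bit flipped
    (bit 0 = least significant), mimicking a memory fault."""
    out = weights.copy()
    pattern = out.view(np.uint8)          # reinterpret the raw bit patterns
    pattern[index] ^= np.uint8(1 << bit)  # XOR toggles the chosen bit in place
    return out

# Hypothetical quantized weights; flipping the high-order bit of one entry
# changes its value drastically, which is why a handful of bit flips can
# already redirect a model's prediction on a targeted sample.
w = np.array([23, -7, 105], dtype=np.int8)
print(flip_bit(w, index=0, bit=7))   # -> [-105   -7  105]
```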
arXiv Detail & Related papers (2022-07-25T03:24:58Z)
- Adversarial Deep Learning for Online Resource Allocation [12.118811903399951]
We use deep neural networks to learn an online algorithm for a resource allocation and pricing problem from scratch.
Our work is the first to use deep neural networks to design an online algorithm from the perspective of worst-case performance guarantees.
arXiv Detail & Related papers (2021-11-19T15:48:43Z)
- Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts additionally bound the $\ell_\infty$ norm of the perturbation magnitudes.
We propose a homotopy algorithm to tackle the sparsity and perturbation constraints in a unified framework.
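Written out, a generic sparse-and-imperceptible attack of this kind looks like the following constrained problem (a sketch only; the paper's exact objective, norms, and constants may differ), where $\kappa$ bounds how many pixels may change and $\epsilon$ bounds how much each may change:
$$\max_{\delta}\;\ell\big(f(x+\delta),\,y\big)\quad\text{s.t.}\quad \|\delta\|_0\le\kappa,\;\;\|\delta\|_\infty\le\epsilon$$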
arXiv Detail & Related papers (2021-06-10T20:11:36Z)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
We formulate the attack as a binary integer programming (BIP) problem and, by utilizing the latest techniques in integer programming, equivalently reformulate it as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z)
- Composite Adversarial Attacks [57.293211764569996]
An adversarial attack is a technique for deceiving Machine Learning (ML) models.
In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms.
CAA beats 10 top attackers on 11 diverse defenses with less elapsed time.
arXiv Detail & Related papers (2020-12-10T03:21:16Z)
- A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z)