A black-box adversarial attack for poisoning clustering
- URL: http://arxiv.org/abs/2009.05474v4
- Date: Wed, 10 Nov 2021 15:34:58 GMT
- Title: A black-box adversarial attack for poisoning clustering
- Authors: Antonio Emanuele Cinà, Alessandro Torcinovich, Marcello Pelillo
- Abstract summary: We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
- Score: 78.19784577498031
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Clustering algorithms play a fundamental role as tools in decision-making and
sensitive automation processes. Due to the widespread use of these applications,
a robustness analysis of this family of algorithms against adversarial noise
has become imperative. To the best of our knowledge, however, only a few works
have currently addressed this problem. In an attempt to fill this gap, in this
work, we propose a black-box adversarial attack for crafting adversarial
samples to test the robustness of clustering algorithms. We formulate the
problem as a constrained minimization program, general in its structure and
customizable by the attacker according to her capability constraints. We do not
assume any information about the internal structure of the victim clustering
algorithm, and we allow the attacker to query it as a service only. In the
absence of any derivative information, we perform the optimization with a
custom approach inspired by the Abstract Genetic Algorithm (AGA). In the
experimental part, we demonstrate the sensitivity of different single and
ensemble clustering algorithms to our crafted adversarial samples in
different scenarios. Furthermore, we compare our algorithm with a
state-of-the-art approach and show that it matches or even outperforms its
performance. Finally, to highlight the general nature of the
generated noise, we show that our attacks are transferable even against
supervised algorithms such as SVMs, random forests, and neural networks.
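As a concrete illustration of the query-only threat model described above, the sketch below evolves an l_inf-bounded perturbation of a handful of controlled samples with a toy genetic loop, using only the labels returned by the victim. This is a minimal sketch, not the authors' AGA-based optimizer; scikit-learn's KMeans stands in for the clustering service, and the population size, mutation scale, and budget eps are illustrative assumptions.

```python
# Minimal sketch of a query-only poisoning loop (NOT the authors' exact
# AGA-based optimizer): a toy genetic algorithm evolves an l_inf-bounded
# perturbation of a few controlled samples so that the victim clustering,
# queried as a black box, diverges from the clean partition.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X, _ = make_blobs(n_samples=200, centers=3, random_state=0)

def cluster_service(data):
    """Black-box oracle: the attacker only sees the returned labels."""
    return KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)

clean_labels = cluster_service(X)
targets = rng.choice(len(X), size=10, replace=False)  # samples the attacker controls
eps = 1.5                                             # capability (l_inf) budget

def fitness(delta):
    """Lower agreement with the clean clustering = better attack."""
    X_poisoned = X.copy()
    X_poisoned[targets] += delta
    return -adjusted_rand_score(clean_labels, cluster_service(X_poisoned))

# Toy genetic loop: keep the best half, mutate it, repeat.
pop = [rng.uniform(-eps, eps, (len(targets), X.shape[1])) for _ in range(20)]
for _ in range(15):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = [np.clip(p + rng.normal(0, 0.2, p.shape), -eps, eps) for p in parents]
    pop = parents + children

print("disruption (negative ARI):", fitness(max(pop, key=fitness)))
```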
Related papers
- Adversarial Training Should Be Cast as a Non-Zero-Sum Game [121.95628660889628]
The two-player zero-sum paradigm of adversarial training has not engendered sufficient levels of robustness.
We show that the surrogate-based relaxation commonly used in adversarial training algorithms voids all guarantees on robustness.
A novel non-zero-sum bilevel formulation of adversarial training yields a framework that matches and in some cases outperforms state-of-the-art attacks.
arXiv Detail & Related papers (2023-06-19T16:00:48Z)
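For context, here is a hedged PyTorch sketch of the standard surrogate-based zero-sum step this paper critiques: the inner player ascends the cross-entropy surrogate (not the true 0-1 loss) within an l_inf ball via PGD, and the outer player descends the same surrogate. The eps, alpha, and step counts are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the standard zero-sum, surrogate-based adversarial
# training step the paper critiques: both players optimize the same
# cross-entropy surrogate rather than the true classification error.
import torch
import torch.nn.functional as F

def pgd_perturbation(model, x, y, eps=0.03, alpha=0.01, steps=10):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)   # surrogate, not 0-1 loss
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()        # inner-max ascent step
            delta.clamp_(-eps, eps)                   # project onto l_inf ball
        delta.grad.zero_()
    return delta.detach()

def adversarial_training_step(model, optimizer, x, y):
    delta = pgd_perturbation(model, x, y)
    optimizer.zero_grad()                             # clear grads from the attack
    F.cross_entropy(model(x + delta), y).backward()   # outer-min, same surrogate
    optimizer.step()
```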
- Fairness Degrading Adversarial Attacks Against Clustering Algorithms [35.40427659749882]
We propose a fairness degrading attack algorithm for k-median clustering.
We find that the addition of the generated adversarial samples can lead to significantly lower fairness values.
arXiv Detail & Related papers (2021-10-22T19:10:27Z)
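The "balance" fairness notion such attacks degrade can be made concrete with a short sketch: for each cluster take the smallest protected-group share, then take the minimum over clusters. This is one common variant of the metric; the adversarial placement below is a random stand-in (the paper optimizes it), and KMeans substitutes for k-median purely for availability in scikit-learn.

```python
# Hedged sketch of a balance-style fairness metric and how added points can
# lower it after re-clustering. Placement is random here, not optimized.
import numpy as np
from sklearn.cluster import KMeans

def balance(labels, groups):
    """min over clusters of the smallest protected-group share in it."""
    vals = []
    for c in np.unique(labels):
        members = groups[labels == c]
        shares = [np.mean(members == g) for g in np.unique(groups)]
        vals.append(min(shares))
    return min(vals)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
groups = rng.integers(0, 2, size=200)          # binary protected attribute

before = balance(KMeans(3, n_init=10).fit_predict(X), groups)
# Hypothetical adversarial points (an attack would optimize their placement).
X_adv = np.vstack([X, rng.normal(loc=4.0, size=(10, 2))])
g_adv = np.concatenate([groups, np.zeros(10, dtype=int)])
after = balance(KMeans(3, n_init=10).fit_predict(X_adv), g_adv)
print("balance before:", before, "after:", after)
```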
- Identification of Attack-Specific Signatures in Adversarial Examples [62.17639067715379]
We show that different attack algorithms produce adversarial examples which are distinct not only in their effectiveness but also in how they qualitatively affect their victims.
Our findings suggest that prospective adversarial attacks should be compared not only via their success rates at fooling models but also via deeper downstream effects they have on victims.
arXiv Detail & Related papers (2021-10-13T15:40:48Z)
- Adversarial Robustness of Streaming Algorithms through Importance Sampling [29.957317489789745]
We introduce adversarially robust streaming algorithms for central machine learning and algorithmic tasks.
Our results are based on the simple but powerful observation that many importance-sampling-based algorithms give rise to adversarial robustness.
We empirically confirm the robustness of our algorithms on various adversarial attacks and demonstrate that by contrast, some common existing algorithms are not robust.
arXiv Detail & Related papers (2021-06-28T19:24:11Z)
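A minimal sketch of the importance-sampling pattern behind this observation, under the simplifying assumption that the stream's total mass is known or estimated in a first pass: each item's inclusion probability depends only on its own magnitude, and reweighting by 1/p keeps the estimate unbiased, which is part of the intuition for why such summaries are harder for an adaptive adversary to exploit.

```python
# Hedged sketch: keep each stream item independently with probability
# proportional to its contribution, reweight by 1/p to stay unbiased.
import random

def importance_sample_sum(stream, budget_scale=0.5):
    kept = []                                 # (value, weight) pairs
    total_mass = sum(abs(x) for x in stream)  # assumed known/estimable
    avg_mass = total_mass / len(stream)
    for x in stream:
        p = min(1.0, budget_scale * abs(x) / avg_mass)
        if random.random() < p:
            kept.append((x, 1.0 / p))         # inverse-probability weight
    return sum(x * w for x, w in kept), kept

stream = [random.gauss(0, 1) for _ in range(10_000)]
est, summary = importance_sample_sum(stream)
print("true:", round(sum(stream), 2), "estimate:", round(est, 2), "kept:", len(summary))
```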
- Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts additionally bound the perturbation magnitudes with an l_infty constraint.
We propose a homotopy algorithm to jointly tackle the sparsity and the perturbation bound in one unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z)
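The joint constraint these attacks target can be made concrete. The sketch below is not the homotopy algorithm itself, just a projection onto the set of perturbations that are both sparse (||d||_0 <= k) and imperceptible (||d||_inf <= eps): clip magnitudes to eps, then keep only the k largest-magnitude coordinates.

```python
# Hedged sketch of the joint sparsity + l_inf constraint set (not the
# paper's homotopy solver): clip to the l_inf box, then zero out all but
# the k coordinates with the largest magnitude.
import numpy as np

def project_sparse_linf(delta, k, eps):
    d = np.clip(delta, -eps, eps)                  # l_inf box
    if np.count_nonzero(d) > k:
        idx = np.argsort(np.abs(d).ravel())[:-k]   # all but top-k magnitudes
        d.ravel()[idx] = 0.0                       # enforce ||d||_0 <= k
    return d

delta = np.random.default_rng(2).normal(0, 0.1, size=(8, 8))
proj = project_sparse_linf(delta, k=5, eps=0.05)
print(np.count_nonzero(proj), np.abs(proj).max())
```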
- Automated Decision-based Adversarial Attacks [48.01183253407982]
We consider the practical and challenging decision-based black-box adversarial setting.
Under this setting, the attacker can only acquire the final classification labels by querying the target model.
We propose to automatically discover decision-based adversarial attack algorithms.
arXiv Detail & Related papers (2021-05-09T13:15:10Z)
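A hedged illustration of the decision-based setting (not the attacks the paper discovers): with label-only access, a standard primitive is to start from an input that is already classified differently and binary-search along the line back to the original, keeping the closest point that still flips the label. The toy classifier and initialization below are assumptions for illustration.

```python
# Hedged sketch of a label-only refinement primitive in the decision-based
# black-box setting: binary search between the original input and a point
# that is already adversarial, using only predicted labels.
import numpy as np

def label_only_refine(predict_label, x, x_adv_init, true_label, steps=30):
    lo, hi = 0.0, 1.0                 # mixing weight on the adversarial point
    for _ in range(steps):
        mid = (lo + hi) / 2
        cand = (1 - mid) * x + mid * x_adv_init
        if predict_label(cand) != true_label:
            hi = mid                  # still adversarial: move closer to x
        else:
            lo = mid
    return (1 - hi) * x + hi * x_adv_init

# Toy black box exposing only labels (an assumption for illustration).
predict = lambda z: int(z.sum() > 0)
x = np.array([-1.0, -1.0])
x_adv = label_only_refine(predict, x, np.array([3.0, 3.0]), true_label=predict(x))
print(x_adv, predict(x_adv))
```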
- Online Adversarial Attacks [57.448101834579624]
We formalize the online adversarial attack problem, emphasizing two key elements found in real-world use cases.
We first rigorously analyze a deterministic variant of the online threat model.
We then propose a simple yet practical algorithm yielding a provably better competitive ratio for $k=2$ over the current best single-threshold algorithm.
arXiv Detail & Related papers (2021-03-02T20:36:04Z)
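For intuition about the single-threshold baseline mentioned above, here is a sketch of a classical threshold rule for secretary-style online selection: observe a prefix of roughly n/e items, fix the threshold at its maximum, then accept up to k later items that clear it. This is the generic textbook baseline, not the paper's algorithm or its exact comparator.

```python
# Hedged sketch of a classical single-threshold rule for k-item online
# selection: learn a threshold on a prefix, then commit to the first k
# later values that exceed it.
import math
import random

def single_threshold_pick(values, k):
    n = len(values)
    t = math.ceil(n / math.e)                 # observation phase length
    threshold = max(values[:t], default=float("-inf"))
    return [i for i in range(t, n) if values[i] > threshold][:k]

random.seed(3)
vals = [random.random() for _ in range(100)]
print(single_threshold_pick(vals, k=2))
```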
- Learning Robust Algorithms for Online Allocation Problems Using Adversarial Training [10.14260510961573]
We address the challenge of finding algorithms for online allocation (i.e. bipartite matching) using a machine learning approach.
In this paper, we focus on the AdWords problem, which is a classical online budgeted matching problem of both theoretical and practical significance.
arXiv Detail & Related papers (2020-10-16T14:33:11Z)
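To ground the AdWords setting, the sketch below implements the classical MSVV-style baseline rather than the learned allocator from the paper: each arriving query goes to the bidder maximizing bid * (1 - e^(f-1)), where f is the fraction of budget already spent, trading off bid size against remaining budget. The data layout and tie-breaking are illustrative.

```python
# Hedged sketch of the MSVV baseline for online budgeted matching (AdWords):
# score each bidder by its feasible bid discounted by remaining budget.
import math

def msvv_allocate(queries, bids, budgets):
    """queries: query ids; bids[adv][q]: bid; budgets[adv]: total budget."""
    spent = {a: 0.0 for a in budgets}
    revenue = 0.0
    for q in queries:
        def score(a):
            bid = min(bids[a].get(q, 0.0), budgets[a] - spent[a])
            return bid * (1 - math.exp(spent[a] / budgets[a] - 1))
        best = max(budgets, key=score)
        charge = min(bids[best].get(q, 0.0), budgets[best] - spent[best])
        spent[best] += charge
        revenue += charge
    return revenue

bids = {"a": {1: 1.0, 2: 1.0}, "b": {2: 0.9}}
print(msvv_allocate([1, 2], bids, budgets={"a": 1.0, "b": 1.0}))
```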
- Differentially Private k-Means Clustering with Guaranteed Convergence [5.335316436366718]
Iterative clustering algorithms help us learn insights from data, but they may allow adversaries with background knowledge to infer private information about individuals.
To protect individuals against such inference attacks, preserving differential privacy (DP) in iterative clustering algorithms has been extensively studied.
arXiv Detail & Related papers (2020-02-03T22:53:47Z)
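A hedged sketch of the DPLloyd-style pattern such work builds on: each Lloyd iteration releases noisy per-cluster counts and coordinate sums, so every iteration consumes a slice of the privacy budget. The noise scales below are illustrative; a careful analysis must track sensitivities across coordinates and compose the per-iteration budgets properly.

```python
# Hedged sketch of noisy Lloyd iterations for differentially private
# k-means: perturb per-cluster counts and sums with Laplace noise before
# recomputing centroids. Noise calibration here is simplified.
import numpy as np

def dp_kmeans(X, k, iters=5, eps_total=1.0, seed=4):
    rng = np.random.default_rng(seed)
    eps_it = eps_total / (2 * iters)          # naive split: count + sum per iter
    n, d = X.shape
    centers = X[rng.choice(n, k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            pts = X[labels == c]
            # With data scaled into [0,1]^d, one record shifts the count by
            # at most 1 and each coordinate sum by at most 1.
            noisy_count = len(pts) + rng.laplace(0, 1 / eps_it)
            noisy_sum = (pts.sum(0) if len(pts) else np.zeros(d)) \
                        + rng.laplace(0, 1 / eps_it, size=d)
            if noisy_count > 1:
                centers[c] = np.clip(noisy_sum / noisy_count, 0.0, 1.0)
    return centers

X = np.random.default_rng(5).random((300, 2))  # toy data in [0,1]^2
print(dp_kmeans(X, k=3))
```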