A Minimax Approach Against Multi-Armed Adversarial Attacks Detection
- URL: http://arxiv.org/abs/2302.02216v1
- Date: Sat, 4 Feb 2023 18:21:22 GMT
- Title: A Minimax Approach Against Multi-Armed Adversarial Attacks Detection
- Authors: Federica Granese, Marco Romanelli, Siddharth Garg, Pablo Piantanida
- Abstract summary: Multi-armed adversarial attacks have been shown to be highly successful in fooling state-of-the-art detectors.
We propose a solution that aggregates the soft-probability outputs of multiple pre-trained detectors according to a minimax approach.
We show that our aggregation consistently outperforms individual state-of-the-art detectors against multi-armed adversarial attacks.
- Score: 31.971443221041174
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-armed adversarial attacks, in which multiple algorithms and objective
loss functions are simultaneously used at evaluation time, have been shown to
be highly successful in fooling state-of-the-art adversarial examples detectors
while requiring no specific side information about the detection mechanism. By
formalizing the problem at hand, we propose a solution that aggregates the
soft-probability outputs of multiple pre-trained detectors according to a
minimax approach. The proposed framework is mathematically sound, easy to
implement, and modular, allowing existing or future detectors to be integrated.
Through extensive evaluation on popular datasets (e.g., CIFAR10 and SVHN), we
show that our aggregation consistently outperforms individual state-of-the-art
detectors against multi-armed adversarial attacks, making it an effective
solution to improve the resilience of available methods.
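The abstract describes aggregating detectors' soft-probability outputs by a minimax criterion but does not spell the rule out here. As a toy illustration only (not the paper's objective), the sketch below treats the choice as a small zero-sum game: given a hypothetical payoff matrix of detection probabilities for two detectors against two attacks, it grid-searches the weight simplex for the convex combination whose worst-case (over attacks) aggregated detection probability is largest.

```python
def maximin_weights(payoff, grid=101):
    """Toy maximin aggregation over two detectors.

    payoff[k][a] = detection probability of detector k against attack a
    (hypothetical numbers). We search a 1-D grid over the weight simplex
    for the mixture whose worst-case aggregated probability is largest.
    """
    best_w, best_val = None, -1.0
    for i in range(grid):
        w = i / (grid - 1)          # weight on detector 0
        # aggregated detection probability under each attack
        agg = [w * payoff[0][a] + (1 - w) * payoff[1][a]
               for a in range(len(payoff[0]))]
        worst = min(agg)            # the adversary picks its best attack
        if worst > best_val:
            best_w, best_val = (w, 1 - w), worst
    return best_w, best_val

# Hypothetical payoffs: detector 0 is strong on attack A but weak on B;
# detector 1 is the opposite. Neither alone survives both attacks well,
# while the maximin mixture lifts the worst case for both.
payoff = [[0.9, 0.3],
          [0.4, 0.8]]
w, v = maximin_weights(payoff)
print(w, v)
```

With these numbers the mixture (0.4, 0.6) raises the worst-case detection probability to 0.6, above either detector's worst case alone, which is the kind of resilience gain the abstract claims.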
Related papers
- Learning in Multiple Spaces: Few-Shot Network Attack Detection with Metric-Fused Prototypical Networks [47.18575262588692]
We propose a novel Multi-Space Prototypical Learning (MSPL) framework tailored for few-shot attack detection.
By leveraging Polyak-averaged prototype generation, the framework stabilizes the learning process and effectively adapts to rare and zero-day attacks.
Experimental results on benchmark datasets demonstrate that MSPL outperforms traditional approaches in detecting low-profile and novel attack types.
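The summary names "Polyak-averaged prototype generation" without detail. A minimal sketch of the underlying idea, with a hypothetical smoothing coefficient `beta`: blend each episode's class-mean embedding into a running average instead of recomputing the prototype from a single noisy episode.

```python
def polyak_update(prototype, batch_embeddings, beta=0.95):
    """Exponential-moving-average (Polyak) update of a class prototype.

    A sketch of the idea only: the new episode mean is blended into a
    running prototype, smoothing episode-to-episode noise. `beta` is a
    hypothetical smoothing coefficient, not a value from the paper.
    """
    batch_mean = [sum(dim) / len(dim) for dim in zip(*batch_embeddings)]
    return [beta * p + (1.0 - beta) * m
            for p, m in zip(prototype, batch_mean)]

proto = [0.0, 0.0]
proto = polyak_update(proto, [[1.0, 2.0], [3.0, 4.0]], beta=0.5)
print(proto)  # blend of the old prototype and the episode mean [2.0, 3.0]
```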
arXiv Detail & Related papers (2024-12-28T00:09:46Z)
- Optimizing Multispectral Object Detection: A Bag of Tricks and Comprehensive Benchmarks [49.84182981950623]
Multispectral object detection, utilizing RGB and TIR (thermal infrared) modalities, is widely recognized as a challenging task.
It requires not only the effective extraction of features from both modalities and robust fusion strategies, but also the ability to address issues such as spectral discrepancies.
We introduce an efficient and easily deployable multispectral object detection framework that can seamlessly optimize high-performing single-modality models.
arXiv Detail & Related papers (2024-11-27T12:18:39Z)
- When Measures are Unreliable: Imperceptible Adversarial Perturbations toward Top-$k$ Multi-Label Learning [83.8758881342346]
A novel loss function is devised to generate adversarial perturbations that could achieve both visual and measure imperceptibility.
Experiments on large-scale benchmark datasets demonstrate the superiority of our proposed method in attacking the top-$k$ multi-label systems.
arXiv Detail & Related papers (2023-07-27T13:18:47Z)
- Adversarial Examples Detection with Enhanced Image Difference Features based on Local Histogram Equalization [20.132066800052712]
We propose an adversarial example detection framework based on a high-frequency information enhancement strategy.
This framework can effectively extract and amplify the feature differences between adversarial examples and normal examples.
arXiv Detail & Related papers (2023-05-08T03:14:01Z)
- TAD: Transfer Learning-based Multi-Adversarial Detection of Evasion Attacks against Network Intrusion Detection Systems [0.7829352305480285]
We implement existing state-of-the-art models for intrusion detection.
We then attack those models with a set of chosen evasion attacks.
In an attempt to detect those adversarial attacks, we design and implement multiple transfer learning-based adversarial detectors.
arXiv Detail & Related papers (2022-10-27T18:02:58Z)
- MEAD: A Multi-Armed Approach for Evaluation of Adversarial Examples Detectors [24.296350262025552]
We propose a novel framework, called MEAD, for evaluating detectors based on several attack strategies.
Among them, we make use of three new objectives to generate attacks.
The proposed performance metric is based on the worst-case scenario.
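A worst-case performance metric of the kind described can be sketched directly (this is the spirit of the idea, not necessarily MEAD's exact definition): a sample counts as detected only if the detector flags it under every attack strategy in the pool.

```python
def worst_case_detection(per_attack_flags):
    """Worst-case detection rate over a pool of attacks (a sketch).

    per_attack_flags[a][i] = True if sample i was flagged under attack a.
    A sample counts as detected only if it is flagged under *every*
    attack; returns the fraction of such samples.
    """
    n = len(per_attack_flags[0])
    detected = sum(all(flags[i] for flags in per_attack_flags)
                   for i in range(n))
    return detected / n

# Hypothetical results for 4 samples under 3 attacks.
flags = [
    [True, True, False, True],   # attack 1
    [True, False, True, True],   # attack 2
    [True, True, True, True],    # attack 3
]
print(worst_case_detection(flags))  # 0.5: only samples 0 and 3 survive all attacks
```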
arXiv Detail & Related papers (2022-06-30T17:05:45Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot Classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
- Learning to Separate Clusters of Adversarial Representations for Robust Adversarial Detection [50.03939695025513]
We propose a new probabilistic adversarial detector motivated by the recently introduced notion of non-robust features.
In this paper, we treat non-robust features as a common property of adversarial examples, and we deduce that it is possible to find a cluster in representation space corresponding to this property.
This idea leads us to estimate the probability distribution of adversarial representations in a separate cluster, and to leverage that distribution for a likelihood-based adversarial detector.
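A likelihood-based detector of this general shape can be illustrated with a deliberately simplified 1-D sketch (the paper's actual representations are high-dimensional and its model is not specified here): fit a Gaussian to the adversarial cluster, then flag inputs whose log-likelihood under that cluster exceeds a threshold.

```python
import math

def fit_gaussian(samples):
    """Fit a 1-D Gaussian (mean, variance) to scalar representations."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, var

def log_likelihood(x, mu, var):
    """Log-density of x under N(mu, var)."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

# Hypothetical 1-D "representations": adversarial inputs cluster near 5,
# clean inputs near 0. Flag an input as adversarial when its likelihood
# under the adversarial cluster exceeds a (hypothetical) threshold.
adv_cluster = [4.8, 5.1, 5.0, 4.9, 5.2]
mu, var = fit_gaussian(adv_cluster)
threshold = -2.0
print(log_likelihood(5.05, mu, var) > threshold)  # near the cluster: flagged
print(log_likelihood(0.0, mu, var) > threshold)   # far from it: not flagged
```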
arXiv Detail & Related papers (2020-12-07T07:21:18Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
- Quickest Intruder Detection for Multiple User Active Authentication [74.5256211285431]
We formulate the Multiple-user Quickest Intruder Detection (MQID) algorithm.
We extend the algorithm to the data-efficient scenario where intruder detection is carried out with fewer observation samples.
We evaluate the effectiveness of the proposed method on two publicly available AA datasets on the face modality.
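The MQID algorithm itself is not specified in this summary. As a generic illustration of the quickest-detection machinery such methods build on, the sketch below implements a standard CUSUM change detector over per-observation anomaly scores; `drift` and `threshold` are hypothetical parameters, not values from the paper.

```python
def cusum(scores, drift=0.5, threshold=3.0):
    """Standard CUSUM quickest-change detector (a generic sketch).

    The statistic accumulates score mass above `drift`, resets at zero,
    and raises an alarm when it crosses `threshold`. Returns the
    0-based index of the alarm observation, or None if it never fires.
    """
    s = 0.0
    for i, x in enumerate(scores):
        s = max(0.0, s + x - drift)
        if s >= threshold:
            return i
    return None

# Low anomaly scores for the genuine user, then a jump when an
# intruder takes over at index 3; the alarm fires shortly after.
scores = [0.1, 0.2, 0.1, 2.0, 2.5, 2.2, 0.1]
print(cusum(scores))
```

The "data-efficient scenario" in the summary corresponds to trading `threshold` down for faster alarms at the cost of more false positives, the classic quickest-detection trade-off.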
arXiv Detail & Related papers (2020-06-21T21:59:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.