Adversarial Markov Games: On Adaptive Decision-Based Attacks and
Defenses
- URL: http://arxiv.org/abs/2312.13435v1
- Date: Wed, 20 Dec 2023 21:24:52 GMT
- Title: Adversarial Markov Games: On Adaptive Decision-Based Attacks and
Defenses
- Authors: Ilias Tsingenopoulos, Vera Rimmer, Davy Preuveneers, Fabio Pierazzi,
Lorenzo Cavallaro, Wouter Joosen
- Abstract summary: We show how both attacks and defenses can benefit from a broader notion of adaptivity and from learning from each other through interaction.
We demonstrate that active defenses, which control how the system responds, are a necessary complement to model hardening when facing decision-based attacks.
We lay out effective strategies for ensuring the robustness of ML-based systems deployed in the real world.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite considerable efforts on making them robust, real-world ML-based
systems remain vulnerable to decision-based attacks, as definitive proofs of
their operational robustness have so far proven intractable. The canonical
approach in robustness evaluation calls for adaptive attacks, that is, attacks
crafted with complete knowledge of the defense and tailored to bypass it. In this study, we
introduce a more expansive notion of adaptivity and show how not only attacks
but also defenses can benefit from it and from learning from each other through
interaction. We propose and evaluate a framework for adaptively optimizing
black-box attacks and defenses against each other through the competitive game
they form. To reliably measure robustness, it is important to evaluate against
realistic and worst-case attacks. We thus augment both attacks and the evasive
arsenal at their disposal through adaptive control, and observe that the same
can be done for defenses, before we evaluate them first apart and then jointly
under a multi-agent perspective. We demonstrate that active defenses, which
control how the system responds, are a necessary complement to model hardening
when facing decision-based attacks; then how these defenses can be circumvented
by adaptive attacks, only to finally elicit active and adaptive defenses. We
validate our observations through a wide theoretical and empirical
investigation to confirm that AI-enabled adversaries pose a considerable threat
to black-box ML-based systems, rekindling the proverbial arms race where
defenses have to be AI-enabled too. Succinctly, we address the challenges posed
by adaptive adversaries and develop adaptive defenses, thereby laying out
effective strategies for ensuring the robustness of ML-based systems deployed in
the real world.
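The competitive game the abstract describes can be viewed, in its simplest form, as a two-player zero-sum game between attacker and defender strategies. The toy sketch below approximates a mixed equilibrium of such a game via fictitious play; the payoff matrix and strategy labels are illustrative assumptions, not numbers or strategies from the paper.

```python
import numpy as np

# Hypothetical payoff matrix: rows = attacker strategies, columns = defender
# strategies; each entry = attacker's evasion rate (defender payoff is the
# complement). All values are invented for illustration.
payoff = np.array([
    [0.9, 0.2, 0.4],   # naive decision-based attack
    [0.7, 0.6, 0.3],   # adaptive attack tuned against model hardening
    [0.5, 0.8, 0.5],   # attack with an evasive arsenal
])

def fictitious_play(A, iters=5000):
    """Approximate a mixed equilibrium of the zero-sum game with matrix A
    by letting each player best-respond to the other's empirical mixture."""
    m, n = A.shape
    row_counts = np.zeros(m)
    col_counts = np.zeros(n)
    row_counts[0] = col_counts[0] = 1.0
    for _ in range(iters):
        # Attacker best-responds to the defender's empirical mixture.
        row_counts[np.argmax(A @ (col_counts / col_counts.sum()))] += 1
        # Defender best-responds, minimizing the attacker's payoff.
        col_counts[np.argmin((row_counts / row_counts.sum()) @ A)] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

atk, dfn = fictitious_play(payoff)
value = atk @ payoff @ dfn  # approximate game value (attacker evasion rate)
```

In this toy matrix the attacker's third strategy and the defender's third strategy form an equilibrium with value 0.5, which fictitious play recovers; the point is only that evaluating attacks and defenses jointly, rather than one side at a time, yields a meaningful robustness estimate.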
Related papers
- A Novel Approach to Guard from Adversarial Attacks using Stable Diffusion [0.0]
We propose a different approach to the AI Guardian framework.
Instead of including adversarial examples in the training process, we propose training the AI system without them.
This aims to create a system that is inherently resilient to a wider range of attacks.
arXiv Detail & Related papers (2024-05-03T04:08:15Z)
- Embodied Adversarial Attack: A Dynamic Robust Physical Attack in Autonomous Driving [15.427248934229233]
Embodied Adversarial Attack (EAA) aims to employ the paradigm of embodied intelligence: Perception-Decision-Control.
EAA adopts the laser, a highly manipulable medium, to implement physical attacks, and further trains an attack agent with reinforcement learning so that it can instantaneously determine the best attack strategy.
A variety of experiments verify the high effectiveness of our method under complex scenes.
arXiv Detail & Related papers (2023-12-15T06:16:17Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS, the proposed defense, is the first robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- A Random-patch based Defense Strategy Against Physical Attacks for Face Recognition Systems [3.6202815454709536]
We propose a random-patch based defense strategy to robustly detect physical attacks on Face Recognition Systems (FRS).
Our method can be easily applied to the real world face recognition system and extended to other defense methods to boost the detection performance.
arXiv Detail & Related papers (2023-04-16T16:11:56Z)
- Ares: A System-Oriented Wargame Framework for Adversarial ML [3.197282271064602]
Ares is an evaluation framework for adversarial ML that allows researchers to explore attacks and defenses in a realistic wargame-like environment.
Ares frames the conflict between the attacker and defender as two agents in a reinforcement learning environment with opposing objectives.
This allows the introduction of system-level evaluation metrics such as time to failure and evaluation of complex strategies.
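The two-agent framing makes system-level metrics such as time to failure directly measurable. The minimal loop below sketches that idea: an attacker erodes a decision margin, a defender restores part of it, and the episode ends when the classifier flips. The dynamics and parameters are invented for illustration and are not the Ares design.

```python
class ToyWargame:
    """Minimal attacker-vs-defender loop: the attacker erodes a decision
    margin each step, the defender restores part of it, and the episode
    ends when the margin is exhausted (the system fails). Illustrative
    dynamics only, not taken from the Ares framework."""

    def __init__(self, margin=1.0):
        self.margin = margin

    def time_to_failure(self, attack_strength, defense_strength, max_steps=10_000):
        """System-level metric: number of steps until the margin reaches zero."""
        score, steps = self.margin, 0
        while score > 0 and steps < max_steps:
            score -= attack_strength   # attacker's move
            score += defense_strength  # defender's counter-move
            steps += 1
        return steps

env = ToyWargame()
weak = env.time_to_failure(attack_strength=0.2, defense_strength=0.05)
strong = env.time_to_failure(attack_strength=0.2, defense_strength=0.15)
```

Even in this caricature, a stronger defense measurably delays failure, which is the kind of strategy-level comparison a per-sample robustness metric cannot express.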
arXiv Detail & Related papers (2022-10-24T04:55:18Z)
- Scale-Invariant Adversarial Attack for Evaluating and Enhancing Adversarial Defenses [22.531976474053057]
Projected Gradient Descent (PGD) attack has been demonstrated to be one of the most successful adversarial attacks.
We propose Scale-Invariant Adversarial Attack (SI-PGD), which utilizes the angle between the features in the penultimate layer and the weights in the softmax layer to guide the generation of adversaries.
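The scale-invariant quantity SI-PGD builds on can be sketched as follows: unlike the logit w·f, the angle between the penultimate-layer feature vector and each softmax weight vector does not change when the features are rescaled. The toy computation below, using random features and weights purely for illustration, checks exactly that invariance.

```python
import numpy as np

def class_angles(features, softmax_weights):
    """Angle (radians) between the penultimate-layer feature vector and
    each class's softmax weight vector; invariant to rescaling features."""
    f = features / np.linalg.norm(features)
    W = softmax_weights / np.linalg.norm(softmax_weights, axis=1, keepdims=True)
    cos = np.clip(W @ f, -1.0, 1.0)  # guard against rounding outside [-1, 1]
    return np.arccos(cos)

rng = np.random.default_rng(0)
f = rng.normal(size=8)         # toy penultimate features
W = rng.normal(size=(10, 8))   # toy softmax weights, 10 classes

angles = class_angles(f, W)
scaled = class_angles(3.7 * f, W)  # rescaled features give the same angles
```

Guiding the perturbation by these angles rather than raw logits removes the sensitivity to feature magnitude that can make logit-based attack losses misleading.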
arXiv Detail & Related papers (2022-01-29T08:40:53Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study black-box adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Adversarial defense for automatic speaker verification by cascaded self-supervised learning models [101.42920161993455]
More and more malicious attackers attempt to launch adversarial attacks at automatic speaker verification (ASV) systems.
We propose a standard and attack-agnostic method based on cascaded self-supervised learning models to purify the adversarial perturbations.
Experimental results demonstrate that the proposed method achieves effective defense performance and can successfully counter adversarial attacks.
arXiv Detail & Related papers (2021-02-14T01:56:43Z)
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses [59.58128343334556]
We introduce a relaxation term to the standard loss that finds more suitable gradient directions, increases attack efficacy, and leads to more efficient adversarial training.
We propose Guided Adversarial Margin Attack (GAMA), which utilizes function mapping of the clean image to guide the generation of adversaries.
We also propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses.
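A hedged sketch of the guided-margin idea: the attack objective combines a margin term that rewards misclassification with a relaxation term anchoring the adversarial output to the model's output on the clean image. Everything below, including the fixed weight `lam`, is an illustrative reconstruction, not the paper's exact loss or its relaxation schedule.

```python
import numpy as np

def guided_margin_loss(clean_probs, adv_probs, true_label, lam=10.0):
    """Margin term (positive once the example is misclassified) plus an
    L2 relaxation term tying the adversarial output to the clean output.
    The weight `lam` is a hypothetical constant; the paper varies the
    relaxation coefficient over attack iterations."""
    others = np.delete(adv_probs, true_label)
    margin = others.max() - adv_probs[true_label]
    relaxation = lam * np.sum((adv_probs - clean_probs) ** 2)
    return margin + relaxation

clean = np.array([0.7, 0.2, 0.1])  # toy softmax output on the clean image
adv = np.array([0.2, 0.7, 0.1])    # toy output on a candidate adversarial
loss = guided_margin_loss(clean, adv, true_label=0)
```

Maximizing such a loss pushes the perturbation toward misclassification while the relaxation term keeps the search guided by the clean image's function mapping, which is what yields the more suitable gradient directions mentioned above.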
arXiv Detail & Related papers (2020-11-30T16:39:39Z)
- On Adaptive Attacks to Adversarial Example Defenses [123.32678153377915]
This paper lays out the methodology and the approach necessary to perform an adaptive attack against defenses to adversarial examples.
We hope that these analyses will serve as guidance on how to properly perform adaptive attacks against defenses to adversarial examples.
arXiv Detail & Related papers (2020-02-19T18:50:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.