Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS
- URL: http://arxiv.org/abs/2111.12197v1
- Date: Tue, 23 Nov 2021 23:42:16 GMT
- Title: Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS
- Authors: Christian Schroeder de Witt, Yongchao Huang, Philip H.S. Torr, Martin Strohmeier
- Abstract summary: We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
- Score: 70.60975663021952
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cyber attacks are increasing in volume, frequency, and complexity. In
response, the security community is looking toward fully automating cyber
defense systems using machine learning. However, so far the resultant effects
on the coevolutionary dynamics of attackers and defenders have not been
examined. In this whitepaper, we hypothesise that increased automation on both
sides will accelerate the coevolutionary cycle, raising the question of
whether any fixed points result and, if so, how they are characterised.
Working within the threat model of Locked Shields, Europe's largest
cyberdefense exercise, we study blackbox adversarial attacks on network
classifiers. Given already existing attack capabilities, we question the
utility of optimal evasion attack frameworks based on minimal evasion
distances. Instead, we suggest a novel reinforcement learning setting that can
be used to efficiently generate arbitrary adversarial perturbations. We then
argue that attacker-defender fixed points are themselves general-sum games with
complex phase transitions, and introduce a temporally extended multi-agent
reinforcement learning framework in which the resultant dynamics can be
studied. We hypothesise that one plausible fixed point of AI-NIDS may be a
scenario where the defense strategy relies heavily on whitelisted feature flow
subspaces. Finally, we demonstrate that a continual learning approach is
required to study attacker-defender dynamics in temporally extended general-sum
games.
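To make the abstract's contrast concrete, the following minimal Python sketch (our illustration, not code from the paper) juxtaposes the classical minimal-evasion-distance objective with an RL-style setting in which an attacker agent edits flow features step by step until the classifier is evaded. The classifier, reward shaping, feature bounds, and all names here are invented placeholders.

```python
import numpy as np

# Placeholder NIDS flow classifier (invented for illustration): flags a flow
# as malicious (1) when the mean feature value exceeds a threshold.
def nids_classifier(x: np.ndarray) -> int:
    return int(x.mean() > 0.5)

# Classical optimal-evasion objective: the *smallest* perturbation delta such
# that f(x + delta) is benign, i.e. minimise ||delta|| s.t. misclassification.
# Crude random-search stand-in for that optimisation:
def minimal_evasion_perturbation(x, budget=1000, seed=0):
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(budget):
        delta = rng.normal(scale=0.3, size=x.shape)
        if nids_classifier(x + delta) == 0 and (
            best is None or np.linalg.norm(delta) < np.linalg.norm(best)
        ):
            best = delta
    return best

# RL-style setting gestured at in the abstract (interface is our assumption):
# the attacker is rewarded for evading, with no requirement that the final
# perturbation be minimal.
class EvasionEnv:
    def __init__(self, x0: np.ndarray):
        self.x0 = x0
        self.x = x0.copy()

    def reset(self) -> np.ndarray:
        self.x = self.x0.copy()
        return self.x

    def step(self, action: np.ndarray):
        self.x = np.clip(self.x + action, 0.0, 1.0)  # bounded feature edits
        evaded = nids_classifier(self.x) == 0
        reward = 1.0 if evaded else -0.01            # small per-step cost
        return self.x, reward, evaded


if __name__ == "__main__":
    x = np.full(8, 0.7)                              # a flagged ("malicious") flow
    print(minimal_evasion_perturbation(x) is not None)
    env, rng = EvasionEnv(x), np.random.default_rng(1)
    env.reset()
    for t in range(10_000):                          # random policy as a stand-in
        _, _, done = env.step(rng.normal(scale=0.2, size=x.shape))
        if done:
            print(f"evaded after {t + 1} random edits")
            break
```

Under this framing the attacker optimises evasion success net of edit costs rather than a minimal-distance objective, which is the shift the whitepaper argues for.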
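The claim that attacker-defender fixed points are themselves general-sum games can likewise be illustrated with a deliberately toy best-response loop. The payoff matrices below are invented, and the paper's games are temporally extended rather than matrix games; the sketch only shows why the dynamics need not settle.

```python
import numpy as np

# Toy general-sum attacker-defender game (payoffs invented for illustration;
# rows index the attacker's action, columns the defender's). General-sum
# means D is not simply -A.
A = np.array([[ 1.0, -1.0],   # attacker payoffs
              [-0.5,  0.5]])
D = np.array([[-1.0,  0.5],   # defender payoffs
              [ 1.0, -0.2]])

a, d = 0, 0                   # initial pure strategies
for t in range(10):
    a_new = int(np.argmax(A[:, d]))   # attacker best-responds to defender
    d_new = int(np.argmax(D[a_new]))  # defender best-responds to attacker
    if (a_new, d_new) == (a, d):
        print(f"fixed point reached at step {t}: ({a}, {d})")
        break
    a, d = a_new, d_new
else:
    print("no pure-strategy fixed point: best responses cycle")
```

With these payoffs the best responses cycle rather than converge, which is the situation where, as the abstract concludes, a continual learning approach that tracks the dynamics over time is needed to characterise the outcome.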
Related papers
- Taking off the Rose-Tinted Glasses: A Critical Look at Adversarial ML Through the Lens of Evasion Attacks [11.830908033835728]
We argue that overly permissive attack and overly restrictive defensive threat models have hampered defense development in the ML domain.
We analyze adversarial machine learning from a system security perspective rather than an AI perspective.
arXiv Detail & Related papers (2024-10-15T21:33:23Z)
- A Novel Approach to Guard from Adversarial Attacks using Stable Diffusion [0.0]
We propose a different approach to the AI Guardian framework.
Instead of including adversarial examples in the training process, we propose training the AI system without them.
This aims to create a system that is inherently resilient to a wider range of attacks.
arXiv Detail & Related papers (2024-05-03T04:08:15Z)
- Use of Graph Neural Networks in Aiding Defensive Cyber Operations [2.1874189959020427]
Graph Neural Networks have emerged as a promising approach for enhancing the effectiveness of defensive measures.
We examine how GNNs can aid in breaking each stage of one of the most renowned attack life cycles, the Lockheed Martin Cyber Kill Chain.
arXiv Detail & Related papers (2024-01-11T05:56:29Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals the threat that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce ε-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find ε-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Towards Evaluating the Robustness of Neural Networks Learned by Transduction [44.189248766285345]
Greedy Model Space Attack (GMSA) is an attack framework that can serve as a new baseline for evaluating transductive-learning based defenses.
We show that GMSA, even with weak instantiations, can break previous transductive-learning based defenses.
arXiv Detail & Related papers (2021-10-27T19:39:50Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent can be used to generate realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Adversarial Attack and Defense in Deep Ranking [100.17641539999055]
We propose two attacks against deep ranking systems that can raise or lower the rank of chosen candidates by adversarial perturbations.
Conversely, an anti-collapse triplet defense is proposed to improve the ranking model robustness against all proposed attacks.
Our adversarial ranking attacks and defenses are evaluated on MNIST, Fashion-MNIST, CUB200-2011, CARS196 and Stanford Online Products datasets.
arXiv Detail & Related papers (2021-06-07T13:41:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.