Defending Against Intelligent Attackers at Large Scales
- URL: http://arxiv.org/abs/2504.18577v1
- Date: Tue, 22 Apr 2025 21:47:19 GMT
- Title: Defending Against Intelligent Attackers at Large Scales
- Authors: Andrew J. Lohn
- Abstract summary: We investigate the scale of attack and defense mathematically in the context of AI's possible effect on cybersecurity. We find that small increases in the number or quality of defenses can compensate for exponential increases in the number of independent attacks and for exponential speedups.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate the scale of attack and defense mathematically in the context of AI's possible effect on cybersecurity. For a given target today, highly scaled cyber attacks such as from worms or botnets typically all fail or all succeed. Here, we consider the effect of scale if those attack agents were intelligent and creative enough to act independently such that each attack attempt was different from the others or such that attackers could learn from their successes and failures. We find that small increases in the number or quality of defenses can compensate for exponential increases in the number of independent attacks and for exponential speedups.
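To make the claim concrete, here is a minimal sketch under a simple independence model (an illustrative assumption, not necessarily the paper's exact formulation): each attempt bypasses each of k defensive layers with probability q, so a single attempt succeeds with probability q^k, and at least one of N independent attempts succeeds with probability 1 - (1 - q^k)^N.

```python
# Illustrative independence model (an assumption for this sketch, not
# necessarily the paper's exact formulation): each attempt bypasses each of
# n_layers defenses with probability bypass_prob, so one attempt succeeds
# with probability bypass_prob**n_layers, and at least one of n_attacks
# independent attempts succeeds with the probability computed below.

def p_compromise(n_attacks: int, n_layers: int, bypass_prob: float = 0.1) -> float:
    """Probability that at least one of n_attacks independent attempts succeeds."""
    p_single = bypass_prob ** n_layers
    return 1.0 - (1.0 - p_single) ** n_attacks

if __name__ == "__main__":
    print("           " + "    ".join(f"N=10^{i}" for i in range(7)))
    for layers in range(1, 7):
        row = "     ".join(f"{p_compromise(10**i, layers):.3f}" for i in range(7))
        print(f"layers={layers}:  {row}")
```

Under this toy model each additional defensive layer offsets roughly a tenfold increase in the number of independent attempts, which is the qualitative behaviour the abstract describes: a small, linear addition to the defenses compensates for an exponential increase in attacks.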
Related papers
- Can Go AIs be adversarially robust? [4.466856575755327]
We study whether adding natural countermeasures can achieve robustness in Go. We find that though some of these defenses protect against previously discovered attacks, none withstand freshly trained adversaries. Our results suggest that building robust AI systems is challenging even with extremely superhuman systems in some of the most tractable settings.
arXiv Detail & Related papers (2024-06-18T17:57:49Z) - Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z) - On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z) - A Game-Theoretic Approach for AI-based Botnet Attack Defence [5.020067709306813]
A new generation of botnets leverages Artificial Intelligence (AI) techniques to conceal the identity of botmasters and the intention of attacks in order to avoid detection.
No existing assessment tool can evaluate the effectiveness of current defense strategies against this kind of AI-based botnet attack.
We propose a sequential game-theory model capable of analysing the details of the potential strategies botnet attackers and defenders could use to reach a Nash Equilibrium (NE).
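As a rough illustration of that kind of equilibrium analysis, the sketch below solves a tiny sequential attacker-defender game by backward induction (the defender commits first, the attacker best-responds); the strategy names and payoff numbers are hypothetical placeholders, not taken from the paper.

```python
# Toy sequential attacker-defender game solved by backward induction.
# All strategy names and payoff values are hypothetical illustrations.

# PAYOFFS[(defender_move, attacker_move)] = (defender_utility, attacker_utility)
PAYOFFS = {
    ("honeypot",  "stealth-botnet"): (-2,  1),
    ("honeypot",  "noisy-botnet"):   ( 3, -3),
    ("hardening", "stealth-botnet"): (-4,  4),
    ("hardening", "noisy-botnet"):   ( 1, -1),
}
DEFENDER_MOVES = ["honeypot", "hardening"]
ATTACKER_MOVES = ["stealth-botnet", "noisy-botnet"]

def backward_induction():
    """Defender moves first; attacker best-responds to the observed move.
    Returns the subgame-perfect (Nash) equilibrium of this toy game."""
    best = None
    for d in DEFENDER_MOVES:
        a = max(ATTACKER_MOVES, key=lambda m: PAYOFFS[(d, m)][1])  # attacker's best response
        d_util = PAYOFFS[(d, a)][0]
        if best is None or d_util > best[2]:
            best = (d, a, d_util)
    return best

if __name__ == "__main__":
    d, a, u = backward_induction()
    print(f"equilibrium: defender={d}, attacker={a}, defender utility={u}")
```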
arXiv Detail & Related papers (2021-12-04T02:53:40Z) - Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z) - Widen The Backdoor To Let More Attackers In [24.540853975732922]
We investigate the scenario of a multi-agent backdoor attack, where multiple non-colluding attackers craft and insert triggered samples in a shared dataset.
We discover a clear backfiring phenomenon: increasing the number of attackers shrinks each attacker's attack success rate (ASR).
We then exploit this phenomenon to minimize the collective ASR of attackers and maximize the defender's robustness accuracy.
arXiv Detail & Related papers (2021-10-09T13:53:57Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors [57.040948169155925]
We extend the adversarial training framework to defend against (training-time) poisoning and backdoor attacks.
Our method desensitizes networks to the effects of poisoning by creating poisons during training and injecting them into training batches.
We show that this defense withstands adaptive attacks, generalizes to diverse threat models, and incurs a better performance trade-off than previous defenses.
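A minimal sketch of that recipe, with a toy model, synthetic data, and a simple trigger-style poison standing in for the authors' actual poison-crafting procedure (all of those details are assumptions of this sketch):

```python
# Sketch of poison-aware adversarial training: craft poisoned copies of a few
# clean samples each step and inject them into the batch so the network
# learns to ignore the trigger. Model, data, and trigger are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_poisons(x, y, poison_frac=0.2):
    """Stamp a simple trigger onto a fraction of the batch but keep the true
    labels, so training on these samples desensitizes the net to the trigger."""
    n = max(1, int(poison_frac * x.size(0)))
    xp = x[:n].clone()
    xp[:, :3] = 5.0  # hypothetical trigger: overwrite the first three features
    return xp, y[:n]

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):                    # toy data stream instead of a real dataset
    x = torch.randn(64, 20)
    y = (x.sum(dim=1) > 0).long()
    xp, yp = make_poisons(x, y)            # create poisons during training ...
    xb = torch.cat([x, xp])                # ... and inject them into the batch
    yb = torch.cat([y, yp])
    loss = F.cross_entropy(model(xb), yb)
    opt.zero_grad()
    loss.backward()
    opt.step()
```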
arXiv Detail & Related papers (2021-02-26T17:54:36Z) - Mitigating Advanced Adversarial Attacks with More Advanced Gradient Obfuscation Techniques [13.972753012322126]
Deep Neural Networks (DNNs) are well known to be vulnerable to Adversarial Examples (AEs).
Recently, advanced gradient-based attack techniques were proposed.
In this paper, we make a steady step towards mitigating those advanced gradient-based attacks.
arXiv Detail & Related papers (2020-05-27T23:42:25Z) - Challenges in Forecasting Malicious Events from Incomplete Data [6.656003516101928]
Researchers have attempted to combine external data with machine learning algorithms to learn indicators of impending cyber-attacks.
However, successful cyber-attacks represent only a tiny fraction of all attempted attacks: most attempts are filtered out before they succeed.
As we show in this paper, this filtering process reduces the predictability of cyber-attacks.
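A toy illustration of why such filtering hurts forecasting, under an assumed statistical model rather than the paper's data: attempted attacks track an observable signal closely, but thinning them down to the rare successes adds sampling noise that swamps the signal.

```python
# Assumed toy model: daily attempts follow an observable indicator, and each
# attempt succeeds with small probability. The filtered (success-only) series
# correlates far more weakly with the indicator than the raw attempt counts.
import numpy as np

rng = np.random.default_rng(0)
days = 365
signal = 50 + 20 * np.sin(np.arange(days) / 14)   # observable indicator
attempts = rng.poisson(signal)                    # many attempts per day
successes = rng.binomial(attempts, 0.01)          # ~1% of attempts succeed

print(f"signal vs attempts : r = {np.corrcoef(signal, attempts)[0, 1]:.2f}")   # ~0.9
print(f"signal vs successes: r = {np.corrcoef(signal, successes)[0, 1]:.2f}")  # much weaker
```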
arXiv Detail & Related papers (2020-04-06T22:57:23Z) - Deflecting Adversarial Attacks [94.85315681223702]
We present a new approach towards ending the cycle of ever stronger attacks defeating ever stronger defenses: we "deflect" adversarial attacks by causing the attacker to produce an input that resembles the attack's target class.
We first propose a stronger defense based on Capsule Networks that combines three detection mechanisms to achieve state-of-the-art detection performance.
arXiv Detail & Related papers (2020-02-18T06:59:13Z)