A Game-Theoretic Approach for AI-based Botnet Attack Defence
- URL: http://arxiv.org/abs/2112.02223v1
- Date: Sat, 4 Dec 2021 02:53:40 GMT
- Title: A Game-Theoretic Approach for AI-based Botnet Attack Defence
- Authors: Hooman Alavizadeh and Julian Jang-Jaccard and Tansu Alpcan and Seyit A. Camtepe
- Abstract summary: The new generation of botnets leverages Artificial Intelligence (AI) techniques to conceal the identity of botmasters and the attack intention to avoid detection.
No existing assessment tool can evaluate the effectiveness of current defense strategies against this kind of AI-based botnet attack.
We propose a sequential game theory model capable of analysing the potential strategies botnet attackers and defenders could use to reach a Nash Equilibrium (NE).
- Score: 5.020067709306813
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The new generation of botnets leverages Artificial Intelligence (AI) techniques to conceal the identity of botmasters and the attack intention to avoid detection. Unfortunately, no existing assessment tool is capable of evaluating the effectiveness of current defense strategies against this kind of AI-based botnet attack. In this paper, we propose a sequential game theory model that is capable of analysing the potential strategies botnet attackers and defenders could use to reach a Nash Equilibrium (NE). The utility function is computed under the assumption that the attacker launches the maximum number of DDoS attacks at the minimum attack cost while the defender utilises the maximum number of defense strategies at the minimum defense cost. We conduct a numerical analysis for varying numbers of defense strategies applied to different (simulated) cloud-band sizes in relation to different attack success rate values. Our experimental results confirm that the success of the defense depends strongly on the number of defense strategies deployed, chosen according to a careful evaluation of attack rates.
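To make the utility trade-off concrete, here is a minimal numerical sketch of a simultaneous-move simplification of the game: utilities weigh attack damage on a cloud band against attack and defense costs, and pure-strategy Nash equilibria are found by checking best responses. All parameter values (costs, success rates, cloud-band value) are illustrative assumptions, not figures from the paper, which models the interaction sequentially.

```python
import numpy as np

# Illustrative sketch of the abstract's setting: the attacker picks how many
# DDoS attacks to launch, the defender picks how many defense strategies to
# deploy. All numbers (costs, success rates, cloud-band value) are assumptions.
attacks = np.arange(1, 6)         # attacker's options: 1..5 attacks
defenses = np.arange(1, 6)        # defender's options: 1..5 strategies
attack_cost, defense_cost = 1.0, 2.0
cloud_band = 10.0                 # assumed value of the protected cloud band

# Assumed success rate: more attacks raise it, more defense strategies lower it.
A, D = np.meshgrid(attacks, defenses, indexing="ij")
success = np.clip(0.2 * A - 0.15 * D, 0.0, 1.0)

u_att = success * cloud_band - attack_cost * A     # attacker: damage minus cost
u_def = -success * cloud_band - defense_cost * D   # defender: damage plus cost

# Pure-strategy Nash equilibrium: each side plays a best response to the other.
ne = [(a, d) for a in range(len(attacks)) for d in range(len(defenses))
      if u_att[a, d] >= u_att[:, d].max() and u_def[a, d] >= u_def[a, :].max()]
if ne:
    for a, d in ne:
        print(f"NE: {attacks[a]} attacks vs {defenses[d]} defense strategies, "
              f"u_att={u_att[a, d]:.2f}, u_def={u_def[a, d]:.2f}")
else:
    print("no pure-strategy NE; the equilibrium is in mixed strategies")
```

With these assumed parameters the defender's marginal cost exceeds the marginal benefit of an extra strategy, so the equilibrium favours the attacker; shifting the cost ratio moves the equilibrium, which is the kind of sensitivity the paper's numerical analysis examines.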
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Improving behavior based authentication against adversarial attack using XAI [3.340314613771868]
We propose an eXplainable AI (XAI) based defense strategy against adversarial attacks in such scenarios.
A feature selector, trained with our method, can be used as a filter in front of the original authenticator.
We demonstrate that our XAI-based defense strategy is effective against adversarial attacks and outperforms other defense strategies.
arXiv Detail & Related papers (2024-02-26T09:29:05Z)
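As a rough illustration of the filtering architecture in the entry above, the sketch below keeps only the top-scoring features and masks the rest before a (hypothetical) authenticator sees the input. The importance scores, keep ratio, and data are placeholder assumptions; in the paper the selector is trained using XAI explanations.

```python
import numpy as np

class FeatureSelectorFilter:
    """Generic stand-in for an XAI-trained feature selector: keeps only the
    top-scoring features and masks the rest before the authenticator runs."""

    def __init__(self, importance, keep_ratio=0.25):
        k = max(1, int(len(importance) * keep_ratio))
        self.keep = np.argsort(importance)[-k:]    # indices of top-k features

    def __call__(self, x):
        filtered = np.zeros_like(x)
        filtered[..., self.keep] = x[..., self.keep]
        return filtered

# Usage with placeholder data: scores would come from an XAI method in practice.
rng = np.random.default_rng(0)
behavior_sample = rng.normal(size=16)   # e.g. behavioral biometric features
importance = rng.random(16)             # assumed per-feature XAI scores
filt = FeatureSelectorFilter(importance)
print(np.count_nonzero(filt(behavior_sample)))  # only 4 features pass through
```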
- Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators [49.52538232104449]
It is becoming increasingly imperative to design robust ML defenses.
Recent work has found that many defenses that initially resist state-of-the-art attacks can be broken by an adaptive adversary.
We take steps to simplify the design of defenses and argue that white-box defenses should eschew randomness when possible.
arXiv Detail & Related papers (2023-02-27T01:33:31Z)
- Learning Near-Optimal Intrusion Responses Against Dynamic Attackers [0.0]
We study automated intrusion response and formulate the interaction between an attacker and a defender as an optimal stopping game.
To obtain near-optimal defender strategies, we develop a fictitious self-play algorithm that learns Nash equilibria through stochastic approximation.
We argue that this approach can produce effective defender strategies for a practical IT infrastructure.
arXiv Detail & Related papers (2023-01-11T16:36:24Z)
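The entry above pairs a game formulation with fictitious self-play. Below is a generic fictitious-play loop on a toy zero-sum matrix game, sketched only to show how empirical best responses approximate a Nash equilibrium; the payoff matrix is an assumption, and the paper's optimal-stopping formulation and stochastic-approximation details are not reproduced.

```python
import numpy as np

def fictitious_play(payoff, iters=5000):
    """Fictitious play on a zero-sum matrix game: each player best-responds
    to the opponent's empirical mixture; the mixtures approximate a NE."""
    m, n = payoff.shape
    row_counts, col_counts = np.ones(m), np.ones(n)  # uniform initial beliefs
    for _ in range(iters):
        row_br = np.argmax(payoff @ (col_counts / col_counts.sum()))
        col_br = np.argmin((row_counts / row_counts.sum()) @ payoff)
        row_counts[row_br] += 1
        col_counts[col_br] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Toy attacker (rows) vs. defender (columns) payoff matrix -- assumed values
# with a matching-pennies structure, whose NE is the 50/50 mixed strategy.
payoff = np.array([[ 1.0, -1.0],
                   [-1.0,  1.0]])
attacker, defender = fictitious_play(payoff)
print("approximate NE mixtures:", attacker.round(3), defender.round(3))
```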
- Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack [96.50202709922698]
A practical evaluation method should be convenient (i.e., parameter-free), efficient (i.e., fewer iterations) and reliable.
We propose a parameter-free Adaptive Auto Attack (A$^3$) evaluation method which addresses the efficiency and reliability in a test-time-training fashion.
arXiv Detail & Related papers (2022-03-10T04:53:54Z)
- The Art of Manipulation: Threat of Multi-Step Manipulative Attacks in Security Games [8.87104231451079]
This paper studies the problem of multi-step manipulative attacks in Stackelberg security games.
A clever attacker attempts to orchestrate its attacks over multiple time steps to mislead the defender's learning of the attacker's behavior.
This attack manipulation eventually influences the defender's patrol strategy towards the attacker's benefit.
arXiv Detail & Related papers (2022-02-27T18:58:15Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack agnostic, i.e. it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z)
- Harnessing adversarial examples with a surprisingly simple defense [47.64219291655723]
I introduce a very simple method to defend against adversarial examples.
The basic idea is to raise the slope of the ReLU function at test time.
Experiments over MNIST and CIFAR-10 datasets demonstrate the effectiveness of the proposed defense.
arXiv Detail & Related papers (2020-04-26T03:09:42Z)
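The last entry's defense is simple enough to sketch directly: keep the trained weights but use a steeper ReLU at inference. The tiny random network and the slope values below are assumptions for illustration only.

```python
import numpy as np

def relu(x, slope=1.0):
    """Standard ReLU for slope=1.0; a slope > 1.0 'raises the slope' at test
    time, which the entry above reports as a defense against adversarial
    examples on MNIST and CIFAR-10."""
    return slope * np.maximum(0.0, x)

# A tiny assumed two-layer network; weights and biases are random placeholders.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def predict(x, slope=1.0):
    return int(np.argmax(W2 @ relu(W1 @ x + b1, slope=slope) + b2))

x = rng.normal(size=4)
print(predict(x, slope=1.0))   # training-time behavior (standard ReLU)
print(predict(x, slope=2.0))   # test-time defense: same weights, steeper ReLU
```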
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.