Evaluating Attacker Risk Behavior in an Internet of Things Ecosystem
- URL: http://arxiv.org/abs/2109.11592v1
- Date: Thu, 23 Sep 2021 18:53:41 GMT
- Title: Evaluating Attacker Risk Behavior in an Internet of Things Ecosystem
- Authors: Erick Galinkin and John Carter and Spiros Mancoridis
- Abstract summary: In cybersecurity, attackers range from brash, unsophisticated script kiddies to stealthy, patient advanced persistent threats.
This work explores how an attacker's risk-seeking or risk-averse behavior affects their operations against detection-optimizing defenders.
- Score: 2.958532752589616
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In cybersecurity, attackers range from brash, unsophisticated script kiddies
and cybercriminals to stealthy, patient advanced persistent threats. When
modeling these attackers, we can observe that they demonstrate different
risk-seeking and risk-averse behaviors. This work explores how an attacker's
risk-seeking or risk-averse behavior affects their operations against
detection-optimizing defenders in an Internet of Things ecosystem. Using an
evaluation framework built on real, parameterizable malware, we develop a game
played by a defender against attackers whose malware suite is parameterized to
be either more aggressive or more stealthy. The outcomes are evaluated under
exponential utility according to each attacker's willingness to accept risk. We
find that against a defender who must choose a
single strategy up front, risk-seeking attackers gain more actual utility than
risk-averse attackers, particularly in cases where the defender is better
equipped than the two attackers anticipate. Additionally, we empirically
confirm that high-risk, high-reward scenarios are more beneficial to
risk-seeking attackers like cybercriminals, while low-risk, low-reward
scenarios are more beneficial to risk-averse attackers like advanced persistent
threats.
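The exponential utility model referenced in the abstract can be made concrete. Below is a minimal Python sketch, assuming the standard exponential (CARA) utility form U(x) = (1 - e^(-a*x)) / a, where a > 0 models a risk-averse attacker and a < 0 a risk-seeking one; the payoff and probability numbers are illustrative placeholders, not values from the paper.

```python
import math

def exp_utility(payoff, a):
    """Exponential (CARA) utility: a > 0 is risk-averse, a < 0 is risk-seeking,
    and a near 0 recovers risk-neutral (linear) utility."""
    if abs(a) < 1e-12:
        return payoff
    return (1.0 - math.exp(-a * payoff)) / a

def expected_utility(lottery, a):
    """Expected utility of a lottery given as (probability, payoff) pairs."""
    return sum(p * exp_utility(x, a) for p, x in lottery)

# Illustrative attacker lotteries against a defender committed to one strategy:
# an aggressive campaign is high-risk/high-reward, a stealthy one low-risk/low-reward.
aggressive = [(0.4, 10.0), (0.6, 0.0)]   # 40% chance of a large payoff
stealthy = [(0.9, 3.0), (0.1, 0.0)]      # 90% chance of a small payoff

for label, a in [("risk-seeking (a = -0.3)", -0.3), ("risk-averse (a = +0.3)", 0.3)]:
    u_aggr = expected_utility(aggressive, a)
    u_stealth = expected_utility(stealthy, a)
    preferred = "aggressive" if u_aggr > u_stealth else "stealthy"
    print(f"{label}: aggressive={u_aggr:.3f}, stealthy={u_stealth:.3f} -> prefers {preferred}")
```

Under these assumed numbers, the risk-seeking attacker prefers the aggressive, high-variance campaign while the risk-averse attacker prefers the stealthy one, mirroring the high-risk, high-reward versus low-risk, low-reward finding described above.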
Related papers
- SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks [53.28390057407576]
Modern NLP models are often trained on public datasets drawn from diverse sources.
Data poisoning attacks can manipulate the model's behavior in ways engineered by the attacker.
Several strategies have been proposed to mitigate the risks associated with backdoor attacks.
arXiv Detail & Related papers (2024-05-19T14:50:09Z)
- Improving behavior based authentication against adversarial attack using XAI [3.340314613771868]
We propose an eXplainable AI (XAI) based defense strategy against adversarial attacks in such scenarios.
A feature selector, trained with our method, can be used as a filter in front of the original authenticator.
We demonstrate that our XAI based defense strategy is effective against adversarial attacks and outperforms other defense strategies.
arXiv Detail & Related papers (2024-02-26T09:29:05Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals the threats in this practical scenario that backdoor attacks can remain effective even after defenses.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce ε-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find ε-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Widen The Backdoor To Let More Attackers In [24.540853975732922]
We investigate the scenario of a multi-agent backdoor attack, where multiple non-colluding attackers craft and insert triggered samples in a shared dataset.
We discover a clear backfiring phenomenon: increasing the number of attackers shrinks each attacker's attack success rate (ASR).
We then exploit this phenomenon to minimize the collective ASR of attackers and maximize the defender's robustness accuracy.
arXiv Detail & Related papers (2021-10-09T13:53:57Z)
- Are Adversarial Examples Created Equal? A Learnable Weighted Minimax Risk for Robustness under Non-uniform Attacks [70.11599738647963]
Adversarial Training is one of the few defenses that withstand strong attacks.
Traditional defense mechanisms assume a uniform attack over the examples according to the underlying data distribution.
We present a weighted minimax risk optimization that defends against non-uniform attacks.
arXiv Detail & Related papers (2020-10-24T21:20:35Z)
- Revisiting Adversarially Learned Injection Attacks Against Recommender Systems [6.920518936054493]
This paper revisits the adversarially-learned injection attack problem.
We show that exactly solving the optimization problem for generating fake users could lead to a much larger impact.
arXiv Detail & Related papers (2020-08-11T17:30:02Z)
- Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware Detection [8.551227913472632]
We propose a new attack approach, named mixture of attacks, to perturb a malware example without ruining its malicious functionality.
This naturally leads to a new instantiation of adversarial training, which is further geared to enhancing the ensemble of deep neural networks.
We evaluate defenses using Android malware detectors against 26 different attacks upon two practical datasets.
arXiv Detail & Related papers (2020-06-30T05:56:33Z)
- Arms Race in Adversarial Malware Detection: A Survey [33.8941961394801]
Malicious software (malware) is a major cyber threat that has to be tackled with Machine Learning (ML) techniques.
ML is vulnerable to attacks known as adversarial examples.
Knowing the defender's feature set is critical to the success of transfer attacks.
The effectiveness of adversarial training depends on the defender's capability in identifying the most powerful attack.
arXiv Detail & Related papers (2020-05-24T07:20:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.