Combating Informational Denial-of-Service (IDoS) Attacks: Modeling and
Mitigation of Attentional Human Vulnerability
- URL: http://arxiv.org/abs/2108.08255v1
- Date: Wed, 4 Aug 2021 05:09:32 GMT
- Title: Combating Informational Denial-of-Service (IDoS) Attacks: Modeling and
Mitigation of Attentional Human Vulnerability
- Authors: Linan Huang and Quanyan Zhu
- Abstract summary: IDoS attacks deplete the cognition resources of human operators to prevent humans from identifying the real attacks hidden among feints.
This work aims to formally define IDoS attacks, quantify their consequences, and develop human-assistive security technologies to mitigate the severity level and risks of IDoS attacks.
- Score: 28.570086492742046
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work proposes a new class of proactive attacks called the Informational
Denial-of-Service (IDoS) attacks that exploit the attentional human
vulnerability. By generating a large volume of feints, IDoS attacks deplete the
cognition resources of human operators to prevent humans from identifying the
real attacks hidden among feints. This work aims to formally define IDoS
attacks, quantify their consequences, and develop human-assistive security
technologies to mitigate the severity level and risks of IDoS attacks. To this
end, we model the feint and real attacks' sequential arrivals with category
labels as a semi-Markov process. The assistive technology strategically manages
human attention by highlighting selective alerts periodically to prevent the
distraction of other alerts. A data-driven approach is applied to evaluate
human performance under different Attention Management (AM) strategies. Under a
representative special case, we establish the computational equivalency between
two dynamic programming representations to simplify the theoretical computation
and the online learning. A case study corroborates the effectiveness of the
learning framework. The numerical results illustrate how AM strategies can
alleviate the severity level and the risk of IDoS attacks. Furthermore, we
characterize the fundamental limits of the minimum severity level under all AM
strategies and the maximum length of the inspection period to reduce the IDoS
risks.
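To make the modeling and mitigation pipeline concrete, below is a minimal simulation sketch, not the authors' implementation: alert categories (feint or real) evolve as a two-state chain, exponential dwell times stand in for the semi-Markov sojourn distributions, and the AM strategy highlights one alert per inspection period of k arrivals. The transition probabilities, arrival rate, and the severity proxy (fraction of real attacks that arrive unhighlighted) are all assumptions for illustration.
```python
import random

random.seed(0)

STATES = ["feint", "real"]
# Assumed category-transition probabilities of the underlying chain.
TRANS = {"feint": {"feint": 0.85, "real": 0.15},
         "real":  {"feint": 0.90, "real": 0.10}}

def next_state(s):
    return random.choices(STATES, weights=[TRANS[s][t] for t in STATES])[0]

def severity(n_alerts=100_000, k=5, rate=1.0):
    """Fraction of real attacks arriving while the operator's attention is
    elsewhere, under an AM strategy that highlights every k-th alert."""
    state, reals, missed = "feint", 0, 0
    for i in range(n_alerts):
        _dwell = random.expovariate(rate)  # where semi-Markov holding times would enter
        if state == "real":
            reals += 1
            if i % k != 0:                 # this real attack was not highlighted
                missed += 1
        state = next_state(state)
    return missed / max(reals, 1)

for k in (1, 2, 5, 10):
    print(f"inspection period k={k}: severity ~ {severity(k=k):.3f}")
```
In this toy model the severity proxy shrinks as the inspection period shortens, mirroring the trade-off between attention cost and IDoS risk that the paper's AM strategies navigate.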
Related papers
- SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks [53.28390057407576]
Modern NLP models are often trained on public datasets drawn from diverse sources.
Data poisoning attacks can manipulate the model's behavior in ways engineered by the attacker.
Several strategies have been proposed to mitigate the risks associated with backdoor attacks.
arXiv Detail & Related papers (2024-05-19T14:50:09Z)
- A Dual-Tier Adaptive One-Class Classification IDS for Emerging Cyberthreats [3.560574387648533]
We propose a one-class classification-driven IDS system structured on two tiers.
The first tier distinguishes between normal activities and attacks/threats, while the second tier determines if the detected attack is known or unknown.
This model not only identifies unseen attacks but also uses them for retraining by clustering the unseen attacks.
arXiv Detail & Related papers (2024-03-17T12:26:30Z)
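To illustrate the two-tier structure described in the dual-tier IDS entry above (the actual models, features, and clustering-based retraining are not specified in the summary), here is a hedged scikit-learn sketch using one-class SVMs on synthetic data:
```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 4))         # benign traffic features (synthetic)
known_attacks = rng.normal(4.0, 1.0, size=(200, 4))  # previously seen attacks (synthetic)

tier1 = OneClassSVM(nu=0.05).fit(normal)             # tier 1: normal vs. everything else
tier2 = OneClassSVM(nu=0.05).fit(known_attacks)      # tier 2: known vs. unknown attack

def classify(x):
    x = x.reshape(1, -1)
    if tier1.predict(x)[0] == 1:   # inlier of the normal model
        return "normal"
    # Tier 2 only sees samples that tier 1 flagged as attacks/threats.
    return "known attack" if tier2.predict(x)[0] == 1 else "unknown attack"

print(classify(rng.normal(0.0, 1.0, size=4)))  # likely "normal"
print(classify(rng.normal(4.0, 1.0, size=4)))  # likely "known attack"
print(classify(rng.normal(9.0, 1.0, size=4)))  # likely "unknown attack"
```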
We aim to provide a thorough understanding of the concept of physical adversarial attacks, analyzing their key characteristics and distinguishing features.
Our article delves into various physical adversarial attack methods, categorized according to their target tasks in different applications.
We assess the performance of these attack methods in terms of their effectiveness, stealthiness, and robustness.
arXiv Detail & Related papers (2023-08-11T15:02:19Z)
- Adversarial training for tabular data with attack propagation [1.9826772682131455]
We propose a new form of adversarial training where attacks are propagated between the two spaces in the training loop.
We show that our method can prevent about 30% performance drops under moderate attacks and is essential under very aggressive attacks.
arXiv Detail & Related papers (2023-07-28T17:12:46Z)
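The summary above does not specify the two spaces between which attacks are propagated, so the following is only a generic adversarial-training loop on tabular features, a plain NumPy logistic-regression sketch with FGSM-style perturbations and an assumed epsilon, not the paper's method:
```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 6))
w_true = rng.normal(size=6)
y = (X @ w_true > 0).astype(float)  # synthetic fraud/not-fraud labels

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))
w = np.zeros(6)
for _ in range(200):
    # FGSM-style perturbation of the inputs toward higher loss (eps = 0.1 assumed).
    grad_x = (sigmoid(X @ w) - y)[:, None] * w[None, :]
    X_adv = X + 0.1 * np.sign(grad_x)
    # Standard adversarial-training step: fit on the perturbed batch.
    grad_w = X_adv.T @ (sigmoid(X_adv @ w) - y) / len(y)
    w -= 0.5 * grad_w

acc = ((sigmoid(X @ w) > 0.5) == y.astype(bool)).mean()
print(f"clean accuracy after adversarial training: {acc:.2f}")
```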
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Towards Automated Classification of Attackers' TTPs by combining NLP with ML Techniques [77.34726150561087]
We evaluate and compare different Natural Language Processing (NLP) and machine learning techniques used for security information extraction in research.
Based on our investigations we propose a data processing pipeline that automatically classifies unstructured text according to attackers' tactics and techniques.
arXiv Detail & Related papers (2022-07-18T09:59:21Z)
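As a hedged illustration of what such a text-classification pipeline might look like (the report snippets, tactic labels, and model choices below are invented and far coarser than ATT&CK-style taxonomies), a minimal scikit-learn sketch:
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented report snippets and coarse tactic labels (illustration only).
reports = [
    "attacker harvested credentials via a spearphishing email",
    "malware established persistence through a registry run key",
    "data was exfiltrated over an encrypted https channel",
    "a phishing link delivered a credential-stealing page",
    "the implant added a scheduled task to survive reboots",
    "stolen files were staged and uploaded to cloud storage",
]
tactics = ["initial-access", "persistence", "exfiltration",
           "initial-access", "persistence", "exfiltration"]

# TF-IDF features feed a linear classifier that maps text to a tactic.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(reports, tactics)
print(clf.predict(["files were copied out over an https tunnel"]))
```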
- RADAMS: Resilient and Adaptive Alert and Attention Management Strategy against Informational Denial-of-Service (IDoS) Attacks [28.570086492742046]
We study IDoS attacks that generate a large volume of feint attacks to overload human operators and hide real attacks among feints.
We develop a Resilient and Adaptive Data-driven alert and Attention Management Strategy (RADAMS).
RADAMS uses reinforcement learning to achieve a customized and transferable design for various human operators and evolving IDoS attacks.
arXiv Detail & Related papers (2021-11-01T19:58:29Z)
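A heavily simplified sketch of the reinforcement-learning idea behind the RADAMS entry above, tabular Q-learning over invented states, actions, rewards, and dynamics; RADAMS itself is data-driven and considerably more general:
```python
import random

random.seed(2)
ACTIONS = [1, 3, 5]                 # candidate inspection-period lengths (assumed)
STATES = ["feint", "real"]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1

def reward(state, k):
    # Assumed stand-in: a short period catches a real attack but costs attention.
    return (2.0 if state == "real" and k == 1 else 0.0) - 0.2 / k

state = "feint"
for _ in range(5000):
    if random.random() < eps:
        a = random.choice(ACTIONS)                     # explore
    else:
        a = max(ACTIONS, key=lambda x: Q[(state, x)])  # exploit
    r = reward(state, a)
    nxt = random.choices(STATES, weights=[0.85, 0.15])[0]
    Q[(state, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in ACTIONS) - Q[(state, a)])
    state = nxt

for s in STATES:
    best = max(ACTIONS, key=lambda a: Q[(s, a)])
    print(f"learned inspection period when the current alert is {s}: {best}")
```
In this toy the learned policy shortens the inspection period when the current alert is real and lengthens it for feints, the qualitative behavior an adaptive attention-management strategy should exhibit.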
- Balancing detectability and performance of attacks on the control channel of Markov Decision Processes [77.66954176188426]
We investigate the problem of designing optimal stealthy poisoning attacks on the control channel of Markov decision processes (MDPs).
This research is motivated by the research community's recent interest in adversarial and poisoning attacks applied to MDPs and reinforcement learning (RL) methods.
arXiv Detail & Related papers (2021-09-15T09:13:10Z)
- Adversarial Attack Attribution: Discovering Attributable Signals in Adversarial ML Attacks [0.7883722807601676]
Even production systems, such as self-driving cars and ML-as-a-service offerings, are susceptible to adversarial inputs.
Can perturbed inputs be attributed to the methods used to generate the attack?
We introduce the concept of adversarial attack attribution and create a simple supervised learning experimental framework to examine the feasibility of discovering attributable signals in adversarial attacks.
arXiv Detail & Related papers (2021-01-08T08:16:41Z)
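A toy, fully synthetic illustration of the attribution question posed above, in which two invented "attack methods" leave different perturbation signatures and a supervised model tries to recover which method produced each input (nothing here reproduces the paper's framework):
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
clean = rng.normal(size=(600, 10))
# Two invented attack "methods" with distinct perturbation signatures.
method_a = clean[:300] + 0.3 * np.sign(rng.normal(size=(300, 10)))  # sign-step signature
method_b = clean[300:] + rng.normal(scale=0.3, size=(300, 10))      # Gaussian signature
X = np.vstack([method_a, method_b])
y = np.array([0] * 300 + [1] * 300)  # label = which method produced the input

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
print(f"attribution accuracy on held-out perturbed inputs: {clf.score(Xte, yte):.2f}")
```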
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to User and Entity Behaviour Analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.