RADAMS: Resilient and Adaptive Alert and Attention Management Strategy
against Informational Denial-of-Service (IDoS) Attacks
- URL: http://arxiv.org/abs/2111.03463v1
- Date: Mon, 1 Nov 2021 19:58:29 GMT
- Title: RADAMS: Resilient and Adaptive Alert and Attention Management Strategy
against Informational Denial-of-Service (IDoS) Attacks
- Authors: Linan Huang and Quanyan Zhu
- Abstract summary: We study IDoS attacks that generate a large volume of feint attacks to overload human operators and hide real attacks among feints.
We develop a Resilient and Adaptive Data-driven alert and Attention Management Strategy (RADAMS).
RADAMS uses reinforcement learning to achieve a customized and transferable design for various human operators and evolving IDoS attacks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Attacks exploiting human attentional vulnerability have posed severe threats
to cybersecurity. In this work, we identify and formally define a new type of
proactive attentional attacks called Informational Denial-of-Service (IDoS)
attacks that generate a large volume of feint attacks to overload human
operators and hide real attacks among feints. We incorporate human factors
(e.g., levels of expertise, stress, and efficiency) and empirical results
(e.g., the Yerkes-Dodson law and the sunk cost fallacy) to model the operators'
attention dynamics and their decision-making processes along with the real-time
alert monitoring and inspection.
To help human operators dismiss feints and escalate real attacks in a timely
and accurate manner, we develop a Resilient and Adaptive Data-driven
alert and Attention Management Strategy (RADAMS) that de-emphasizes alerts
selectively based on the alerts' observable features. RADAMS uses reinforcement
learning to achieve a customized and transferable design for various human
operators and evolving IDoS attacks.
The integrated modeling and theoretical analysis lead to the Product
Principle of Attention (PPoA), fundamental limits, and the tradeoff among
crucial human and economic factors. Experimental results corroborate that the
proposed strategy outperforms the default strategy and can reduce the IDoS risk
by as much as 20%. Moreover, the strategy is resilient to large variations in
costs, attack frequencies, and human attention capacities. We have identified
interesting phenomena such as attentional risk equivalency, attacker's dilemma,
and the half-truth optimal attack strategy.
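The feature-conditioned de-emphasis decision that RADAMS learns can be illustrated with a minimal bandit-style reinforcement-learning sketch. This is not the paper's actual model: the feature categories, reward values, and attack probabilities below are invented assumptions purely for illustration.

```python
import random

# Illustrative sketch only: a tabular learner that decides whether to
# de-emphasize an incoming alert based on one observable feature.
# All categories, probabilities, and rewards are assumed, not from RADAMS.

FEATURES = ["low_severity", "medium_severity", "high_severity"]
ACTIONS = ["de-emphasize", "escalate"]

# Hypothetical probability that an alert with this feature is a real
# attack rather than a feint.
REAL_ATTACK_PROB = {"low_severity": 0.05, "medium_severity": 0.3, "high_severity": 0.8}

def reward(feature: str, action: str, rng: random.Random) -> float:
    """Reward escalating real attacks and dismissing feints; penalize
    missed attacks and wasted operator attention."""
    is_real = rng.random() < REAL_ATTACK_PROB[feature]
    if action == "escalate":
        return 1.0 if is_real else -0.2   # escalating a feint wastes attention
    return -1.0 if is_real else 0.1       # de-emphasizing a real attack is costly

def train(episodes: int = 6000, epsilon: float = 0.2, seed: int = 0):
    rng = random.Random(seed)
    q = {(f, a): 0.0 for f in FEATURES for a in ACTIONS}
    n = {(f, a): 0 for f in FEATURES for a in ACTIONS}
    for _ in range(episodes):
        f = rng.choice(FEATURES)
        if rng.random() < epsilon:                       # explore
            a = rng.choice(ACTIONS)
        else:                                            # exploit
            a = max(ACTIONS, key=lambda act: q[(f, act)])
        n[(f, a)] += 1
        # Incremental sample-mean update of the action-value estimate.
        q[(f, a)] += (reward(f, a, rng) - q[(f, a)]) / n[(f, a)]
    return q

q = train()
policy = {f: max(ACTIONS, key=lambda a: q[(f, a)]) for f in FEATURES}
print(policy)
```

Under these assumed rewards, the learner de-emphasizes low-severity alerts and escalates the rest; the paper's full model additionally accounts for attention dynamics, human factors, and the costs underlying the PPoA and the reported 20% risk reduction.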
Related papers
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z)
- Embodied Laser Attack: Leveraging Scene Priors to Achieve Agent-based Robust Non-contact Attacks [13.726534285661717]
This paper introduces the Embodied Laser Attack (ELA), a novel framework that dynamically tailors non-contact laser attacks.
For the perception module, ELA has innovatively developed a local perspective transformation network, based on the intrinsic prior knowledge of traffic scenes.
For the decision and control module, ELA trains an attack agent with data-driven reinforcement learning instead of adopting time-consuming algorithms.
arXiv Detail & Related papers (2023-12-15T06:16:17Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Physical Adversarial Attacks For Camera-based Smart Systems: Current Trends, Categorization, Applications, Research Challenges, and Future Outlook [2.1771693754641013]
We aim to provide a thorough understanding of the concept of physical adversarial attacks, analyzing their key characteristics and distinguishing features.
Our article delves into various physical adversarial attack methods, categorized according to their target tasks in different applications.
We assess the performance of these attack methods in terms of their effectiveness, stealthiness, and robustness.
arXiv Detail & Related papers (2023-08-11T15:02:19Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Targeted Attack on Deep RL-based Autonomous Driving with Learned Visual Patterns [18.694795507945603]
Recent studies demonstrated the vulnerability of control policies learned through deep reinforcement learning against adversarial attacks.
This paper investigates the feasibility of targeted attacks through visually learned patterns placed on physical objects in the environment.
arXiv Detail & Related papers (2021-09-16T04:59:06Z)
- Combating Informational Denial-of-Service (IDoS) Attacks: Modeling and Mitigation of Attentional Human Vulnerability [28.570086492742046]
IDoS attacks deplete the cognition resources of human operators to prevent humans from identifying the real attacks hidden among feints.
This work aims to formally define IDoS attacks, quantify their consequences, and develop human-assistive security technologies to mitigate the severity level and risks of IDoS attacks.
arXiv Detail & Related papers (2021-08-04T05:09:32Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Adversarial Attack Attribution: Discovering Attributable Signals in Adversarial ML Attacks [0.7883722807601676]
Even production systems, such as self-driving cars and ML-as-a-service offerings, are susceptible to adversarial inputs.
Can perturbed inputs be attributed to the methods used to generate the attack?
We introduce the concept of adversarial attack attribution and create a simple supervised learning experimental framework to examine the feasibility of discovering attributable signals in adversarial attacks.
arXiv Detail & Related papers (2021-01-08T08:16:41Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to User and Entity Behaviour Analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.