Towards Proactive Defense Against Cyber Cognitive Attacks
- URL: http://arxiv.org/abs/2510.15801v1
- Date: Fri, 17 Oct 2025 16:25:47 GMT
- Title: Towards Proactive Defense Against Cyber Cognitive Attacks
- Authors: Bonnie Rushing, Mac-Rufus Umeokolo, Shouhuai Xu
- Abstract summary: Cyber cognitive attacks leverage disruptive innovations (DIs) to exploit psychological biases and manipulate decision-making processes. New technologies, such as AI-driven disinformation and synthetic media, have accelerated the scale and sophistication of these threats. We introduce a novel predictive methodology for forecasting the emergence of DIs and their malicious uses in cognitive attacks.
- Score: 3.357544650969485
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cyber cognitive attacks leverage disruptive innovations (DIs) to exploit psychological biases and manipulate decision-making processes. Emerging technologies, such as AI-driven disinformation and synthetic media, have accelerated the scale and sophistication of these threats. Prior studies primarily categorize current cognitive attack tactics, lacking predictive mechanisms to anticipate future DIs and their malicious use in cognitive attacks. This paper addresses these gaps by introducing a novel predictive methodology for forecasting the emergence of DIs and their malicious uses in cognitive attacks. We identify trends in adversarial tactics and propose proactive defense strategies.
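The abstract does not detail how the forecasting works. As a hedged illustration of the kind of trend extrapolation a predictive methodology for emerging DIs might build on (the function names and synthetic counts below are assumptions, not the paper's method), one can fit a log-linear growth model to yearly activity counts for a technology and extrapolate:

```python
# Hypothetical sketch: extrapolating the growth of a disruptive innovation
# from yearly activity counts (e.g., papers or exploit reports mentioning it).
# This is NOT the paper's methodology, only a log-linear trend illustration.
import math

def fit_log_linear(years, counts):
    """Least-squares fit of log(count) = a + b * year; returns (a, b)."""
    ys = [math.log(c) for c in counts]
    n = len(years)
    mx = sum(years) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(years, ys)) \
        / sum((x - mx) ** 2 for x in years)
    a = my - b * mx
    return a, b

def forecast(years, counts, future_year):
    """Extrapolate the fitted trend to a future year."""
    a, b = fit_log_linear(years, counts)
    return math.exp(a + b * future_year)

# Synthetic example: activity roughly doubling every year.
years = [2020, 2021, 2022, 2023, 2024]
counts = [3, 6, 13, 24, 50]
print(forecast(years, counts, 2025))  # continues the doubling trend
```

A real system would of course combine such signals with qualitative indicators of malicious adoption; the sketch only shows the quantitative core.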
Related papers
- Techniques of Modern Attacks [51.56484100374058]
Advanced Persistent Threats (APTs) represent a complex method of attack aimed at specific targets. I will investigate both the attack life cycle and cutting-edge detection and defense strategies proposed in recent academic research. I aim to highlight the strengths and limitations of each approach and propose more adaptive APT mitigation strategies.
arXiv Detail & Related papers (2026-01-19T22:15:25Z) - Guarding Against Malicious Biased Threats (GAMBiT): Experimental Design of Cognitive Sensors and Triggers with Behavioral Impact Analysis [17.809804870177192]
GAMBiT embeds insights from cognitive science into cyber environments through cognitive triggers. GAMBiT establishes a new paradigm in which the attacker's mind becomes part of the battlefield.
arXiv Detail & Related papers (2025-11-27T02:18:03Z) - Quantifying Loss Aversion in Cyber Adversaries via LLM Analysis [2.798191832420146]
IARPA's ReSCIND program seeks to infer, defend against, and exploit attacker cognitive traits. In this paper, we present a novel methodology that leverages large language models (LLMs) to extract quantifiable insights into the cognitive bias of loss aversion from hacker behavior.
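The quantity this line of work tries to estimate is the prospect-theory loss-aversion coefficient λ. As a hedged, dependency-free illustration (this is not the paper's LLM pipeline; the function and the synthetic choices below are assumptions), λ can be bracketed from observed accept/reject decisions over 50/50 gambles, since a loss-averse agent accepts (gain g, loss l) only when g > λl:

```python
# Hedged illustration of the loss-aversion coefficient lambda from
# prospect theory, estimated from binary gamble choices. The paper's
# actual approach (LLM analysis of hacker behavior) is not shown here.

def estimate_loss_aversion(choices):
    """choices: list of (gain, loss, accepted) for 50/50 gambles.
    Returns lambda estimated as the midpoint between the largest
    rejected and the smallest accepted gain/loss ratio."""
    accepted = [g / l for g, l, ok in choices if ok]
    rejected = [g / l for g, l, ok in choices if not ok]
    if not accepted or not rejected:
        return None  # cannot bracket the indifference point
    return (min(accepted) + max(rejected)) / 2

# Synthetic behavior consistent with lambda near 2
# (losses loom roughly twice as large as gains):
choices = [
    (100, 100, False),  # rejects even odds
    (150, 100, False),
    (220, 100, True),   # accepts once gain exceeds ~2x the loss
    (300, 100, True),
]
print(estimate_loss_aversion(choices))  # midpoint of 1.5 and 2.2
```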
arXiv Detail & Related papers (2025-08-18T05:51:30Z) - A Case Study on the Use of Representativeness Bias as a Defense Against Adversarial Cyber Threats [1.74585489563148]
This paper takes a first step towards psychology-informed, active defense strategies. Using capture-the-flag events, we create realistic challenges that tap into a particular cognitive bias: representativeness. This study finds that this bias can be triggered to thwart hacking attempts and divert hackers into non-vulnerable attack paths.
arXiv Detail & Related papers (2025-04-28T20:30:28Z) - Modern DDoS Threats and Countermeasures: Insights into Emerging Attacks and Detection Strategies [49.57278643040602]
Distributed Denial of Service (DDoS) attacks persist as significant threats to online services and infrastructure. This paper offers a comprehensive survey of emerging DDoS attacks and detection strategies over the past decade.
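Many of the detection strategies such surveys cover build on per-source rate analysis. As a hedged, minimal sketch (not taken from the survey; the class name and thresholds are assumptions), a sliding-window detector flags sources whose request count within a window exceeds a limit:

```python
# Hedged sketch of a per-source sliding-window rate detector, the kind
# of primitive many DDoS detection strategies build on.
from collections import defaultdict, deque

class RateDetector:
    def __init__(self, window_seconds=10, max_requests=100):
        self.window = window_seconds
        self.limit = max_requests
        self.events = defaultdict(deque)  # source -> recent timestamps

    def observe(self, source, timestamp):
        """Record one request; return True if the source looks abusive."""
        q = self.events[source]
        q.append(timestamp)
        # Evict timestamps that fell out of the sliding window.
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        return len(q) > self.limit

det = RateDetector(window_seconds=1, max_requests=5)
# A source sending 10 requests per second trips the 5-per-second limit.
flags = [det.observe("10.0.0.1", t / 10) for t in range(20)]
print(any(flags))
```

Real deployments layer this with protocol-aware and volumetric signals, but the windowed counter is the common core.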
arXiv Detail & Related papers (2025-02-27T11:22:25Z) - Intelligent Attacks on Cyber-Physical Systems and Critical Infrastructures [0.0]
This chapter provides an overview of the evolving landscape of attacks in cyber-physical systems and critical infrastructures. It highlights the possible use of Artificial Intelligence (AI) algorithms to develop intelligent cyberattacks.
arXiv Detail & Related papers (2025-01-22T09:54:58Z) - A Comprehensive Review of Adversarial Attacks on Machine Learning [0.5104264623877593]
This research provides a comprehensive overview of adversarial attacks on AI and ML models, exploring various attack types, techniques, and their potential harms. To gain practical insights, we employ the Adversarial Robustness Toolbox (ART) library to simulate these attacks on real-world use cases, such as self-driving cars.
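Rather than reproduce ART's API from memory, here is a hedged, dependency-free sketch of the fast gradient sign method (FGSM), one of the evasion attacks ART implements, applied to a toy logistic-regression classifier (the weights and inputs below are made up for illustration):

```python
# Hedged FGSM sketch against a toy logistic-regression model. Real
# experiments would use ART's FastGradientMethod on an actual model;
# this only shows the mechanics of the attack.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """P(y=1 | x) under a logistic-regression model (w, b)."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """Perturb x by eps in the sign of the loss gradient.
    For logistic loss, dL/dx_i = (p - y) * w_i."""
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

w, b = [2.0, -1.5], 0.1
x, y = [0.8, 0.3], 1                 # correctly classified positive example
x_adv = fgsm(w, b, x, y, eps=0.6)
print(predict(w, b, x), predict(w, b, x_adv))  # confidence drops below 0.5
```

The perturbation flips the prediction while moving each feature by only eps, which is exactly the property that makes evasion attacks on perception systems (such as self-driving cars) concerning.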
arXiv Detail & Related papers (2024-12-16T02:27:54Z) - The Shadow of Fraud: The Emerging Danger of AI-powered Social Engineering and its Possible Cure [30.431292911543103]
Social engineering (SE) attacks remain a significant threat to both individuals and organizations.
The advancement of Artificial Intelligence (AI) has potentially intensified these threats by enabling more personalized and convincing attacks.
This survey paper categorizes SE attack mechanisms, analyzes their evolution, and explores methods for measuring these threats.
arXiv Detail & Related papers (2024-07-22T17:37:31Z) - Investigating Human-Identifiable Features Hidden in Adversarial
Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z) - Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z) - Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behavioural analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.