Quantifying the Engagement Effectiveness of Cyber Cognitive Attacks: A Behavioral Metric for Disinformation Campaigns
- URL: http://arxiv.org/abs/2510.15805v1
- Date: Fri, 17 Oct 2025 16:34:46 GMT
- Title: Quantifying the Engagement Effectiveness of Cyber Cognitive Attacks: A Behavioral Metric for Disinformation Campaigns
- Authors: Bonnie Rushing, Shouhuai Xu
- Abstract summary: This paper presents a novel framework for measuring the engagement effectiveness of cognitive attacks by introducing a weighted interaction metric. Applying this model to real-world disinformation campaigns across social media platforms, we demonstrate how the metric captures not just reach but the behavioral depth of user engagement.
- Score: 3.7735437117876267
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As disinformation-driven cognitive attacks become increasingly sophisticated, the ability to quantify their impact is essential for advancing cybersecurity defense strategies. This paper presents a novel framework for measuring the engagement effectiveness of cognitive attacks by introducing a weighted interaction metric that accounts for both the type and volume of user engagement relative to the number of attacker-generated transmissions. Applying this model to real-world disinformation campaigns across social media platforms, we demonstrate how the metric captures not just reach but the behavioral depth of user engagement. Our findings provide new insights into the behavioral dynamics of cognitive warfare and offer actionable tools for researchers and practitioners seeking to assess and counter the spread of malicious influence online.
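The abstract does not reproduce the metric itself, but a minimal sketch of a weighted interaction metric of this kind, normalizing weighted engagement counts by the number of attacker-generated transmissions, might look like the following (the interaction types, weight values, and function names are illustrative assumptions, not the authors' definitions):

```python
# Minimal sketch of a weighted engagement-effectiveness metric.
# The weights below are illustrative; the paper's actual weighting
# of interaction types may differ.
WEIGHTS = {"view": 0.1, "like": 0.5, "comment": 1.0, "share": 2.0}

def engagement_effectiveness(counts: dict[str, int], transmissions: int) -> float:
    """Weighted user interactions per attacker-generated transmission."""
    if transmissions <= 0:
        raise ValueError("transmissions must be positive")
    weighted = sum(WEIGHTS.get(kind, 0.0) * n for kind, n in counts.items())
    return weighted / transmissions

# Example: a campaign of 40 posts drawing mixed engagement depths.
print(engagement_effectiveness(
    {"view": 10_000, "like": 800, "comment": 150, "share": 90}, transmissions=40))
```

Weighting shares and comments above likes and views is one way to capture behavioral depth rather than raw reach: two campaigns with equal impressions can score very differently if one provokes active amplification.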
Related papers
- Guarding Against Malicious Biased Threats (GAMBiT): Experimental Design of Cognitive Sensors and Triggers with Behavioral Impact Analysis [17.809804870177192]
GAMBiT embeds insights from cognitive science into cyber environments through cognitive triggers. GAMBiT establishes a new paradigm in which the attacker's mind becomes part of the battlefield.
arXiv Detail & Related papers (2025-11-27T02:18:03Z)
- Quantifying Loss Aversion in Cyber Adversaries via LLM Analysis [2.798191832420146]
IARPA's ReSCIND program seeks to infer, defend against, and exploit attacker cognitive traits. In this paper, we present a novel methodology that leverages large language models (LLMs) to extract quantifiable insights into the cognitive bias of loss aversion from hacker behavior.
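As a hedged illustration of how an LLM might be used to quantify such a bias (the prompt, model name, and 0-10 scoring scheme below are hypothetical, not the paper's methodology), one could ask an OpenAI-style chat client to rate logged attacker sessions:

```python
# Hypothetical sketch: prompting an LLM to rate loss aversion in an
# attacker session log. Prompt wording and scale are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "On a scale of 0 (none) to 10 (extreme), rate the loss aversion shown "
    "in this attacker session log. Reply with a single number.\n\nLog:\n{log}"
)

def loss_aversion_score(session_log: str) -> float:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": PROMPT.format(log=session_log)}],
    )
    return float(response.choices[0].message.content.strip())
```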
arXiv Detail & Related papers (2025-08-18T05:51:30Z)
- Comprehensive Survey on Adversarial Examples in Cybersecurity: Impacts, Challenges, and Mitigation Strategies [4.606106768645647]
Adversarial examples (AEs) pose a critical challenge to the robustness and reliability of deep learning-based systems. This paper provides a comprehensive review of the impact of AE attacks on key cybersecurity applications. We explore recent advancements in defense mechanisms, including gradient masking, adversarial training, and detection techniques.
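As a concrete instance of one defense this survey covers, here is a minimal PyTorch sketch of a single FGSM-based adversarial-training step (a generic textbook formulation, not code from the paper; `eps` and the [0, 1] input range are assumptions):

```python
# One FGSM adversarial-training step: craft perturbed inputs against the
# current model, then update the model on them.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, eps=0.03):
    # Craft an FGSM perturbation of the clean batch.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach().clamp(0.0, 1.0)

    # Train on the adversarial batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```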
arXiv Detail & Related papers (2024-12-16T01:54:07Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
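A minimal sketch of the extraction idea, assuming a pixel-level annotation mask is available (function and variable names are hypothetical, not the paper's pipeline):

```python
# Keep only the perturbation pixels inside an annotated region, isolating
# the human-identifiable component of an adversarial perturbation.
import numpy as np

def masked_perturbation(x: np.ndarray, x_adv: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """mask is 1 inside the annotated feature and 0 elsewhere."""
    return (x_adv - x) * mask
```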
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Physical Adversarial Attacks For Camera-based Smart Systems: Current Trends, Categorization, Applications, Research Challenges, and Future Outlook [2.1771693754641013]
We aim to provide a thorough understanding of the concept of physical adversarial attacks, analyzing their key characteristics and distinguishing features.
Our article delves into various physical adversarial attack methods, categorized according to their target tasks in different applications.
We assess the performance of these attack methods in terms of their effectiveness, stealthiness, and robustness.
arXiv Detail & Related papers (2023-08-11T15:02:19Z)
- SoK: Adversarial Evasion Attacks Practicality in NIDS Domain and the Impact of Dynamic Learning [0.6588840794922407]
Adversarial attacks aim to trick machine learning models into producing faulty predictions. This paper presents several key contributions on the practicality of adversarial attacks against ML-based NIDS. Our experiments indicate that continuous re-training, even without adversarial training, can reduce the effectiveness of adversarial attacks.
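A minimal sketch of the continuous re-training idea, assuming periodic windows of freshly labeled traffic (the model choice and windowing scheme are illustrative, not the paper's setup):

```python
# Refit the NIDS classifier on each new window of labeled traffic, so that
# evasion inputs tuned against a stale model lose effectiveness over time.
from sklearn.ensemble import RandomForestClassifier

def continual_retrain(traffic_windows):
    """traffic_windows yields (features, labels) pairs over time."""
    for X, y in traffic_windows:
        model = RandomForestClassifier(n_estimators=100).fit(X, y)
        yield model  # deploy the freshly trained model for this window
```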
arXiv Detail & Related papers (2023-06-08T18:32:08Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
Our proposed defense, MESAS, is the first that is robust against strong adaptive adversaries, remains effective in real-world data scenarios, and incurs an average overhead of just 24.37 seconds.
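In the spirit of a multi-metric defense (the specific metrics and thresholds below are illustrative assumptions, not MESAS itself), client updates can be screened on several statistics at once:

```python
# Screen federated-learning client updates on two metrics at once:
# update magnitude and directional agreement with the average update.
import numpy as np

def filter_updates(updates):
    U = np.stack(updates)                  # one row per client (flattened deltas)
    norms = np.linalg.norm(U, axis=1)      # metric 1: update magnitude
    mean_dir = U.mean(axis=0)
    cos = U @ mean_dir / (norms * np.linalg.norm(mean_dir) + 1e-12)  # metric 2: direction

    def inlier(x, k=3.0):
        med = np.median(x)
        mad = np.median(np.abs(x - med)) + 1e-12
        return np.abs(x - med) / mad < k   # robust z-score test

    keep = inlier(norms) & inlier(cos)     # a client must pass every metric
    return U[keep]
```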
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- How Deep Learning Sees the World: A Survey on Adversarial Attacks & Defenses [0.0]
This paper compiles the most recent adversarial attacks, grouped by the attacker capacity, and modern defenses clustered by protection strategies.
We also present the new advances regarding Vision Transformers, summarize the datasets and metrics used in the context of adversarial settings, and compare the state-of-the-art results under different attacks, finishing with the identification of open issues.
arXiv Detail & Related papers (2023-05-18T10:33:28Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- Where Did You Learn That From? Surprising Effectiveness of Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning [114.9857000195174]
A major challenge to widespread industrial adoption of deep reinforcement learning is the potential vulnerability to privacy breaches.
We propose an adversarial attack framework tailored for testing the vulnerability of deep reinforcement learning algorithms to membership inference attacks.
arXiv Detail & Related papers (2021-09-08T23:44:57Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that publishes only labels is still susceptible to sampling attacks, and that the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during training of the victim model, as well as output perturbation at prediction time.
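A minimal sketch of the repeated-query idea under the label-only restriction (the perturbation scheme and stability statistic are assumptions for illustration, not the paper's exact attack):

```python
# Label-only membership signal: query the victim on noisy copies of an
# input and measure how stable its predicted label is; training members
# tend to sit farther from the decision boundary.
import numpy as np

def label_stability(predict_label, x, n_queries=50, sigma=0.05, seed=0):
    """predict_label(x) returns only a class label, no scores."""
    rng = np.random.default_rng(seed)
    base = predict_label(x)
    hits = sum(predict_label(x + rng.normal(0.0, sigma, x.shape)) == base
               for _ in range(n_queries))
    return hits / n_queries  # fraction of noisy queries keeping the label
```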
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behavioural analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)