Game of Travesty: Decoy-based Psychological Cyber Deception for Proactive Human Agents
- URL: http://arxiv.org/abs/2309.13403v1
- Date: Sat, 23 Sep 2023 15:27:26 GMT
- Title: Game of Travesty: Decoy-based Psychological Cyber Deception for Proactive Human Agents
- Authors: Yinan Hu, Quanyan Zhu
- Abstract summary: In this work, we adopt a signaling game framework between a defender and a human agent to develop a cyber defensive deception protocol.
The proposed framework leads to fundamental theories in designing more effective signaling schemes.
- Score: 13.47548023934913
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The concept of cyber deception has been receiving increasing attention. The development of cyber defensive deception techniques requires interdisciplinary work, in which cognitive science plays an important role. In this work, we adopt a signaling game framework between a defender and a human agent to develop a cyber defensive deception protocol that exploits the cognitive biases of human decision-making, modeled with quantum decision theory, to combat insider attacks (IA). The defender deceives an inside human attacker by luring him to access decoy sensors via generators that produce perceptions of classical signals, manipulating the human attacker's psychological state of mind. Our results reveal that, even without changing the classical traffic data, strategically designed generators can leave insider attackers worse at identifying decoys than a deceptive scheme whose generators merely emit random information based on the input signals. The proposed framework leads to fundamental theories for designing more effective signaling schemes.
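The defender-attacker signaling game described in the abstract can be illustrated with a minimal sketch. This is not the authors' quantum-decision model: it is a classical-Bayesian toy with hypothetical payoff values, where the defender chooses how often real assets and decoys emit a "normal" signal, and the attacker attacks only when the Bayesian posterior makes it worthwhile.

```python
import random

# Hypothetical prior probability that a probed node is a real asset.
P_REAL = 0.5

def posterior_real(signal, sigma_real, sigma_decoy, prior=P_REAL):
    """Attacker's Bayesian posterior that the node is real, given the
    observed signal. sigma_* = P(signal == "normal" | state)."""
    like_real = sigma_real if signal == "normal" else 1 - sigma_real
    like_decoy = sigma_decoy if signal == "normal" else 1 - sigma_decoy
    num = prior * like_real
    den = num + (1 - prior) * like_decoy
    return num / den if den > 0 else prior

def attacker_acts(signal, sigma_real, sigma_decoy, gain=1.0, loss=1.0):
    """Attack iff expected payoff is positive: `gain` for compromising a
    real asset, `loss` (detection) for touching a decoy."""
    p = posterior_real(signal, sigma_real, sigma_decoy)
    return p * gain - (1 - p) * loss > 0

def defender_utility(sigma_real, sigma_decoy, trials=10_000, seed=0):
    """Monte-Carlo estimate of defender utility: +1 when the attacker
    hits a decoy, -1 when a real asset is compromised, 0 if no attack."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        real = rng.random() < P_REAL
        sigma = sigma_real if real else sigma_decoy
        signal = "normal" if rng.random() < sigma else "anomalous"
        if attacker_acts(signal, sigma_real, sigma_decoy):
            total += -1 if real else +1
    return total / trials
```

In a pooling scheme (`sigma_real == sigma_decoy`) the signal carries no information, so the posterior equals the prior and a risk-neutral attacker with symmetric payoffs declines to attack; comparing `defender_utility` across signaling schemes mirrors, in a purely classical way, the paper's comparison between strategically designed and random-information generators.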
Related papers
- WiP: Deception-in-Depth Using Multiple Layers of Deception [0.24466725954625887]
We draw ideas from military deception, network orchestration, software deception, file deception, fake honeypots, and moving-target defenses.
We hope to show that deploying a broad range of deception techniques can be more effective in protecting systems than deploying single techniques.
arXiv Detail & Related papers (2024-12-21T01:34:38Z)
- Towards in-situ Psychological Profiling of Cybercriminals Using Dynamically Generated Deception Environments [0.0]
Cybercrime is estimated to cost the global economy almost $10 trillion annually.
Traditional perimeter security approach to cyber defence has so far proved inadequate to combat the growing threat of cybercrime.
Deceptive techniques aim to mislead attackers, diverting them from critical assets whilst simultaneously gathering cyber threat intelligence on the threat actor.
This article presents a proof-of-concept system that has been developed to capture the profile of an attacker in-situ, during a simulated cyber-attack in real time.
arXiv Detail & Related papers (2024-05-19T09:48:59Z)
- Learning to Defend by Attacking (and Vice-Versa): Transfer of Learning in Cybersecurity Games [1.14219428942199]
We present a novel model of human decision-making inspired by the cognitive faculties of Instance-Based Learning Theory, Theory of Mind, and Transfer of Learning.
This model functions by learning from both roles in a security scenario: defender and attacker, and by making predictions of the opponent's beliefs, intentions, and actions.
Results from simulation experiments demonstrate the potential usefulness of cognitively inspired models of agents trained in attack and defense roles.
arXiv Detail & Related papers (2023-06-03T17:51:04Z)
- Active Predicting Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
arXiv Detail & Related papers (2022-09-19T16:49:32Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- RobustSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition [37.387265457439476]
We propose a novel learning framework, RobustSense, to defend common adversarial attacks.
Our method works well on wireless human activity recognition and person identification systems.
arXiv Detail & Related papers (2022-04-04T15:06:03Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Poisoning Attacks on Cyber Attack Detectors for Industrial Control Systems [34.86059492072526]
We are first to demonstrate such poisoning attacks on ICS online neural network detectors.
We propose two distinct attack algorithms, including back-gradient based poisoning, and demonstrate their effectiveness on both synthetic and real-world data.
arXiv Detail & Related papers (2020-12-23T14:11:26Z)
- Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples, which are synthesized by adding quasi-imperceptible noises to real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.