Perception Graph for Cognitive Attack Reasoning in Augmented Reality
- URL: http://arxiv.org/abs/2509.05324v1
- Date: Sat, 30 Aug 2025 20:48:32 GMT
- Title: Perception Graph for Cognitive Attack Reasoning in Augmented Reality
- Authors: Rongqian Chen, Shu Hong, Rifatul Islam, Mahdi Imani, G. Gary Tan, Tian Lan
- Abstract summary: We introduce a novel model designed to reason about human perception within augmented reality systems. Our model operates by first mimicking the human process of interpreting key information from an MR environment. We demonstrate how the model can compute a quantitative score that reflects the level of perception distortion.
- Score: 12.005631730339708
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Augmented reality (AR) systems are increasingly deployed in tactical environments, but their reliance on seamless human-computer interaction makes them vulnerable to cognitive attacks that manipulate a user's perception and severely compromise user decision-making. To address this challenge, we introduce the Perception Graph, a novel model designed to reason about human perception within these systems. Our model operates by first mimicking the human process of interpreting key information from an MR environment and then representing the outcomes using a semantically meaningful structure. We demonstrate how the model can compute a quantitative score that reflects the level of perception distortion, providing a robust and measurable method for detecting and analyzing the effects of such cognitive attacks.
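The abstract does not give the scoring rule, but the idea of a quantitative perception-distortion score can be sketched as a comparison between a baseline perception graph and one observed under a suspected cognitive attack. Everything below (the node/edge schema, the normalized-mismatch score, and the friendly/hostile marker example) is an illustrative assumption, not the paper's actual method:

```python
def distortion_score(baseline, observed):
    """Fraction of mismatched nodes and edges between two perception graphs.

    baseline/observed: dicts with 'nodes' (id -> label) and
    'edges' (set of (src, relation, dst) triples).
    Returns a value in [0, 1]; 0 means the graphs agree exactly.
    """
    node_ids = set(baseline["nodes"]) | set(observed["nodes"])
    node_mismatch = sum(
        baseline["nodes"].get(i) != observed["nodes"].get(i) for i in node_ids
    )
    edge_union = baseline["edges"] | observed["edges"]
    edge_mismatch = len(baseline["edges"] ^ observed["edges"])  # symmetric difference
    total = len(node_ids) + len(edge_union)
    return (node_mismatch + edge_mismatch) / total if total else 0.0

baseline = {
    "nodes": {1: "vehicle", 2: "friendly-marker"},
    "edges": {(2, "attached-to", 1)},
}
# A hypothetical attack overlay relabels the marker as hostile.
attacked = {
    "nodes": {1: "vehicle", 2: "hostile-marker"},
    "edges": {(2, "attached-to", 1)},
}
print(distortion_score(baseline, baseline))  # identical graphs -> 0.0
print(distortion_score(baseline, attacked))  # one of three elements differs
```

A real system would presumably weight semantic severity (relabeling "friendly" as "hostile" matters more than a cosmetic change); the uniform count here is only the simplest instance of the idea.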
Related papers
- Visual Categorization Across Minds and Models: Cognitive Analysis of Human Labeling and Neuro-Symbolic Integration [0.0]
This paper examines image labeling performance across human participants and deep neural networks. We contrast human strategies such as analogical reasoning, shape-based recognition, and confidence modulation with AI's feature-based processing. Our findings highlight key parallels and divergences between biological and artificial systems in representation, inference, and confidence calibration.
arXiv Detail & Related papers (2025-12-10T05:58:12Z)
- A Descriptive Model for Modelling Attacker Decision-Making in Cyber-Deception [0.0]
This paper presents a descriptive model that incorporates the psychological and strategic elements shaping this decision. The framework provides a structured method for analysing engagement decisions in cyber-deception scenarios.
arXiv Detail & Related papers (2025-12-03T10:23:33Z)
- Cognitive Inception: Agentic Reasoning against Visual Deceptions by Injecting Skepticism [81.39177645864757]
We propose Inception, a fully reasoning-based agentic framework that conducts authenticity verification by injecting skepticism. To the best of our knowledge, this is the first fully reasoning-based framework against AIGC visual deceptions.
arXiv Detail & Related papers (2025-11-21T05:13:30Z)
- A Neurosymbolic Framework for Interpretable Cognitive Attack Detection in Augmented Reality [30.59764541723801]
CADAR is a novel neurosymbolic approach for cognitive attack detection in Augmented Reality. It fuses multimodal vision-language inputs using neural VLMs to obtain a symbolic perception-graph representation. Experiments on an extended AR cognitive attack dataset show accuracy improvements of up to 10.7% over strong baselines.
arXiv Detail & Related papers (2025-08-07T17:59:49Z)
- Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation is the degree of belief in unverifiable claims, a quantity that is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z)
- Mitigating Adversarial Attacks in Deepfake Detection: An Exploration of Perturbation and AI Techniques [1.0718756132502771]
Adversarial examples are subtle perturbations artfully injected into clean images or videos.
Deepfakes have emerged as a potent tool to manipulate public opinion and tarnish the reputations of public figures.
This article delves into the multifaceted world of adversarial examples, elucidating the underlying principles behind their capacity to deceive deep learning algorithms.
arXiv Detail & Related papers (2023-02-22T23:48:19Z)
- Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
arXiv Detail & Related papers (2022-10-13T01:10:41Z)
- On the Robustness of Quality Measures for GANs [136.18799984346248]
This work evaluates the robustness of quality measures of generative models such as the Inception Score (IS) and the Fréchet Inception Distance (FID).
We show that such metrics can also be manipulated by additive pixel perturbations.
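The claim that additive pixel perturbations can manipulate such metrics can be illustrated with a minimal 1-D stand-in for FID (an assumed setup for illustration, not the paper's actual experiments): fit Gaussians to "real" and "generated" features, then show that a constant additive shift drives the distance down without making the samples any better.

```python
import numpy as np

def frechet_1d(x, y):
    """Squared Frechet distance between 1-D Gaussians fit to samples x and y."""
    return (x.mean() - y.mean()) ** 2 + (x.std() - y.std()) ** 2

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, 10_000)  # "real" features
fake = rng.normal(0.5, 1.0, 10_000)  # "generated" features, mean-shifted

honest = frechet_1d(real, fake)

# Additive manipulation: nudge every generated sample by a constant chosen
# to cancel the mean gap -- a tiny, uniform perturbation, yet the metric
# collapses toward zero with no real improvement in sample quality.
delta = real.mean() - fake.mean()
gamed = frechet_1d(real, fake + delta)

print(honest > gamed)  # True: the score improved without better samples
```

The real FID operates on Inception features and full covariances, but the failure mode sketched here is the same: distribution-matching statistics can be gamed by perturbations that target the statistics rather than the content.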
arXiv Detail & Related papers (2022-01-31T06:43:09Z)
- Affect-Aware Deep Belief Network Representations for Multimodal Unsupervised Deception Detection [3.04585143845864]
We present an unsupervised approach for detecting real-world, high-stakes deception in videos without requiring labels.
The approach is based on affect-aware unsupervised Deep Belief Networks (DBNs).
In addition to using facial affect as a feature on which DBN models are trained, we also introduce a DBN training procedure that uses facial affect as an aligner of audio-visual representations.
arXiv Detail & Related papers (2021-08-17T22:07:26Z)
- Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Emotion Recognition Network (IERN) to alleviate the negative effects brought by the dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for addressing this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- On the Sensory Commutativity of Action Sequences for Embodied Agents [2.320417845168326]
We study perception for embodied agents under the mathematical formalism of group theory.
We introduce the Sensory Commutativity Probability criterion which measures how much an agent's degree of freedom affects the environment.
We empirically illustrate how SCP and the commutative properties of action sequences can be used to learn about objects in the environment.
arXiv Detail & Related papers (2020-02-13T16:58:23Z)
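The Sensory Commutativity abstract above does not define SCP precisely; a toy sketch under assumed semantics (a 1-D agent whose position clamps at a wall at 0, with SCP estimated as the empirical fraction of state/partner-action pairs for which swapping the action order leaves the observation unchanged) might look like:

```python
WALL = 0  # positions clamp at the wall

def step(pos, action):
    """Apply one action; the wall blocks movement below position 0."""
    return max(WALL, pos + action)

def commutes(pos, a, b):
    """Do actions a and b yield the same observation in either order?"""
    return step(step(pos, a), b) == step(step(pos, b), a)

def scp(action, others, states):
    """Empirical commutativity probability of `action` against partner actions."""
    trials = [(s, o) for s in states for o in others]
    return sum(commutes(s, action, o) for s, o in trials) / len(trials)

states = range(0, 5)
# Moving toward the wall fails to commute exactly where the wall interferes:
# from pos 0, (-1 then +1) ends at 1, but (+1 then -1) ends at 0.
print(scp(-1, [+1], states))  # 0.8: breaks only at the wall state
print(scp(+2, [+1], states))  # 1.0: never interacts with the constraint
```

The intuition matches the abstract's framing: actions whose order matters are the ones that interact with structure in the environment, so low commutativity flags degrees of freedom that "affect the environment."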
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.