Categorical Framework for Quantum-Resistant Zero-Trust AI Security
- URL: http://arxiv.org/abs/2511.21768v1
- Date: Tue, 25 Nov 2025 17:17:24 GMT
- Title: Categorical Framework for Quantum-Resistant Zero-Trust AI Security
- Authors: I. Cherkaoui, C. Clarke, J. Horgan, I. Dey
- Abstract summary: We present a novel integration of post-quantum cryptography (PQC) and zero trust architecture (ZTA) to secure AI model access. Our framework uniquely models cryptographic access as morphisms and trust policies as functors. We demonstrate its efficacy through a concrete ESP32-based implementation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid deployment of AI models necessitates robust, quantum-resistant security, particularly against adversarial threats. Here, we present a novel integration of post-quantum cryptography (PQC) and zero trust architecture (ZTA), formally grounded in category theory, to secure AI model access. Our framework uniquely models cryptographic workflows as morphisms and trust policies as functors, enabling fine-grained, adaptive trust and micro-segmentation for lattice-based PQC primitives. This approach offers enhanced protection against adversarial AI threats. We demonstrate its efficacy through a concrete ESP32-based implementation, validating a crypto-agile transition with quantifiable performance and security improvements, underpinned by categorical proofs for AI security. The implementation achieves significant memory efficiency on ESP32, with the agent utilizing 91.86% and the broker 97.88% of free heap after cryptographic operations, and successfully rejects 100% of unauthorized access attempts with sub-millisecond average latency.
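The abstract's central idea — cryptographic workflows as morphisms and trust policies as functors — can be sketched in a few lines of Python. This is an illustrative toy model, not the paper's implementation; the state names and step labels (`pqc_sign_verify`, `kyber_encaps`) are hypothetical placeholders for lattice-based PQC primitives.

```python
# Toy categorical model: objects are security states, morphisms are
# cryptographic steps between them, and a trust policy acts on
# morphisms while leaving the endpoint states fixed.
from dataclasses import dataclass


@dataclass(frozen=True)
class State:
    """An object of the category: a named security state."""
    name: str


@dataclass(frozen=True)
class Morphism:
    """An arrow: a cryptographic step carrying src to dst."""
    src: State
    dst: State
    label: str

    def then(self, other: "Morphism") -> "Morphism":
        """Compose two morphisms; defined only when endpoints match."""
        assert self.dst == other.src, "non-composable morphisms"
        return Morphism(self.src, other.dst, f"{self.label};{other.label}")


# Hypothetical access workflow:
#   untrusted --pqc_sign_verify--> verified --kyber_encaps--> granted
untrusted, verified, granted = State("untrusted"), State("verified"), State("granted")
authenticate = Morphism(untrusted, verified, "pqc_sign_verify")
encapsulate = Morphism(verified, granted, "kyber_encaps")


def trust_policy(m: Morphism) -> Morphism:
    """A functor-like mapping: identity on objects, relabels each
    morphism with the policy audit that vetted it."""
    return Morphism(m.src, m.dst, f"audited[{m.label}]")


workflow = authenticate.then(encapsulate)
print(workflow.label)  # pqc_sign_verify;kyber_encaps
```

The point of the sketch is that composition only exists when security states line up, so an "access granted" morphism can only be built by chaining through the intermediate verification state — a categorical reading of micro-segmentation.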
Related papers
- SecureCAI: Injection-Resilient LLM Assistants for Cybersecurity Operations [0.0]
This paper introduces SecureCAI, a novel defense framework extending Constitutional AI principles with security-aware guardrails. SecureCAI reduces attack success rates by 94.7% compared to baseline models.
arXiv Detail & Related papers (2026-01-12T18:59:45Z)
- Byzantine-Robust Federated Learning Framework with Post-Quantum Secure Aggregation for Real-Time Threat Intelligence Sharing in Critical IoT Infrastructure [0.0]
Traditional federated learning approaches for IoT security suffer from two critical vulnerabilities: susceptibility to Byzantine attacks and inadequacy against future quantum computing threats. This paper presents a novel Byzantine-robust federated learning framework integrated with post-quantum secure aggregation. The proposed framework combines an adaptive weighted aggregation mechanism with lattice-based cryptographic protocols to simultaneously defend against model poisoning attacks and quantum adversaries.
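A minimal sketch of the general idea of Byzantine-robust aggregation: a coordinate-wise trimmed mean that discards extreme per-coordinate values before averaging. This is a generic illustration of the attack class, not the paper's adaptive weighted aggregation mechanism (which the summary does not detail), and it omits the cryptographic layer entirely.

```python
# Coordinate-wise trimmed mean: a classic Byzantine-robust aggregator.
def trimmed_mean(updates, trim=1):
    """Average client updates after dropping the `trim` largest and
    `trim` smallest values in each coordinate, which bounds the
    influence of poisoned outliers."""
    agg = []
    for coords in zip(*updates):  # iterate coordinate-wise
        kept = sorted(coords)[trim:len(coords) - trim]
        agg.append(sum(kept) / len(kept))
    return agg


# Three honest clients near [1.0, 1.0] plus one poisoned update.
honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
poisoned = [[100.0, -100.0]]
print(trimmed_mean(honest + poisoned, trim=1))  # → [1.05, 0.95]
```

Despite the poisoned client's extreme values, the aggregate stays close to the honest consensus because each poisoned coordinate falls into the trimmed tail.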
arXiv Detail & Related papers (2026-01-03T03:13:46Z)
- Optimistic TEE-Rollups: A Hybrid Architecture for Scalable and Verifiable Generative AI Inference on Blockchain [4.254924788681319]
We introduce Optimistic TEE-Rollups (OTR), a hybrid verification protocol that harmonizes these constraints. OTR achieves 99% of the throughput of centralized baselines with a marginal cost overhead of $0.07 per query.
arXiv Detail & Related papers (2025-12-23T09:16:41Z)
- A Call to Action for a Secure-by-Design Generative AI Paradigm [0.0]
Large language models (LLMs) are vulnerable to prompt injection and other adversarial attacks. This paper introduces PromptShield, a framework that ensures deterministic and secure prompt interactions. Our results demonstrate a significant improvement in model security and performance, achieving precision, recall, and F1 scores of approximately 94%.
arXiv Detail & Related papers (2025-10-01T03:05:07Z)
- Towards Secure and Explainable Smart Contract Generation with Security-Aware Group Relative Policy Optimization [18.013438474903314]
We propose SmartCoder-R1, a framework for secure and explainable smart contract generation. We train the model to emulate human security analysis. SmartCoder-R1 establishes a new state of the art, achieving top performance across five key metrics.
arXiv Detail & Related papers (2025-09-12T03:14:50Z)
- The Aegis Protocol: A Foundational Security Framework for Autonomous AI Agents [0.0]
The proliferation of autonomous AI agents marks a paradigm shift toward complex, emergent multi-agent systems. This paper introduces the Aegis Protocol, a layered security framework designed to provide strong security guarantees for open agentic ecosystems.
arXiv Detail & Related papers (2025-08-22T06:18:57Z) - Secure mmWave Beamforming with Proactive-ISAC Defense Against Beam-Stealing Attacks [6.81194385663614]
Millimeter-wave (mmWave) communication systems face increasing susceptibility to advanced beam-stealing attacks. This paper introduces a novel framework employing an advanced Deep Reinforcement Learning (DRL) agent for proactive and adaptive defense.
arXiv Detail & Related papers (2025-08-04T19:30:09Z) - Secure Tug-of-War (SecTOW): Iterative Defense-Attack Training with Reinforcement Learning for Multimodal Model Security [63.41350337821108]
We propose Secure Tug-of-War (SecTOW) to enhance the security of multimodal large language models (MLLMs). SecTOW consists of two modules: a defender and an auxiliary attacker, both trained iteratively using reinforcement learning (GRPO). We show that SecTOW significantly improves security while preserving general performance.
arXiv Detail & Related papers (2025-07-29T17:39:48Z) - Security Challenges in AI Agent Deployment: Insights from a Large Scale Public Competition [101.86739402748995]
We run the largest public red-teaming competition to date, targeting 22 frontier AI agents across 44 realistic deployment scenarios. We build the Agent Red Teaming benchmark and evaluate it across 19 state-of-the-art models. Our findings highlight critical and persistent vulnerabilities in today's AI agents.
arXiv Detail & Related papers (2025-07-28T05:13:04Z) - AegisLLM: Scaling Agentic Systems for Self-Reflective Defense in LLM Security [74.22452069013289]
AegisLLM is a cooperative multi-agent defense against adversarial attacks and information leakage. We show that scaling the agentic reasoning system at test time substantially enhances robustness without compromising model utility. Comprehensive evaluations across key threat scenarios, including unlearning and jailbreaking, demonstrate the effectiveness of AegisLLM.
arXiv Detail & Related papers (2025-04-29T17:36:05Z) - Secured Communication Schemes for UAVs in 5G: CRYSTALS-Kyber and IDS [16.52849506266782]
This paper introduces a secure communication architecture for Unmanned Aerial Vehicles (UAVs) and ground stations in 5G networks. The proposed solution integrates the Advanced Encryption Standard (AES) with Elliptic Curve Cryptography (ECC) and CRYSTALS-Kyber for key encapsulation. The architecture is based on a server-client model, with UAVs functioning as clients and the ground station acting as the server.
arXiv Detail & Related papers (2025-01-31T15:00:27Z) - ASSERT: Automated Safety Scenario Red Teaming for Evaluating the
Robustness of Large Language Models [65.79770974145983]
ASSERT, Automated Safety Scenario Red Teaming, consists of three methods -- semantically aligned augmentation, target bootstrapping, and adversarial knowledge injection.
We partition our prompts into four safety domains for a fine-grained analysis of how the domain affects model performance.
We find statistically significant performance differences of up to 11% in absolute classification accuracy among semantically related scenarios, and absolute error rates of up to 19% in zero-shot adversarial settings.
arXiv Detail & Related papers (2023-10-14T17:10:28Z)
- MF-CLIP: Leveraging CLIP as Surrogate Models for No-box Adversarial Attacks [65.86360607693457]
No-box attacks, where adversaries have no prior knowledge, remain relatively underexplored despite their practical relevance. This work presents a systematic investigation into leveraging large-scale Vision-Language Models (VLMs) as surrogate models for executing no-box attacks. Our theoretical and empirical analyses reveal a key limitation of no-box attacks stemming from the insufficient discriminative capability of vanilla CLIP when applied directly as a surrogate model. We propose MF-CLIP, a novel framework that enhances CLIP's effectiveness as a surrogate model through margin-aware feature space optimization.
arXiv Detail & Related papers (2023-07-13T08:10:48Z)
- G$^2$uardFL: Safeguarding Federated Learning Against Backdoor Attacks through Attributed Client Graph Clustering [116.4277292854053]
Federated Learning (FL) offers collaborative model training without data sharing.
FL is vulnerable to backdoor attacks, where poisoned model weights lead to compromised system integrity.
We present G$^2$uardFL, a protective framework that reinterprets the identification of malicious clients as an attributed graph clustering problem.
arXiv Detail & Related papers (2023-06-08T07:15:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.