Reasoning Under Threat: Symbolic and Neural Techniques for Cybersecurity Verification
- URL: http://arxiv.org/abs/2503.22755v1
- Date: Thu, 27 Mar 2025 11:41:53 GMT
- Title: Reasoning Under Threat: Symbolic and Neural Techniques for Cybersecurity Verification
- Authors: Sarah Veronica
- Abstract summary: This survey presents a comprehensive overview of the role of automated reasoning in cybersecurity. We examine SOTA tools and frameworks, explore integrations with AI for neural-symbolic reasoning, and highlight critical research gaps. The paper concludes with a set of well-grounded future research directions, aiming to foster the development of secure systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cybersecurity demands rigorous and scalable techniques to ensure system correctness, robustness, and resilience against evolving threats. Automated reasoning, encompassing formal logic, theorem proving, model checking, and symbolic analysis, provides a foundational framework for verifying security properties across diverse domains such as access control, protocol design, vulnerability detection, and adversarial modeling. This survey presents a comprehensive overview of the role of automated reasoning in cybersecurity, analyzing how logical systems, including temporal, deontic, and epistemic logics, are employed to formalize and verify security guarantees. We examine SOTA tools and frameworks, explore integrations with AI for neural-symbolic reasoning, and highlight critical research gaps, particularly in scalability, compositionality, and multi-layered security modeling. The paper concludes with a set of well-grounded future research directions, aiming to foster the development of secure systems through formal, automated, and explainable reasoning techniques.
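To make the flavor of such symbolic verification concrete, here is a minimal sketch using the Z3 SMT solver's Python bindings to check a toy access-control property; the policy and variable names are invented for illustration and are not drawn from the survey.

```python
# Minimal sketch of SMT-based access-control verification with Z3's Python
# bindings (pip install z3-solver). Policy and property are hypothetical.
from z3 import Bools, Solver, Implies, And, Or, Not, unsat

# Propositional state of a hypothetical write-access policy.
is_admin, has_grant, can_write = Bools("is_admin has_grant can_write")

policy = And(
    Implies(is_admin, can_write),                       # admins may always write
    Implies(And(Not(is_admin), can_write), has_grant),  # others need a grant
)

# Property to verify: any write implies admin status or an explicit grant.
prop = Implies(can_write, Or(is_admin, has_grant))

s = Solver()
s.add(policy, Not(prop))  # search for a counterexample to the property
if s.check() == unsat:
    print("Property holds in every policy state.")
else:
    print("Counterexample:", s.model())
```

The counterexample-search pattern (assert the policy plus the negated property, then check satisfiability) is the core move behind many of the model-checking and theorem-proving tools the survey reviews.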
Related papers
- Towards Trustworthy GUI Agents: A Survey [64.6445117343499]
This survey examines the trustworthiness of GUI agents in five critical dimensions.
We identify major challenges, such as vulnerability to adversarial attacks and cascading failure modes in sequential decision-making.
As GUI agents become more widespread, establishing robust safety standards and responsible development practices is essential.
arXiv Detail & Related papers (2025-03-30T13:26:00Z)
- A Survey of Model Extraction Attacks and Defenses in Distributed Computing Environments [55.60375624503877]
Model Extraction Attacks (MEAs) threaten modern machine learning systems by enabling adversaries to steal models, exposing intellectual property and training data.
This survey is motivated by the urgent need to understand how the unique characteristics of cloud, edge, and federated deployments shape attack vectors and defense requirements.
We systematically examine the evolution of attack methodologies and defense mechanisms across these environments, demonstrating how environmental factors influence security strategies in critical sectors such as autonomous vehicles, healthcare, and financial services.
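For intuition about the attack class this survey covers (not any specific method from it), the following hedged sketch extracts a surrogate from a query-only "victim" model using scikit-learn; all models and data here are illustrative.

```python
# Toy query-based model extraction: the adversary sees only the victim's
# predicted labels, yet can train a surrogate that mimics it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = DecisionTreeClassifier(random_state=0).fit(X[:1000], y[:1000])

# Adversary: query the victim on attacker-chosen inputs, keep only its labels.
queries = np.random.default_rng(1).normal(size=(1000, 10))
stolen_labels = victim.predict(queries)

# Train a surrogate purely on the query/label pairs.
surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)

# Agreement with the victim on held-out data approximates extraction fidelity.
fidelity = (surrogate.predict(X[1000:]) == victim.predict(X[1000:])).mean()
print(f"surrogate/victim agreement: {fidelity:.2%}")
```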
arXiv Detail & Related papers (2025-02-22T03:46:50Z)
- Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI.
We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts with jailbreak attempts.
We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
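As a rough illustration of the sensitivity-analysis signal mentioned above, the sketch below scores a prompt by the gradient norm of a toy safety loss with respect to its embeddings; the model, scorer, and threshold are invented here and are not the paper's implementation.

```python
# Hedged sketch of input-gradient sensitivity as a prompt-screening signal.
# All components are stand-ins, not the paper's method. Requires PyTorch.
import torch

torch.manual_seed(0)
embed = torch.nn.Embedding(1000, 32)   # stand-in token embeddings
scorer = torch.nn.Linear(32, 1)        # stand-in safety scorer

def sensitivity(token_ids: torch.Tensor) -> float:
    """Gradient norm of the scorer's output w.r.t. the prompt's embeddings."""
    x = embed(token_ids).detach().requires_grad_(True)
    scorer(x.mean(dim=0)).squeeze().backward()
    return x.grad.norm().item()

prompt = torch.randint(0, 1000, (12,))  # a hypothetical tokenized prompt
score = sensitivity(prompt)
THRESHOLD = 0.5                         # illustrative, not calibrated
print("flag for review" if score > THRESHOLD else "pass", f"(score={score:.3f})")
```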
arXiv Detail & Related papers (2025-02-18T02:26:50Z)
- Integrating Cybersecurity Frameworks into IT Security: A Comprehensive Analysis of Threat Mitigation Strategies and Adaptive Technologies [0.0]
The cybersecurity threat landscape is constantly evolving, making it imperative to develop sound frameworks to protect IT structures.
This paper discusses the application of cybersecurity frameworks to IT security, with a focus on the role of such frameworks in addressing the changing nature of cybersecurity threats.
The discussion also singles out technologies such as Artificial Intelligence (AI) and Machine Learning (ML) as the core of real-time threat detection and response mechanisms.
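As a hedged sketch of the kind of ML-driven real-time detection the paper highlights, the example below fits an unsupervised anomaly detector over hypothetical network-flow features using scikit-learn; the feature set and data are invented for illustration.

```python
# Toy anomaly-based threat detection over invented network-flow features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns (hypothetical): bytes sent, packet count, distinct destination ports.
baseline = rng.normal(loc=[500, 40, 3], scale=[50, 5, 1], size=(5000, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

incoming = np.array([[520, 42, 2],      # looks like normal traffic
                     [480, 39, 250]])   # port-scan-like burst of destinations
for flow, verdict in zip(incoming, detector.predict(incoming)):
    print(flow, "ALERT" if verdict == -1 else "ok")
```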
arXiv Detail & Related papers (2025-02-02T03:38:48Z)
- Autonomous Identity-Based Threat Segmentation in Zero Trust Architectures [4.169915659794567]
Zero Trust Architectures (ZTA) fundamentally redefine network security by adopting a "trust nothing, verify everything" approach.
This research proposes AI-driven, autonomous, identity-based threat segmentation in ZTA.
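A minimal sketch of the "trust nothing, verify everything" idea follows: every request is re-verified against identity, device posture, and resource policy. The attributes and policy below are hypothetical, not the paper's design.

```python
# Deny-by-default, per-request verification in a zero-trust style.
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    user: str
    roles: frozenset
    device_compliant: bool
    mfa_verified: bool
    resource: str

# Hypothetical resource policy: which roles may touch which resource.
POLICY = {"payroll-db": {"finance"}, "build-server": {"engineering"}}

def authorize(req: Request) -> bool:
    """Grant access only when every check passes on this specific request."""
    allowed_roles = POLICY.get(req.resource, set())
    return (req.device_compliant
            and req.mfa_verified
            and bool(req.roles & allowed_roles))

req = Request("alice", frozenset({"finance"}), True, True, "payroll-db")
print(authorize(req))  # True; flip any single field and access is denied
```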
arXiv Detail & Related papers (2025-01-10T15:35:02Z)
- Bringing Order Amidst Chaos: On the Role of Artificial Intelligence in Secure Software Engineering [0.0]
The ever-evolving technological landscape offers both opportunities and threats, creating a dynamic space where chaos and order compete.
Secure software engineering (SSE) must continuously address vulnerabilities that endanger software systems.
This thesis seeks to bring order to the chaos in SSE by addressing domain-specific differences that impact AI accuracy.
arXiv Detail & Related papers (2025-01-09T11:38:58Z)
- Cyber-Physical Security Vulnerabilities Identification and Classification in Smart Manufacturing -- A Defense-in-Depth Driven Framework and Taxonomy [0.0]
Existing solutions fall short in addressing the complex, domain-specific vulnerabilities of manufacturing environments.
This paper redefines vulnerabilities in the manufacturing context by introducing a novel characterization based on the duality between vulnerabilities and defenses.
We identify and classify vulnerabilities across the manufacturing cyberspace, human element, post-production inspection systems, production process monitoring, and organizational policies and procedures.
arXiv Detail & Related papers (2024-12-29T11:41:06Z)
- SoK: Unifying Cybersecurity and Cybersafety of Multimodal Foundation Models with an Information Theory Approach [58.93030774141753]
Multimodal foundation models (MFMs) represent a significant advancement in artificial intelligence.
This paper conceptualizes cybersafety and cybersecurity in the context of multimodal learning.
We present a comprehensive Systematization of Knowledge (SoK) to unify these concepts in MFMs, identifying key threats.
arXiv Detail & Related papers (2024-11-17T23:06:20Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
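Purely to illustrate plan-level risk screening (not EARBench's actual benchmark or scoring method), a toy rule-based assessor over an invented task plan might look like this:

```python
# Toy hazard screening for an embodied agent's task plan; the rules, tags,
# and plan steps are invented for illustration.
HAZARD_RULES = [
    ("heat", lambda step: "stove" in step and "unattended" in step),
    ("sharp", lambda step: "knife" in step and "hand" in step),
]

def assess_plan(steps):
    """Return (step_index, hazard_tag) pairs for every rule a step trips."""
    return [(i, tag) for i, step in enumerate(steps)
            for tag, rule in HAZARD_RULES if rule(step)]

plan = ["pick up knife", "leave stove on unattended", "hand knife to child"]
print(assess_plan(plan))  # [(1, 'heat'), (2, 'sharp')]
```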
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Model-Driven Security Analysis of Self-Sovereign Identity Systems [2.5475486924467075]
We propose a model-driven security analysis framework for analyzing architectural patterns of SSI systems.
Our framework mechanizes a modeling language to formalize patterns and threats with security properties in temporal logic.
We present typical vulnerable patterns verified by SecureSSI.
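To convey what checking a pattern against a temporal-logic safety property can look like, here is a toy bounded exploration of an invented SSI-style credential lifecycle; SecureSSI mechanizes far richer models than this sketch.

```python
# Toy bounded check of the safety property "a credential is never presented
# before it is issued" over an invented transition system.
TRANSITIONS = {
    "init": {"issued"},
    "issued": {"presented", "revoked"},
    "presented": {"revoked"},
    "revoked": set(),
}

def traces(state, depth):
    """Enumerate execution traces from `state` up to a bounded depth."""
    if depth == 0 or not TRANSITIONS[state]:
        yield [state]
        return
    for nxt in TRANSITIONS[state]:
        for rest in traces(nxt, depth - 1):
            yield [state] + rest

def safe(trace):
    """Safety property: 'presented' never occurs before 'issued'."""
    seen_issued = False
    for s in trace:
        seen_issued = seen_issued or s == "issued"
        if s == "presented" and not seen_issued:
            return False
    return True

violations = [t for t in traces("init", 4) if not safe(t)]
print("property holds" if not violations else f"counterexample: {violations[0]}")
```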
arXiv Detail & Related papers (2024-06-02T05:44:32Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
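A hedged sketch of the trace-link idea: connect system artifacts to the safety-analysis models that reference them, so a change maps to the SAC fragments needing review. The artifact names and link targets below are invented.

```python
# Toy traceability store mapping artifacts to dependent safety-analysis models.
from collections import defaultdict

trace_links = defaultdict(set)  # artifact -> safety models that reference it

def link(artifact: str, safety_model: str):
    trace_links[artifact].add(safety_model)

def impact_of_change(changed_artifacts):
    """Return every safety model reachable from the changed artifacts."""
    return set().union(*(trace_links[a] for a in changed_artifacts))

link("braking_controller.c", "FTA:loss_of_braking")
link("braking_controller.c", "SAC:goal_G3_brake_response")
link("sensor_driver.c", "FMEA:sensor_dropout")

print(impact_of_change({"braking_controller.c"}))
# -> {'FTA:loss_of_braking', 'SAC:goal_G3_brake_response'}
```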
arXiv Detail & Related papers (2023-07-14T16:03:27Z)