Incorporating Verification Standards for Security Requirements Generation from Functional Specifications
- URL: http://arxiv.org/abs/2505.11857v1
- Date: Sat, 17 May 2025 05:47:46 GMT
- Title: Incorporating Verification Standards for Security Requirements Generation from Functional Specifications
- Authors: Xiaoli Lian, Shuaisong Wang, Hanyu Zou, Fang Liu, Jiajun Wu, Li Zhang
- Abstract summary: F2SRD (Function to Security Requirements Derivation) is an automated approach that proactively derives security requirements (SRs) from functional specifications.
F2SRD operates in two main phases: first, we develop a VR retriever trained on a custom database of FR and VR pairs, enabling it to adeptly select applicable VRs from ASVS.
Second, these VRs are used to construct structured prompts that direct GPT-4 in generating SRs.
- Score: 12.428384271131407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the current software-driven era, ensuring privacy and security is critical. Despite this, the specification of security requirements for software is still largely a manual and labor-intensive process. Engineers are tasked with analyzing potential security threats based on functional requirements (FRs), a procedure prone to omissions and errors due to the expertise gap between cybersecurity experts and software engineers. To bridge this gap, we introduce F2SRD (Function to Security Requirements Derivation), an automated approach that proactively derives security requirements (SRs) from functional specifications under the guidance of relevant security verification requirements (VRs) drawn from the well-recognized OWASP Application Security Verification Standard (ASVS). F2SRD operates in two main phases: first, we develop a VR retriever trained on a custom database of FR and VR pairs, enabling it to adeptly select applicable VRs from ASVS. This targeted retrieval informs the precise and actionable formulation of SRs. Second, these VRs are used to construct structured prompts that direct GPT-4 in generating SRs. Our comparative analysis against two established models demonstrates F2SRD's enhanced performance in producing SRs that excel in inspiration, diversity, and specificity, attributes essential for effective security requirement generation. By leveraging security verification standards, we believe that the generated SRs are not only more focused but also resonate more strongly with the needs of engineers.
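To make the two-phase pipeline concrete, the following is a minimal, hedged sketch of a retrieve-then-generate flow in Python. All names here (the encoder model, helper functions, and prompt wording) are illustrative assumptions: the paper's actual VR retriever is a custom model trained on FR-VR pairs, and its prompt templates are not reproduced here.

```python
# Hedged sketch of F2SRD's two-phase flow; every name here is illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer  # stand-in for the custom-trained VR retriever
from openai import OpenAI

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder encoder, NOT the paper's retriever
client = OpenAI()

def retrieve_vrs(fr: str, asvs_vrs: list[str], k: int = 3) -> list[str]:
    """Phase 1: rank ASVS verification requirements (VRs) by similarity to the FR."""
    fr_vec = encoder.encode([fr])[0]
    vr_vecs = encoder.encode(asvs_vrs)
    sims = vr_vecs @ fr_vec / (np.linalg.norm(vr_vecs, axis=1) * np.linalg.norm(fr_vec))
    return [asvs_vrs[i] for i in np.argsort(-sims)[:k]]

def derive_srs(fr: str, vrs: list[str]) -> str:
    """Phase 2: build a structured prompt from the retrieved VRs and ask GPT-4 for SRs."""
    prompt = (
        f"Functional requirement:\n{fr}\n\n"
        "Relevant ASVS verification requirements:\n"
        + "\n".join(f"- {vr}" for vr in vrs)
        + "\n\nDerive concrete, testable security requirements for this functionality."
    )
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content
```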
Related papers
- Provably Secure Retrieval-Augmented Generation [7.412110686946628]
This paper proposes the first provably secure framework for Retrieval-Augmented Generation (RAG) systems.
Our framework employs a pre-storage full-encryption scheme to ensure dual protection of both retrieved content and vector embeddings.
arXiv Detail & Related papers (2025-08-01T21:37:16Z)
- SEC-bench: Automated Benchmarking of LLM Agents on Real-World Software Security Tasks [11.97472024483841]
SEC-bench is the first fully automated benchmarking framework for evaluating large language model (LLM) agents.
Our framework automatically creates high-quality software vulnerability datasets with reproducible artifacts at a cost of only $0.87 per instance.
A comprehensive evaluation of state-of-the-art LLM code agents reveals significant performance gaps.
arXiv Detail & Related papers (2025-06-13T13:54:30Z)
- LLM Agents Should Employ Security Principles [60.03651084139836]
This paper argues that the well-established design principles in information security should be employed when deploying Large Language Model (LLM) agents at scale.
We introduce AgentSandbox, a conceptual framework embedding these security principles to provide safeguards throughout an agent's life-cycle.
arXiv Detail & Related papers (2025-05-29T21:39:08Z)
- SafeAgent: Safeguarding LLM Agents via an Automated Risk Simulator [77.86600052899156]
Large Language Model (LLM)-based agents are increasingly deployed in real-world applications.
We propose AutoSafe, the first framework that systematically enhances agent safety through fully automated synthetic data generation.
We show that AutoSafe boosts safety scores by 45% on average and achieves a 28.91% improvement on real-world tasks.
arXiv Detail & Related papers (2025-05-23T10:56:06Z)
- SafeKey: Amplifying Aha-Moment Insights for Safety Reasoning [76.56522719330911]
Large Reasoning Models (LRMs) introduce a new generation paradigm of explicitly reasoning before answering.
LRMs pose serious safety risks when confronted with harmful queries and adversarial attacks.
We propose SafeKey to better activate the safety aha-moment in the key sentence.
arXiv Detail & Related papers (2025-05-22T03:46:03Z)
- A Survey on the Safety and Security Threats of Computer-Using Agents: JARVIS or Ultron? [30.063392019347887]
We present a systematization of knowledge on the safety and security threats of Computer-Using Agents (CUAs).
CUAs are capable of autonomously performing tasks such as navigating desktop applications, web pages, and mobile apps.
arXiv Detail & Related papers (2025-05-16T06:56:42Z)
- Securing RAG: A Risk Assessment and Mitigation Framework [0.0]
Retrieval Augmented Generation (RAG) has emerged as the de facto industry standard for user-facing NLP applications.
This paper reviews the vulnerabilities of RAG pipelines and outlines the attack surface from data pre-processing to integration with Large Language Models (LLMs).
arXiv Detail & Related papers (2025-05-13T16:39:00Z)
- Towards Trustworthy GUI Agents: A Survey [64.6445117343499]
This survey examines the trustworthiness of GUI agents in five critical dimensions.
We identify major challenges such as vulnerability to adversarial attacks and cascading failure modes in sequential decision-making.
As GUI agents become more widespread, establishing robust safety standards and responsible development practices is essential.
arXiv Detail & Related papers (2025-03-30T13:26:00Z)
- MES-RAG: Bringing Multi-modal, Entity-Storage, and Secure Enhancements to RAG [65.0423152595537]
We propose MES-RAG, which enhances entity-specific query handling and provides accurate, secure, and consistent responses.
MES-RAG introduces proactive security measures that ensure system integrity by applying protections prior to data access.
Experimental results demonstrate that MES-RAG significantly improves both accuracy and recall, highlighting its effectiveness in advancing the security and utility of question-answering.
arXiv Detail & Related papers (2025-03-17T08:09:42Z)
- Privacy-Aware RAG: Secure and Isolated Knowledge Retrieval [7.412110686946628]
This paper proposes an advanced encryption methodology designed to protect RAG systems from unauthorized access and data leakage.
Our approach encrypts both textual content and its corresponding embeddings prior to storage, ensuring that all data remains securely encrypted.
Our findings suggest that integrating advanced encryption techniques into the design and deployment of RAG systems can effectively enhance privacy safeguards.
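As a rough illustration of the encrypt-before-storage idea shared by this paper and the provably secure RAG framework above, the Python sketch below encrypts both a chunk's text and its embedding before they reach the store. The key handling, store layout, and helper names are assumptions for illustration, not the papers' actual scheme.

```python
# Illustrative "encrypt before storage" sketch for RAG; not the papers' actual scheme.
import pickle
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # assumption: in practice the key lives in a KMS, never inline
box = Fernet(key)

def store_chunk(store: dict, doc_id: str, text: str, embedding: list[float]) -> None:
    """Encrypt both the chunk text and its embedding before they reach the store."""
    store[doc_id] = {
        "text": box.encrypt(text.encode("utf-8")),
        "embedding": box.encrypt(pickle.dumps(embedding)),
    }

def load_chunk(store: dict, doc_id: str) -> tuple[str, list[float]]:
    """Decrypt only at query time, so nothing rests in plaintext."""
    rec = store[doc_id]
    text = box.decrypt(rec["text"]).decode("utf-8")
    embedding = pickle.loads(box.decrypt(rec["embedding"]))
    return text, embedding
```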
arXiv Detail & Related papers (2025-03-17T07:45:05Z)
- Formal Verification of PLCs as a Service: A CERN-GSI Safety-Critical Case Study (extended version) [0.0]
This paper introduces a formal verification service for PLC (programmable logic controller) programs compliant with functional safety standards.
It offers a cost-effective solution to meet the rising demands for formal verification processes.
arXiv Detail & Related papers (2025-02-26T14:08:58Z)
- Dynamic safety cases for frontier AI [0.7538606213726908]
This paper proposes a Dynamic Safety Case Management System (DSCMS) to support both the initial creation of a safety case and its systematic, semi-automated revision over time.
We demonstrate this approach on a safety case template for offensive cyber capabilities and suggest ways it can be integrated into governance structures for safety-critical decision-making.
arXiv Detail & Related papers (2024-12-23T14:43:41Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
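For background, a control barrier function (CBF) safety filter is typically posed as a small quadratic program that minimally perturbs a nominal control input. The sketch below is a generic textbook formulation under that assumption, not this paper's model-uncertainty-aware reformulation; all function and variable names are illustrative.

```python
# Generic CBF quadratic-program safety filter; a textbook sketch, not the paper's method.
import cvxpy as cp
import numpy as np

def cbf_safety_filter(u_nom: np.ndarray, h: float, lf_h: float, lg_h: np.ndarray,
                      alpha: float = 1.0) -> np.ndarray:
    """Minimally modify u_nom so the CBF condition lf_h + lg_h @ u >= -alpha * h holds."""
    u = cp.Variable(u_nom.shape[0])
    problem = cp.Problem(
        cp.Minimize(cp.sum_squares(u - u_nom)),  # stay close to the nominal input
        [lf_h + lg_h @ u >= -alpha * h],         # enforce forward invariance of {h >= 0}
    )
    problem.solve()
    return u.value
```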
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Safe RAN control: A Symbolic Reinforcement Learning Approach [62.997667081978825]
We present a Symbolic Reinforcement Learning (SRL) based architecture for safety control of Radio Access Network (RAN) applications.
We provide a purely automated procedure in which a user can specify high-level logical safety specifications for a given cellular network topology.
We introduce a user interface (UI) developed to help users set intent specifications for the system and inspect the differences in agent-proposed actions.
arXiv Detail & Related papers (2021-06-03T16:45:40Z)
- SoK: A Modularized Approach to Study the Security of Automatic Speech Recognition Systems [13.553395767144284]
We present our systematization of knowledge for ASR security and provide a comprehensive taxonomy for existing work based on a modularized workflow.
We align the research in this domain with research on the security of Image Recognition Systems (IRS), which has been extensively studied.
Their similarities allow us to systematically study existing literature in ASR security based on the spectrum of attacks and defense solutions proposed for IRS.
In contrast, their differences, especially the complexity of ASR compared with IRS, help us learn unique challenges and opportunities in ASR security.
arXiv Detail & Related papers (2021-03-19T06:24:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.