An Exploratory Study on the Engineering of Security Features
- URL: http://arxiv.org/abs/2501.11546v2
- Date: Wed, 12 Feb 2025 17:25:47 GMT
- Title: An Exploratory Study on the Engineering of Security Features
- Authors: Kevin Hermann, Sven Peldszus, Jan-Philipp Steghöfer, Thorsten Berger,
- Abstract summary: We study how security features of software systems are selected and engineered in practice.
Based on the empirical data gathered, we provide insights into engineering practices.
- Score: 7.588468985212172
- Abstract: Software security is of utmost importance for most software systems. Developers must systematically select, plan, design, implement, and especially, maintain and evolve security features -- functionalities to mitigate attacks or protect personal data such as cryptography or access control -- to ensure the security of their software. Although security features are usually available in libraries, integrating security features requires writing and maintaining additional security-critical code. While there have been studies on the use of such libraries, surprisingly little is known about how developers engineer security features, how they select what security features to implement and which ones may require custom implementation, and the implications for maintenance. As a result, we currently rely on assumptions that are largely based on common sense or individual examples. However, to provide them with effective solutions, researchers need hard empirical data to understand what practitioners need and how they view security -- data that we currently lack. To fill this gap, we contribute an exploratory study with 26 knowledgeable industrial participants. We study how security features of software systems are selected and engineered in practice, what their code-level characteristics are, and what challenges practitioners face. Based on the empirical data gathered, we provide insights into engineering practices and validate four common assumptions.
Related papers
- Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks.
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z) - A Taxonomy of Functional Security Features and How They Can Be Located [12.003504134747951]
We present a study of security features in the literature and their coverage in popular security frameworks.
We contribute a taxonomy of 68 functional implementation-level security features including a mapping to widely used security standards.
arXiv Detail & Related papers (2025-01-08T12:17:30Z) - The Good, the Bad, and the (Un)Usable: A Rapid Literature Review on Privacy as Code [4.479352653343731]
Privacy and security are central to the design of information systems endowed with sound data protection and cyber resilience capabilities.
Developers often struggle to incorporate these properties into software projects as they either lack proper cybersecurity training or do not consider them a priority.
arXiv Detail & Related papers (2024-12-21T15:30:17Z) - New Emerged Security and Privacy of Pre-trained Model: a Survey and Outlook [54.24701201956833]
Security and privacy issues have undermined users' confidence in pre-trained models.
Current literature lacks a clear taxonomy of emerging attacks and defenses for pre-trained models.
The survey proposes a taxonomy that categorizes attacks and defenses into No-Change, Input-Change, and Model-Change approaches.
arXiv Detail & Related papers (2024-11-12T10:15:33Z) - A Developer-Centric Study Exploring Mobile Application Security Practices and Challenges [10.342268145364242]
This study explores the common practices and challenges that developers face in securing their apps.
Our findings show that developers place high importance on security, frequently implementing features such as authentication and secure storage.
We envision our findings leading to improved security practices, better-designed tools and resources, and more effective training programs.
arXiv Detail & Related papers (2024-08-16T22:03:06Z) - A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The Controller Area Network (CAN) bus leaves in-vehicle communications inherently insecure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
arXiv Detail & Related papers (2024-01-19T14:52:04Z) - The Last Decade in Review: Tracing the Evolution of Safety Assurance
Cases through a Comprehensive Bibliometric Analysis [7.431812376079826]
Safety assurance is of paramount importance across various domains, including automotive, aerospace, and nuclear energy.
The use of safety assurance cases allows for verifying the correctness of the created systems' capabilities, preventing system failure.
arXiv Detail & Related papers (2023-11-13T17:34:23Z) - Evaluating Model-free Reinforcement Learning toward Safety-critical
Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.