Position: How Regulation Will Change Software Security Research
- URL: http://arxiv.org/abs/2406.04152v1
- Date: Thu, 6 Jun 2024 15:16:44 GMT
- Title: Position: How Regulation Will Change Software Security Research
- Authors: Steven Arzt, Linda Schreiber, Dominik Appelt,
- Abstract summary: We argue that software engineering research needs to provide better tools and support that help industry comply with the new standards.
We argue for a stronger cooperation between legal scholars and computer scientists.
- Score: 3.8165295526908243
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Software security has been an important research topic over the years. The community has proposed processes and tools for secure software development and security analysis. However, a significant number of vulnerabilities remain in real-world software-driven systems and products. To alleviate this problem, legislation is being established to oblige manufacturers, for example, to comply with essential security requirements and to establish appropriate development practices. We argue that software engineering research needs to provide better tools and support that help industry comply with the new standards while retaining efficient processes. We argue for a stronger cooperation between legal scholars and computer scientists, and for bridging the gap between higher-level regulation and code-level engineering.
Related papers
- Continuous risk assessment in secure DevOps [0.24475591916185502]
We argue how secure DevOps could profit from engaging with risk-related activities within organisations.
We focus on combining Risk Assessment (RA), particularly Threat Modelling (TM), and on applying security considerations early in the software life-cycle.
arXiv Detail & Related papers (2024-09-05T10:42:27Z) - Security Challenges of Complex Space Applications: An Empirical Study [0.0]
I investigate the security challenges of the development and management of complex space applications.
I discuss the four most critical security challenges identified by the interviewed experts: verification of software artifacts, verification of the deployed application, single point of security failure, and data tampering by trusted stakeholders.
I propose future research of new DevSecOps strategies, practices, and tools which would enable better methods of software integrity verification in the space and defense industries.
arXiv Detail & Related papers (2024-08-15T10:02:46Z) - Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs).
The iterative nature of agents, which allows for continuous learning and adaptation, can help surpass common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, so that they become better aligned with the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z) - An Industry Interview Study of Software Signing for Supply Chain Security [5.433194344896805]
Many cybersecurity frameworks, standards, and regulations recommend the use of software signing.
Recent surveys have found that the adoption rate and quality of software signatures are low.
We interviewed 18 high-ranking industry practitioners across 13 organizations.
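As a purely illustrative aside (not drawn from the interview study itself), the kind of signing workflow such frameworks recommend can be sketched with an Ed25519 keypair; the use of Python's `cryptography` package and the artifact name below are assumptions made for the example.

```python
# Illustrative sketch only: sign and verify a release artifact with Ed25519.
# Uses the third-party "cryptography" package; the artifact is a placeholder.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()      # in practice: load from a keystore/HSM
public_key = private_key.public_key()           # distributed to consumers of the artifact

artifact = b"contents of release-1.2.3.tar.gz"  # placeholder for the real file bytes
signature = private_key.sign(artifact)          # detached signature shipped alongside

try:
    public_key.verify(signature, artifact)      # raises InvalidSignature on tampering
    print("signature valid")
except InvalidSignature:
    print("artifact was modified after signing")
```

In practice the private key would never be generated ad hoc like this; it would live in a protected keystore, and the detached signature would be published next to the release artifact.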
arXiv Detail & Related papers (2024-06-12T13:30:53Z) - SoK: A Defense-Oriented Evaluation of Software Supply Chain Security [3.165193382160046]
We argue that the next stage of software supply chain security research and development will benefit greatly from a defense-oriented approach.
This paper introduces the AStRA model, a framework for representing fundamental software supply chain elements and their causal relationships.
arXiv Detail & Related papers (2024-05-23T18:53:48Z) - Software Repositories and Machine Learning Research in Cyber Security [0.0]
The integration of robust cyber security defenses has become essential across all phases of software development.
Attempts have been made to leverage topic modeling and machine learning to detect early-stage vulnerabilities in the software requirements process.
arXiv Detail & Related papers (2023-11-01T17:46:07Z) - Embedded Software Development with Digital Twins: Specific Requirements
for Small and Medium-Sized Enterprises [55.57032418885258]
Digital twins have the potential for cost-effective software development and maintenance strategies.
We interviewed SMEs about their current development processes.
First results show that real-time requirements prevent, to date, a Software-in-the-Loop development approach.
arXiv Detail & Related papers (2023-09-17T08:56:36Z) - Using Machine Learning To Identify Software Weaknesses From Software
Requirement Specifications [49.1574468325115]
This research focuses on finding an efficient machine learning algorithm to identify software weaknesses from requirement specifications.
Keywords extracted using latent semantic analysis help map the CWE categories to PROMISE_exp. Naive Bayes, support vector machine (SVM), decision trees, neural network, and convolutional neural network (CNN) algorithms were tested.
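A rough sketch of that style of pipeline (not the paper's code; the requirement texts, CWE labels, and model choice below are invented stand-ins, and the PROMISE_exp data is not used) could chain TF-IDF, truncated SVD as the latent semantic analysis step, and a Naive Bayes classifier in scikit-learn:

```python
# Illustrative sketch: classify requirement sentences into CWE-like labels.
# Toy data only; the actual study uses the PROMISE_exp dataset and tuned models.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD   # latent semantic analysis step
from sklearn.naive_bayes import GaussianNB

requirements = [
    "The system shall accept user-supplied file names for report export.",
    "Passwords shall be stored and compared for login.",
    "The service shall build SQL queries from search form input.",
    "Session tokens shall remain valid until the browser is closed.",
]
labels = ["CWE-22", "CWE-256", "CWE-89", "CWE-613"]  # hypothetical mapping

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    TruncatedSVD(n_components=2),   # tiny because the toy corpus is tiny
    GaussianNB(),                   # stand-in for the Naive Bayes variant used
)
model.fit(requirements, labels)
print(model.predict(["Search input shall be concatenated into a database query."]))
```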
arXiv Detail & Related papers (2023-08-10T13:19:10Z) - Leveraging Traceability to Integrate Safety Analysis Artifacts into the
Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
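A minimal sketch of the underlying idea, assuming a simple mapping from code artifacts to safety-analysis elements (the file and hazard names below are invented for illustration, not taken from the paper), might look like this:

```python
# Illustrative sketch: trace links between code artifacts and safety claims,
# used to flag which parts of a safety assurance case a code change touches.
from collections import defaultdict

# artifact -> safety-analysis elements (e.g., hazards / SAC claims) it supports
trace_links = {
    "src/brake_controller.c": {"H-01: unintended acceleration", "SAC-claim-3"},
    "src/sensor_fusion.c":    {"H-02: stale sensor data", "SAC-claim-3"},
    "src/logging.c":          set(),   # no safety relevance recorded
}

def impacted_safety_elements(changed_files):
    """Return safety elements whose evidence must be re-reviewed after a change."""
    impacted = defaultdict(set)
    for path in changed_files:
        for element in trace_links.get(path, set()):
            impacted[element].add(path)
    return impacted

for element, files in impacted_safety_elements(["src/sensor_fusion.c"]).items():
    print(f"re-assess {element} (touched by {sorted(files)})")
```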
arXiv Detail & Related papers (2023-07-14T16:03:27Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable
Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.